Sunday, July 29, 2018

iptables and Traffic between 2 Host-Only interfaces on KVM using forward mode='open'




Libvirt allows the forward modes 'nat' and 'route' to be used when defining virtual networks. In this example, the virtual network named 'nat' has a forward mode of 'nat' and is the network through which the VMs reach the internet.

Libvirt creates the iptables firewall rules when the networks are defined. Two normal host-only networks work fine in the sense that each allows the virtual machines attached to it to communicate among themselves within that host-only network.

However, one host-only network cannot talk to the other host-only networks, and vice versa.

In addition to the forward modes above, libvirt provides one more: forward mode 'open'. When a network is defined with the forward mode 'open', as in the examples below, libvirt does not install its iptables blocking rules for that network, and you can control it with your own set of iptables rules.
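As a minimal sketch of how such a network can be created (the XML here mirrors the 'net20' definition dumped further below, and the temporary file path /tmp/net20.xml is just an example):

cat << EOF > /tmp/net20.xml
<network>
  <name>net20</name>
  <forward mode='open'/>
  <bridge name='br20' stp='on' delay='0'/>
  <ip address='172.20.0.1' netmask='255.255.0.0'>
    <dhcp>
      <range start='172.20.0.2' end='172.20.255.222'/>
    </dhcp>
  </ip>
</network>
EOF
virsh net-define /tmp/net20.xml
virsh net-autostart net20
virsh net-start net20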


The networks configured on the Hypervisor

virsh net-list

 Name                 State      Autostart     Persistent
----------------------------------------------------------
 nat                  active     yes           yes
 net20                active     yes           yes
 net21                active     yes           yes


Details of each network

The NAT network

NOTE: this network is connected to the 'eno49' interface of the KVM Hypervisor host. 'eno49' is the only physical network interface on the Hypervisor, and it has access to the internet via a proxy.


When this network is attached to a VM and the VM has an IP in the 172.16.0.0/16 subnet with 172.16.0.1 as its gateway, the outgoing traffic from the VM to the internet is NATed.


[root@win2k12r2 ~]# virsh net-dumpxml nat
<network connections='7'>
  <name>nat</name>
  <uuid>04640e45-0095-4877-a9bc-31970f8aa9d6</uuid>
  <forward dev='eno49' mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
    <interface dev='eno49'/>
  </forward>
  <bridge name='virbr2' stp='on' delay='0'/>
  <mac address='52:54:00:0a:89:16'/>
  <domain name='nat'/>
  <ip address='172.16.0.1' netmask='255.255.0.0'>
  </ip>
</network>

The remaining 2 networks used here, 'net20' and 'net21', are host-only networks but have the forward mode set to 'open'.

net20 has the subnet 172.20.0.0/16
net21 has the subnet 172.21.0.0/16


The XML definitions of these networks are as follows.

net20

[root@win2k12r2 ~]# virsh net-dumpxml net20
<network connections='2'>
  <name>net20</name>
  <uuid>e9579155-77cf-4823-a2a1-b7605b8d332c</uuid>
  <forward mode='open'/>
  <bridge name='br20' stp='on' delay='0'/>
  <mac address='52:54:00:0d:8c:95'/>
  <ip address='172.20.0.1' netmask='255.255.0.0'>
    <dhcp>
      <range start='172.20.0.2' end='172.20.255.222'/>
    </dhcp>
  </ip>
</network>

[root@win2k12r2 ~]#

net21

[root@win2k12r2 ~]# virsh net-dumpxml net21
<network connections='2'>
  <name>net21</name>
  <uuid>0dc5556c-1082-40da-b746-034c0307b351</uuid>
  <forward mode='open'/>
  <bridge name='br21' stp='on' delay='0'/>
  <mac address='52:54:00:f4:f8:f5'/>
  <ip address='172.21.0.1' netmask='255.255.0.0'>
    <dhcp>
      <range start='172.21.0.2' end='172.21.255.254'/>
    </dhcp>
  </ip>
</network>

[root@win2k12r2 ~]#

Deploy three virtual machines, server1, server2 and server3, with the following connectivity:

server1 - only to network 'net20' and IP of 172.20.0.11/16

server2 - only to network 'net21' and IP of 172.21.0.21/16

server3 - 3 networks - nat, net20 and net21, such that on server3:

eth0 - 'nat' network - 172.16.254.222/16
eth1 - 'net20' network - 172.20.0.31/16
eth2 - 'net21' network - 172.21.0.31/16

It is very important to note that server3 has the KVM 'nat' network attached. server3 also has its default gateway set to 172.16.0.1, which is the NAT network interface on the KVM host. This gives server3 default access to the internet and to name lookups.


Because of how the 'nat' network functions on the KVM host, any VM attached to that network, with an IP in its subnet and the KVM NAT interface as its gateway, gets default access to the internet.

Here server1 and server2 are not connected to the KVM 'nat' network, hence they do not have default access to the internet. They are connected to the KVM networks 'net20' and 'net21' respectively, which are host-only networks.

iptables on server3 will be used to allow traffic flow between server1 and server2, and also to give these virtual machines access to the internet via server3.

In my case all these virtual machines run CentOS 7.x.

This is how the networks associated with the virtual machines look from the KVM Hypervisor.

[root@win2k12r2 ]# for virtualmachine in server{1..3}; do echo "Virtual machine $virtualmachine"; virsh domiflist $virtualmachine ; done

Virtual machine server1
Interface  Type       Source     Model       MAC
-------------------------------------------------------
eth0       network    net20      virtio      52:54:00:41:22:e0


Virtual machine server2
Interface  Type       Source     Model       MAC
-------------------------------------------------------
vnet5      network    net21      rtl8139     52:54:00:08:d7:b3

Virtual machine server3
Interface  Type       Source     Model       MAC
-------------------------------------------------------
vnet4      network    nat        virtio      52:54:00:c5:ad:e5
eth1       network    net20      virtio      52:54:00:fd:dd:88
eth2       network    net21      virtio      52:54:00:b5:75:6c

Note: In this example iptables and firewalld are disabled on server1 and server2. On server3, which will be doing the routing and NATing, iptables is installed. 'systemctl' is used on server3 to start and enable 'iptables.service', and to stop and disable 'firewalld' if it is already running.

This is how the setup is 



On the server3 

IP address configuration of the interfaces 

[root@server3 ~]# ip -4 a s eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 172.16.254.222/16 brd 172.16.255.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
[root@server3 ~]#
[root@server3 ~]# ip -4 a s eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 172.20.0.31/16 brd 172.20.255.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
[root@server3 ~]#
[root@server3 ~]# ip -4 a s eth2
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 172.21.0.31/16 brd 172.21.255.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever
[root@server3 ~]#

See the routing tables on the server server3

[root@server3 ~]# ip r s
default via 172.16.0.1 dev eth0 proto static metric 103
172.16.0.0/16 dev eth0 proto kernel scope link src 172.16.254.222 metric 103
172.20.0.0/16 dev eth1 proto kernel scope link src 172.20.0.31 metric 101
172.21.0.0/16 dev eth2 proto kernel scope link src 172.21.0.31 metric 102

The routing tables are self-explanatory.


Enable ipv4 forwarding on server3

echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p

Confirm forwarding is enabled 

sysctl -a | grep net.ipv4.ip_forward 
net.ipv4.ip_forward = 1

Install and initially configure iptables with no rules

Install iptables; on CentOS 7 the 'iptables.service' unit is provided by the 'iptables-services' package. After that, start and enable iptables, and stop and disable firewalld if it is already running.

yum -y install iptables iptables-services
systemctl start iptables.service
systemctl enable iptables.service
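If firewalld is found running, a sketch of the corresponding commands to stop and disable it, as mentioned above:

systemctl stop firewalld
systemctl disable firewalld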

Flush the iptables rules to start with, so that all the connections needed for the setup keep working.

iptables -F 
iptables-save > /etc/sysconfig/iptables 
systemctl restart iptables 

On server1 and server2, ensure that neither iptables nor firewalld is running.

To make things simpler, we do not install or enable iptables and firewalld on server1 and server2:

on server1 and server2: yum -y remove iptables firewalld 

Set the default GWs on server1 and server2

The commands below add the gateways only temporarily. To make them persist across reboots, ensure that the /etc/sysconfig/network-scripts/ifcfg-ethX file on each server has a GATEWAY=X.X.X.X entry.
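For example, on server1 the interface file could look roughly like this (a sketch; the exact contents depend on how the interface was originally configured, only the GATEWAY line is the addition being described here):

# /etc/sysconfig/network-scripts/ifcfg-eth0 on server1 (illustrative)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.20.0.11
PREFIX=16
GATEWAY=172.20.0.31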

On Server1 add the default GW as the 172.20.0.31 (this is the IP of the eth1 interface on the server3)

ip route add 0.0.0.0/0 via 172.20.0.31 dev eth0


On Server2 add the default GW as 172.21.0.31 (this is the IP of the eth2 interface on the server3)

ip route add 0.0.0.0/0 via 172.21.0.31 dev eth0

After these settings, all outgoing traffic from these servers will reach the corresponding subnet interface on server3.
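A quick sanity check at this point (the iptables rules on server3 are still flushed, so ICMP passes): from server1, ping the gateway interface on server3.

ping -c 3 172.20.0.31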

IPTABLES Rules on the server3 

We will put the deny rules last; first we add the allow rules.

Allow incoming SSH, HTTP, HTTPS and MySQL, with the corresponding outgoing rules, on server3 from the 172.16.0.0/16, 172.20.0.0/16 and 172.21.0.0/16 subnets. (Please note that 172.16.0.0/16 is the NATed subnet.)

These 3 rules allow incoming SSH, HTTP, HTTPS, MySQL and port 8080 on the interfaces of server3 from the corresponding subnets:

iptables -A INPUT -p tcp -s 172.16.0.0/16 -i eth0 --match multiport --destination-port 22,80,443,8080,3306 -j ACCEPT 

iptables -A INPUT -p tcp -s 172.20.0.0/16 -i eth1 --match multiport --destination-port 22,80,443,8080,3306 -j ACCEPT 

iptables -A INPUT -p tcp -s 172.21.0.0/16 -i eth2 --match multiport --destination-port 22,80,443,8080,3306 -j ACCEPT 
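The rule listings further below also show UDP port 53 (DNS) being accepted on INPUT from both host-only subnets; a sketch of commands that would produce those entries (the interface match is added here by analogy with the TCP rules above):

iptables -A INPUT -p udp -s 172.20.0.0/16 -i eth1 --dport 53 -j ACCEPT
iptables -A INPUT -p udp -s 172.21.0.0/16 -i eth2 --dport 53 -j ACCEPT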

Allow all outgoing traffic that originates at server3 itself

iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED  -j ACCEPT

iptables -A OUTPUT -m conntrack  --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT


Allow the incoming PING traffic on all the interfaces on server3 and the corresponding reply for the same

iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT 
iptables -A OUTPUT -p icmp --icmp-type echo-reply -j ACCEPT 


Now the iptables rules for forwarding traffic between eth1 and eth2 on server3.


Here is how this looks schematically.


Here we allow server3 to forward traffic from the 172.20.0.0/16 subnet (eth1 interface, 172.20.0.31) to the 172.21.0.0/16 subnet (eth2 interface on server3, 172.21.0.31) and vice versa.

Forward HTTP, HTTPS and SSH (TCP port 22) traffic from the server1 subnet (172.20.0.0/16) to the server2 subnet (172.21.0.0/16, where the HTTPD web service is running):

iptables -A FORWARD -p tcp --match multiport --destination-port 22,80,443 -i eth1 -s 172.20.0.0/16 -d 172.21.0.0/16 -j ACCEPT


Forward packets from the server2 subnet (172.21.0.0/16) to SSH (port 22) and MySQL (port 3306/TCP) in the server1 subnet (172.20.0.0/16):

iptables -A FORWARD -p tcp --match multiport --destination-port 22,3306 -i eth2 -s 172.21.0.0/16 -d 172.20.0.0/16 -j ACCEPT

Now we put the block rules in the OUTPUT as well as the FORWARD chain. (Keep in mind that rules appended later with -A land after these REJECTs; the FORWARD accept rules added further below for masquerading must therefore be inserted before the FORWARD REJECT, e.g. with -I, for them to take effect.)

iptables -A OUTPUT  -s 0.0.0.0/0 -j REJECT
iptables -A FORWARD -j REJECT

This is how the IPTABLES rules look on server3

[root@server3 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     icmp --  anywhere             anywhere             icmp echo-reply
ACCEPT     tcp  --  172.16.0.0/16        anywhere             multiport dports ssh,https,webcache,mysql,http
ACCEPT     tcp  --  172.20.0.0/16        anywhere             multiport dports ssh,https,webcache,mysql,http
ACCEPT     tcp  --  172.21.0.0/16        anywhere             multiport dports ssh,https,webcache,mysql,http
ACCEPT     udp  --  172.20.0.0/16        anywhere             udp dpt:domain
ACCEPT     udp  --  172.21.0.0/16        anywhere             udp dpt:domain

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     tcp  --  172.20.0.0/16        172.21.0.0/16        multiport dports ssh,https,http
ACCEPT     tcp  --  172.21.0.0/16        172.20.0.0/16        multiport dports ssh,mysql

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere             ctstate NEW,RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere             anywhere             icmp echo-request
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable

Chain L (0 references)
target     prot opt source               destination
[root@server3 ~]#


Internet and DNS Traffic from server1 and server2 via server3


For server1 and server2 to reach the internet and fetch YUM packages from there, the following is needed:

1) server1 and server2 must first be able to reach the DNS server to resolve the names of the CentOS mirrors. The DNS server in this case is 172.16.0.1.

2) Once name resolution works, server1 and server2 must be able to reach those mirrors. Internet access for all the networks is via the internet proxy server, which listens on port 8080.

3) server3 has to perform the masquerading for the outgoing DNS and YUM fetch traffic of server1 and server2.

Here is how the above is achieved by setting the rules on server3.

This is how this looks 




This is done by masquerading on eth0 of server3, the interface that is attached to the KVM 'nat' network. All requests to the internet from server2 (arriving on eth2 of server3) and server1 (arriving on eth1 of server3) are sent out from server3 through eth0. Once this traffic leaves server3, it appears to come from the IP address of server3's eth0 interface.



For the onward traffic, set up masquerading.

Before masquerading, the traffic must be allowed to be forwarded internally to eth0 on server3. NEW connections going out via eth0 have to be forwarded, and the returning traffic of those connections (ESTABLISHED and RELATED) has to be allowed back in.

iptables -A FORWARD -o eth0 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT


Set up masquerading for the intended traffic going out of eth0 on server3:

iptables -t nat -A POSTROUTING -s 172.20.0.0/16 -o eth0 -j MASQUERADE
iptables -t nat -A POSTROUTING -s 172.21.0.0/16 -o eth0 -j MASQUERADE

This is how the iptables rules look now.

The Filter TABLE 

[root@server3 ~]# iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination
1    ACCEPT     icmp --  anywhere             anywhere             icmp echo-reply
2    ACCEPT     tcp  --  172.16.0.0/16        anywhere             multiport dports ssh,https,webcache,mysql,http
3    ACCEPT     tcp  --  172.20.0.0/16        anywhere             multiport dports ssh,https,webcache,mysql,http
4    ACCEPT     tcp  --  172.21.0.0/16        anywhere             multiport dports ssh,https,webcache,mysql,http
5    ACCEPT     udp  --  172.20.0.0/16        anywhere             udp dpt:domain
6    ACCEPT     udp  --  172.21.0.0/16        anywhere             udp dpt:domain
Chain FORWARD (policy ACCEPT)
num  target     prot opt source               destination
1    ACCEPT     tcp  --  172.20.0.0/16        172.21.0.0/16        multiport dports ssh,https,http
2    ACCEPT     tcp  --  172.21.0.0/16        172.20.0.0/16        multiport dports ssh,mysql
3    ACCEPT     all  --  anywhere             anywhere             state NEW,RELATED,ESTABLISHED
4    ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination
1    ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
2    ACCEPT     all  --  anywhere             anywhere             ctstate NEW,RELATED,ESTABLISHED
3    ACCEPT     icmp --  anywhere             anywhere             icmp echo-request
4    REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
Chain L (0 references)
num  target     prot opt source               destination



Here is the NAT Table

[root@server3 ~]# iptables -t nat -L --line-numbers
Chain PREROUTING (policy ACCEPT)
num  target     prot opt source               destination

Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
num  target     prot opt source               destination
1    MASQUERADE  all  --  172.20.0.0/16        anywhere
2    MASQUERADE  all  --  172.21.0.0/16        anywhere
[root@server3 ~]#
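With both the filter and nat tables now populated, persist the full ruleset so it survives a restart of iptables.service, the same way the initial empty ruleset was saved:

iptables-save > /etc/sysconfig/iptables
systemctl restart iptables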


Install the services on the servers now 


On server1 install MariaDB

MariaDB: configured to run on the default port 3306 and bound to address 172.20.0.11. (On CentOS 7 the server is provided by the 'mariadb-server' package; the 'mariadb' package alone is only the client.)


yum -y install mariadb-server mariadb

This is how the /etc/my.cnf looks 

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
port=3306
bind-address=172.20.0.11
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd
[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d

Start and Enable mariadb service 

systemctl restart mariadb 
systemctl enable mariadb 

Ensure that mariadb is up and running and bound to the IP address of the server1

[root@server1 network-scripts]# systemctl status mariadb
● mariadb.service - MariaDB database server
   Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-07-25 01:11:41 EDT; 2 days ago
 Main PID: 4071 (mysqld_safe)
   CGroup: /system.slice/mariadb.service
           ├─4071 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
           └─4260 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/my...

Jul 25 01:11:39 server1.sujitnet11.net systemd[1]: Starting MariaDB database server...
Jul 25 01:11:39 server1.sujitnet11.net mariadb-prepare-db-dir[4041]: Database MariaDB is probably initi....
Jul 25 01:11:39 server1.sujitnet11.net mariadb-prepare-db-dir[4041]: If this is not the case, make sure....
Jul 25 01:11:39 server1.sujitnet11.net mysqld_safe[4071]: 180725 01:11:39 mysqld_safe Logging to '/var...'.
Jul 25 01:11:39 server1.sujitnet11.net mysqld_safe[4071]: 180725 01:11:39 mysqld_safe Starting mysqld ...ql
Jul 25 01:11:41 server1.sujitnet11.net systemd[1]: Started MariaDB database server.
Hint: Some lines were ellipsized, use -l to show in full.
[root@server1 network-scripts]#



[root@server1 network-scripts]# ss -tunlp | grep mysql
tcp    LISTEN     0      50     172.20.0.11:3306                  *:*                   users:(("mysqld",pid=4260,fd=14))
[root@server1 network-scripts]#

Additionally, you can run 'mysql_secure_installation' and ensure that password-based login is allowed for the users that access the DB.

After that, we create a user called 'root' in MySQL and give it permission to log in locally as well as from any server.

Give the root user all privileges

MariaDB [mysql]> grant all privileges on *.* to 'root'@'localhost' identified by '<PASSWORD HERE>';
MariaDB [mysql]> grant all privileges on *.* to 'root'@'%' identified by '<PASSWORD_HERE>';
MariaDB [mysql]> flush privileges;

MariaDB [mysql]> select Host,User from user;
+------------------------+------+
| Host                   | User |
+------------------------+------+
| %                      | root |
| 127.0.0.1              | root |
| ::1                    | root |
| localhost              | root |
| server1.sujitnet11.net | root |
+------------------------+------+

On server2 install Apache (httpd for CentOS)

yum -y install httpd 

In /etc/httpd/conf/httpd.conf

replace 

Listen 80 

with 

Listen 172.21.0.21:80
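Optionally, check the configuration syntax before starting httpd (apachectl ships with the httpd package):

apachectl configtest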

Start and Enable httpd 

systemctl start httpd 
systemctl enable httpd

Ensure the httpd service is up and running and listening on the default port 80 on the server's interface.


[root@server2 ~]# ss -tunlp | grep 80
tcp    LISTEN     0      128    172.21.0.21:80                    *:*                   users:(("httpd",pid=2953,fd=3),("httpd",pid=2952,fd=3),("httpd",pid=2951,fd=3),("httpd",pid=2858,fd=3),("httpd",pid=2857,fd=3),("httpd",pid=2856,fd=3),("httpd",pid=2855,fd=3),("httpd",pid=2854,fd=3),("httpd",pid=2853,fd=3))
[root@server2 ~]#

[root@server2 ~]# systemctl status httpd
● httpd.service - The Apache HTTP Server
   Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-07-24 05:55:00 EDT; 2 days ago
     Docs: man:httpd(8)
           man:apachectl(8)
 Main PID: 2853 (httpd)
   Status: "Total requests: 9; Current requests/sec: 0; Current traffic:   0 B/sec"
   CGroup: /system.slice/httpd.service
           ├─2853 /usr/sbin/httpd -DFOREGROUND
           ├─2854 /usr/sbin/httpd -DFOREGROUND
           ├─2855 /usr/sbin/httpd -DFOREGROUND
           ├─2856 /usr/sbin/httpd -DFOREGROUND
           ├─2857 /usr/sbin/httpd -DFOREGROUND
           ├─2858 /usr/sbin/httpd -DFOREGROUND
           ├─2951 /usr/sbin/httpd -DFOREGROUND
           ├─2952 /usr/sbin/httpd -DFOREGROUND
           └─2953 /usr/sbin/httpd -DFOREGROUND

Jul 24 05:55:00 server2.sujitnet11.net systemd[1]: Starting The Apache HTTP Server...
Jul 24 05:55:00 server2.sujitnet11.net systemd[1]: Started The Apache HTTP Server.
[root@server2 ~]#


Try accessing each service from the other server.

from server1: elinks http://172.21.0.21

and 

from server2: mysql -h 172.20.0.11 -u root -p -P 3306 


Note: you might need to ensure that 'elinks' and the MariaDB client are installed on the servers from which you run these tests.
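A sketch of installing those client tools on CentOS 7 (the MariaDB command-line client comes from the 'mariadb' package):

yum -y install elinks mariadb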

Sunday, July 1, 2018

Kubernetes 1.10.0 multi-master installation with 3 Masters and 3 Slaves on CentOS7 with SSL - Configuring the Masters to not run PODs


If you want Kubernetes master isolation: master isolation prevents Kubernetes from scheduling PODs on the masters, even when the masters and the worker nodes are configured identically and could in principle run PODs, and you intend the masters not to run any workload PODs.

The command below does the opposite: it removes the taint so that PODs can be scheduled on the masters as well.

kubectl taint nodes --all node-role.kubernetes.io/master-
It should return the following.
node "<your-hostname>" untainted
Confirm that you now have a node in your cluster with the following command.
kubectl get nodes
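Conversely, to keep PODs off the masters (which is what this section's title refers to), the standard master taint can be re-applied; a hedged sketch, with the master hostname as a placeholder:

kubectl taint nodes <your-master-hostname> node-role.kubernetes.io/master=:NoSchedule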
Installation of Heapster and Kibana or similar tools

To be done later.

Kubernetes 1.10.0 multi-master installation with 3 Master and 3 Slave nodes installation and configuration with CentOS7 and SSL - Installation of kube-dns and testing


Creation of kubernetes kube-dns service


Here the kube-dns.yaml file is edited to set the value of the clusterIP for the DNS service.

clusterIP: 100.65.0.10

kubectl create -f kube-dns.yaml


service "kube-dns" created
serviceaccount "kube-dns" created
configmap "kube-dns" created
deployment.extensions "kube-dns" created



Testing if kube-dns service is working properly
Please note that the jq command is used below; it is not present in a minimal install of CentOS 7 and is available from the EPEL repo.
Installation of JQ
yum -y install epel-release
yum clean all
rm -rf /var/cache/yum
yum -y install jq
yum -y remove epel-release
yum clean all
rm -rf /var/cache/yum
Creation of a BusyBox deployment to test the DNS
kubectl run busybox --image=busybox --command -- sleep 3600
kubectl get pods -l run=busybox

POD_NAME=$(kubectl get pods -l run=busybox -o json | jq .items[0].metadata.name| tr -d \")
echo $POD_NAME
kubectl exec -ti $POD_NAME -- nslookup kubernetes
You can see that the service names are getting resolved here.

Server:    100.65.0.10
Address 1: 100.65.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 100.65.0.1 kubernetes.default.svc.cluster.local

Resolving external names
[root@kuben1 ~]# kubectl exec -ti $POD_NAME -- nslookup google.com
Server:    100.65.0.10
Address 1: 100.65.0.10 kube-dns.kube-system.svc.cluster.local

Name:      google.com
Address 1: 2404:6800:4005:808::200e hkg12s13-in-x0e.1e100.net
Address 2: 216.58.203.46 hkg12s10-in-f46.1e100.net

Kubernetes 1.10.0 multi-master installation with 3 Masters and 3 Slaves installation and Configuration on CentOS7 with SSL - Installation and Configuration of the Kubernetes slave nodes with docker and flanneld


Configuring the Kubernetes Nodes

The Kubernetes slave nodes are kuben{1..3}
Install the kubernetes node binaries on the slaves
for U in kuben{1..3}
do
ssh $U yum -y install kubernetes-node kubernetes-client
done

If not installed previously, install the following RPMs on the worker servers.
yum -y install socat conntrack ipset

Please note that the installation of kubernetes-node installs the above RPMs as the dependencies.

Create the following directories
mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes
Installation and configuration of flanneld and docker on the nodes
Installation and Configuration of docker on the kubernetes nodes

Install docker on the slave nodes. (If you later want containers to also be scheduled by Kubernetes on the master nodes, which is not advisable in a production environment, you can install the docker binaries on the master servers as well.)

Install docker, bridge-utils and Enable docker for autostart.

yum -y install docker bridge-utils
systemctl enable docker

If the systems that will be running docker containers for Kubernetes need a proxy to access the internet to fetch images from Docker Hub or other public repositories, you need to configure docker proxies.

Please note that this is needed *only if* you plan to use the internet repositories and, at the same time, your systems can only access the internet via an HTTP proxy.

In my case the systems had been behind a proxy and so I had to update the proxy settings for the docker service.

rm -rf /etc/systemd/system/docker.service.d/
mkdir -p /etc/systemd/system/docker.service.d/
cat << EOF > /etc/systemd/system/docker.service.d/override.conf
[Service]
Environment="http_proxy=http://myownproxy.mydomain.net:8080"
Environment="https_proxy=http://myownproxy.mydomain.net:8080"
EOF

systemctl daemon-reload
systemctl restart docker
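A quick way to verify that the proxy drop-in was picked up after the daemon-reload (the Environment property should list the proxy variables):

systemctl show docker --property=Environment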

Installation of flanneld
            yum -y install flannel
Flanneld as a service
The systemd unit file for the flanneld service.
cat << EOF > /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Network fabric for containers
Documentation=https://github.com/coreos/flannel
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
Restart=always
RestartSec=5
ExecStart=/usr/bin/flanneld \\
  --etcd-endpoints=https://172.16.254.221:2379,https://172.16.254.222:2379,https://172.16.254.223:2379 \\
  --logtostderr=true \\
  --ip-masq=true \\
  --subnet-dir=/var/lib/flanneld/networks \\
  --subnet-file=/var/lib/flanneld/subnet.env \\
  --etcd-cafile=/srv/kubernetes/ca.pem \\
  --etcd-certfile=/srv/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/srv/kubernetes/kubernetes-key.pem \\
  --etcd-prefix=/atomic.io/network \\
  --etcd-username=admin

[Install]
WantedBy=multi-user.target
EOF
Start the flanneld service
systemctl daemon-reload
systemctl restart flanneld
systemctl status flanneld
ip a s


Integration of docker with flanneld
Docker has to use the subnet provided by flanneld as the POD network; hence this integration is required.
On the nodes, ensure that masquerading is set to false in the docker service options.
sed -i -r -e "s|^OPTIONS=.*|OPTIONS='--log-driver=journald --signature-verification=false --iptables=false --ip-masq=false --ip-forward=true'|g" /etc/sysconfig/docker
Tune the docker systemd service file to use the flanneld file /var/lib/flanneld/subnet.env as an environment file, so that the flanneld subnet variables are read from there when the docker systemd service starts on the nodes.

Update the docker drop-in to point at the environment file under /var/lib/flanneld, where flannel keeps the subnet information for the node.

mkdir -p /usr/lib/systemd/system/docker.service.d/
echo '[Service]
EnvironmentFile=-/var/lib/flanneld/subnet.env' > /usr/lib/systemd/system/docker.service.d/flannel.conf
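For reference, the subnet.env file written by flanneld typically contains variables along these lines (the values shown are illustrative, based on the 24.24.0.0/16 cluster network used in this setup); FLANNEL_SUBNET and FLANNEL_MTU are the ones consumed by the docker unit below:

# /var/lib/flanneld/subnet.env (illustrative)
FLANNEL_NETWORK=24.24.0.0/16
FLANNEL_SUBNET=24.24.63.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=true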

Update the docker systemd unit file to use the environment files at /var/lib/flanneld/*

echo '[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target rhel-push-plugin.socket registries.service
After=network-online.target docker.socket flanneld.service
Wants=network-online.target docker-storage-setup.service
Requires=docker-cleanup.timer
[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/run/containers/registries.conf
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
EnvironmentFile=-/var/lib/flanneld/subnet.env
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=cgroupfs \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          --init-path=/usr/libexec/docker/docker-init-current \
          --seccomp-profile=/etc/docker/seccomp.json \
          --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY \
          $REGISTRIES
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal
KillMode=process

[Install]
WantedBy=multi-user.target' > /usr/lib/systemd/system/docker.service

Restart the docker service.
systemctl daemon-reload
systemctl restart docker
systemctl status docker
ip a s


Tune the docker service to use cgroupfs as the cgroup driver for docker.

Set the cgroup driver to cgroupfs. This is in accordance with the kubelet service requirement, which on CentOS expects by default the cgroup driver to be cgroupfs instead of systemd.

sed -i -r -e 's|--exec-opt native.cgroupdriver=systemd|--exec-opt native.cgroupdriver=cgroupfs|g' /usr/lib/systemd/system/docker.service
cat /usr/lib/systemd/system/docker.service | grep exec-opt

Restart the docker service.
systemctl daemon-reload
systemctl restart docker
systemctl status docker

Create the directory that the kubelet will use as its manifests directory
mkdir -p /etc/kubernetes/manifests

Configure the Kubelet

POD_CIDR=24.24.0.0/16

cd /srv/kubernetes
{
  sudo cp $(hostname -f)-key.pem $(hostname -f).pem /var/lib/kubelet/
  sudo cp $(hostname -f).kubeconfig /var/lib/kubelet/kubeconfig
  sudo cp ca.pem /var/lib/kubernetes/
}

Create the kubelet-config.yaml configuration file:

POD_CIDR=24.24.0.0/16

cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "100.65.0.10"
podCIDR: "${POD_CIDR}"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/$(hostname -f).pem"
tlsPrivateKeyFile: "/var/lib/kubelet/$(hostname -f)-key.pem"
EOF

Create the kubelet.service systemd unit file:

cat <<EOF | sudo tee /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
#This is with CNI
#ExecStart=/usr/bin/kubelet --config=/var/lib/kubelet/kubelet-config.yaml --kubeconfig=/var/lib/kubelet/kubeconfig --pod-manifest-path=/etc/kubernetes/manifests --image-pull-progress-deadline=2m --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/usr/libexec/cni --cluster-dns=100.65.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/srv/kubernetes/ca.pem --cadvisor-port=0 --cgroup-driver=cgroupfs --register-node=true --v=2
#This is without CNI
ExecStart=/usr/bin/kubelet --config=/var/lib/kubelet/kubelet-config.yaml --kubeconfig=/var/lib/kubelet/kubeconfig --pod-manifest-path=/etc/kubernetes/manifests  --image-pull-progress-deadline=2m  --allow-privileged=true --cluster-dns=100.65.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/srv/kubernetes/ca.pem --cadvisor-port=0 --cgroup-driver=cgroupfs --register-node=true --v=2


Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Restart the kubelet service on all the nodes
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet

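Once the kubelet is running on all three slaves, they should register with the API servers; a quick check from one of the masters (the nodes show up as Ready once docker, flanneld and kubelet are all healthy):

kubectl get nodes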

Configure the Kubernetes Proxy
kube-proxy is also going to run as a systemd service on the kubernetes nodes.
cd /srv/kubernetes/
sudo cp kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
Create the kube-proxy-config.yaml configuration file:

cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "24.24.0.0/16"
EOF

Create the kube-proxy.service systemd unit file:

cat <<EOF | sudo tee /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Start the kube-proxy service and check the status of the same
systemctl daemon-reload
systemctl restart kube-proxy
systemctl status kube-proxy
The Admin Kubernetes Configuration File
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.

Generate a kubeconfig file suitable for authenticating as the admin user:

cd /srv/kubernetes
KUBERNETES_PUBLIC_ADDRESS=172.16.254.201
{

  kubectl config set-cluster k8s-api.sujitnet11.net \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443

  kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem

  kubectl config set-context k8s-api.sujitnet11.net \
    --cluster=k8s-api.sujitnet11.net \
    --user=admin

  kubectl config use-context k8s-api.sujitnet11.net
}


Get the component status
kubectl get componentstatus


[root@kubem1 kubernetes]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
[root@kubem1 kubernetes]#