Friday, June 9, 2017

Ansible Playbook with multiple lists in the variables with multiple with_items calls in the tasks example


The sample playbook has four lists defined in the vars: section.
The first list, named "packages", is the list of packages to be installed.
The second list, named "stopservice", is the list of services to be stopped.
The third list, named "startservice", is the list of services to be started and enabled.
As can be seen in the playbook, each item of a list is referenced as

"{{item.name}}"

and with_items takes the list itself as
"{{packages}}", "{{startservice}}" or "{{stopservice}}"



The fourth list, named "urls", is the list of files to be fetched from a URL and copied to a certain location on the servers.
In each item, "name" is the identifier,
"url" is the URL from which the file can be downloaded, and
"dest" is the location on the server to which the file is copied by the get_url module of Ansible.

The important values passed to the get_url module are "url" and "dest",
and they are different for each item in the list "urls".

The pairs of url and dest are passed to the get_url module as 

get_url:
  url: "{{item.url}}"
  dest: "{{item.dest}}"
with_items: "{{urls}}"

Also note the use of the "force" option of the get_url module to ensure that the target file is overwritten when the contents at the source URL change.

Here is the example playbook


[devops@ansible playbooks]$ cat firstplaybook.yml
---
- name: install packages
  hosts: all
  vars:
    - packages:
      - name: chrony
      - name: bash-completion
      - name: yum-utils
      - name: bind-utils
      - name: net-tools
      - name: ntp
      - name: firewalld
      - name: wget
      - name: curl
      - name: unzip

    - stopservice:
      - name: firewalld

    - startservice:
      - name: chronyd
      - name: ntpd

    - urls:
      - name: hosts
        url: http://ansible.lab.example.com/config/hosts
        dest: /etc/hosts
      - name: nsswitch.conf
        url: http://ansible.lab.example.com/config/nsswitch.conf
        dest: /etc/nsswitch.conf
      - name: profile
        url: http://ansible.lab.example.com/config/profile
        dest: /etc/profile
      - name: resolv.conf
        url: http://ansible.lab.example.com/config/resolv.conf
        dest: /etc/resolv.conf
      - name: selinuxconfig
        url: http://ansible.lab.example.com/config/selinuxconfig
        dest: /etc/selinux/config
      - name: yum.conf
        url: http://ansible.lab.example.com/config/yum.conf
        dest: /etc/yum.conf



  tasks:
    - name: install package
      yum:
        name: "{{item.name}}"
        state: latest
      with_items:
        - "{{packages}}"

    - name: stop the services
      service:
        state: stopped
        enabled: false
        name: "{{item.name}}"
      with_items:
        - "{{stopservice}}"

    - name: start and enable services
      service:
        name: "{{item.name}}"
        state: started
        enabled: true
      with_items:
        - "{{startservice}}"

    - name: copy configuration files from the URLs
      get_url:
        url: "{{item.url}}"
        dest: "{{item.dest}}"
        use_proxy: no
        force: yes
      with_items:
        - "{{urls}}"



[devops@ansible playbooks]$


----------------------------------

Run of the Playbook
----------------------------------


[devops@ansible playbooks]$ ansible-playbook firstplaybook.yml

PLAY [install packages] ********************************************************************************************

TASK [Gathering Facts] *********************************************************************************************
ok: [ansible.lab.example.com]
ok: [server2.lab.example.com]
ok: [server1.lab.example.com]
ok: [server3.lab.example.com]

TASK [install package] *********************************************************************************************
ok: [server1.lab.example.com] => (item={u'name': u'chrony'})
ok: [server2.lab.example.com] => (item={u'name': u'chrony'})
ok: [ansible.lab.example.com] => (item={u'name': u'chrony'})
ok: [server3.lab.example.com] => (item={u'name': u'chrony'})
ok: [server1.lab.example.com] => (item={u'name': u'bash-completion'})
ok: [ansible.lab.example.com] => (item={u'name': u'bash-completion'})
ok: [server2.lab.example.com] => (item={u'name': u'bash-completion'})
ok: [server3.lab.example.com] => (item={u'name': u'bash-completion'})
ok: [server1.lab.example.com] => (item={u'name': u'yum-utils'})
ok: [server2.lab.example.com] => (item={u'name': u'yum-utils'})
ok: [ansible.lab.example.com] => (item={u'name': u'yum-utils'})
ok: [server3.lab.example.com] => (item={u'name': u'yum-utils'})
ok: [server1.lab.example.com] => (item={u'name': u'bind-utils'})
ok: [server2.lab.example.com] => (item={u'name': u'bind-utils'})
ok: [ansible.lab.example.com] => (item={u'name': u'bind-utils'})
ok: [server3.lab.example.com] => (item={u'name': u'bind-utils'})
ok: [server2.lab.example.com] => (item={u'name': u'net-tools'})
ok: [server1.lab.example.com] => (item={u'name': u'net-tools'})
ok: [ansible.lab.example.com] => (item={u'name': u'net-tools'})
ok: [server3.lab.example.com] => (item={u'name': u'net-tools'})
ok: [server1.lab.example.com] => (item={u'name': u'ntp'})
ok: [server2.lab.example.com] => (item={u'name': u'ntp'})
ok: [ansible.lab.example.com] => (item={u'name': u'ntp'})
ok: [ansible.lab.example.com] => (item={u'name': u'firewalld'})
ok: [server3.lab.example.com] => (item={u'name': u'ntp'})
ok: [server1.lab.example.com] => (item={u'name': u'firewalld'})
ok: [server2.lab.example.com] => (item={u'name': u'firewalld'})
ok: [ansible.lab.example.com] => (item={u'name': u'wget'})
ok: [server3.lab.example.com] => (item={u'name': u'firewalld'})
ok: [server1.lab.example.com] => (item={u'name': u'wget'})
ok: [server2.lab.example.com] => (item={u'name': u'wget'})
ok: [ansible.lab.example.com] => (item={u'name': u'curl'})
ok: [server1.lab.example.com] => (item={u'name': u'curl'})
ok: [server3.lab.example.com] => (item={u'name': u'wget'})
ok: [ansible.lab.example.com] => (item={u'name': u'unzip'})
ok: [server2.lab.example.com] => (item={u'name': u'curl'})
ok: [server1.lab.example.com] => (item={u'name': u'unzip'})
ok: [server3.lab.example.com] => (item={u'name': u'curl'})
ok: [server2.lab.example.com] => (item={u'name': u'unzip'})
ok: [server3.lab.example.com] => (item={u'name': u'unzip'})

TASK [stop the services] *******************************************************************************************
ok: [ansible.lab.example.com] => (item={u'name': u'firewalld'})
ok: [server2.lab.example.com] => (item={u'name': u'firewalld'})
ok: [server3.lab.example.com] => (item={u'name': u'firewalld'})
ok: [server1.lab.example.com] => (item={u'name': u'firewalld'})

TASK [start and enable services] ***********************************************************************************
changed: [ansible.lab.example.com] => (item={u'name': u'chronyd'})
changed: [ansible.lab.example.com] => (item={u'name': u'ntpd'})
changed: [server2.lab.example.com] => (item={u'name': u'chronyd'})
changed: [server3.lab.example.com] => (item={u'name': u'chronyd'})
changed: [server1.lab.example.com] => (item={u'name': u'chronyd'})
changed: [server1.lab.example.com] => (item={u'name': u'ntpd'})
changed: [server3.lab.example.com] => (item={u'name': u'ntpd'})
changed: [server2.lab.example.com] => (item={u'name': u'ntpd'})

TASK [copy configuration files from the URLs] **********************************************************************
ok: [ansible.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/hosts', u'dest': u'/etc/hosts', u'name': u'hosts'})
ok: [ansible.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/nsswitch.conf', u'dest': u'/etc/nsswitch.conf', u'name': u'nsswitch.conf'})
ok: [ansible.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/profile', u'dest': u'/etc/profile', u'name': u'profile'})
ok: [server3.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/hosts', u'dest': u'/etc/hosts', u'name': u'hosts'})
ok: [server1.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/hosts', u'dest': u'/etc/hosts', u'name': u'hosts'})
ok: [server2.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/hosts', u'dest': u'/etc/hosts', u'name': u'hosts'})
ok: [ansible.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/resolv.conf', u'dest': u'/etc/resolv.conf', u'name': u'resolv.conf'})
ok: [ansible.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/selinuxconfig', u'dest': u'/etc/selinux/config', u'name': u'selinuxconfig'})
ok: [ansible.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/yum.conf', u'dest': u'/etc/yum.conf', u'name': u'yum.conf'})
ok: [server1.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/nsswitch.conf', u'dest': u'/etc/nsswitch.conf', u'name': u'nsswitch.conf'})
ok: [server3.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/nsswitch.conf', u'dest': u'/etc/nsswitch.conf', u'name': u'nsswitch.conf'})
ok: [server2.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/nsswitch.conf', u'dest': u'/etc/nsswitch.conf', u'name': u'nsswitch.conf'})
changed: [server3.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/profile', u'dest': u'/etc/profile', u'name': u'profile'})
changed: [server2.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/profile', u'dest': u'/etc/profile', u'name': u'profile'})
changed: [server1.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/profile', u'dest': u'/etc/profile', u'name': u'profile'})
ok: [server3.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/resolv.conf', u'dest': u'/etc/resolv.conf', u'name': u'resolv.conf'})
ok: [server1.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/resolv.conf', u'dest': u'/etc/resolv.conf', u'name': u'resolv.conf'})
ok: [server2.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/resolv.conf', u'dest': u'/etc/resolv.conf', u'name': u'resolv.conf'})
ok: [server1.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/selinuxconfig', u'dest': u'/etc/selinux/config', u'name': u'selinuxconfig'})
ok: [server3.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/selinuxconfig', u'dest': u'/etc/selinux/config', u'name': u'selinuxconfig'})
ok: [server2.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/selinuxconfig', u'dest': u'/etc/selinux/config', u'name': u'selinuxconfig'})
ok: [server1.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/yum.conf', u'dest': u'/etc/yum.conf', u'name': u'yum.conf'})
ok: [server2.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/yum.conf', u'dest': u'/etc/yum.conf', u'name': u'yum.conf'})
ok: [server3.lab.example.com] => (item={u'url': u'http://ansible.lab.example.com/config/yum.conf', u'dest': u'/etc/yum.conf', u'name': u'yum.conf'})

PLAY RECAP *********************************************************************************************************
ansible.lab.example.com    : ok=5    changed=1    unreachable=0    failed=0
server1.lab.example.com    : ok=5    changed=2    unreachable=0    failed=0
server2.lab.example.com    : ok=5    changed=2    unreachable=0    failed=0
server3.lab.example.com    : ok=5    changed=2    unreachable=0    failed=0

If a Kubernetes 1.6.4 install on CentOS 7 seems to do nothing during kubeadm init at "[apiclient] Created API client, waiting for the control plane to become ready"

and your internet access is from behind a proxy, ensure that Docker knows the proxy settings.

Edit the docker.service file at /usr/lib/systemd/system/docker.service and define the proxy in Environment entries in its [Service] section, like this:


Environment=http_proxy=http://proxy.YY.XXXX.net:8080
Environment=https_proxy=http://proxy.YY.XXXX.net:8080

and then reload systemd and restart Docker:

systemctl daemon-reload
systemctl restart docker 
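
As an alternative to editing the packaged unit file directly, a systemd drop-in can hold the proxy settings (a sketch; the drop-in file name is my own choice and the proxy host is a placeholder to adapt):

mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment=http_proxy=http://proxy.YY.XXXX.net:8080
Environment=https_proxy=http://proxy.YY.XXXX.net:8080
EOF
systemctl daemon-reload
systemctl restart docker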

Then clear the previous run on the master with

kubeadm reset

and run

kubeadm init

again.


Please note: the Docker version used here was precisely


[root@kub2 ~]# yum list installed docker
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.nonstop.co.il
 * epel: mirror.ibcp.fr
 * extras: mirror.nonstop.co.il
 * updates: mirror.nonstop.co.il
Installed Packages
docker.x86_64             2:1.12.6-28.git1398f24.el7.centos              @extras

Kubernetes 1.6.4 installation on CentOS7

Basic install and configuration of Kubernetes 1.6.4 on CentOS7


See the latest Guide at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/


In this setup kub1 is the master node and kub2, kub3 and kub4 are the worker nodes.
These servers are behind a proxy.

192.168.17.183 kub1.example.com kub1
192.168.17.186 kub4.example.com kub4
192.168.17.184 kub3.example.com kub3
192.168.17.189 kub2.example.com kub2
192.168.17.190 ansible.example.com ansible

These are installed with

uname -a
Linux ansible.example.com 3.10.0-514.21.1.el7.x86_64 #1 SMP Thu May 25 17:04:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux


RedHat release

cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)

Clean up any previous installation of Kubernetes and Docker, if any:

for u in kub{1..4}; do ssh $u "yum -y remove docker-client docker-common docker docker-ce docker-ce-selinux docker-ce-* container-selinux* etcd flannel kubernetes kubernetes* kubeadm kubectl container-cni etcd flannel"; done

Delete any leftovers from the previous installation of Kubernetes and etcd:

for u in kub{1..4}; do ssh $u "rm -rf /var/lib/etcd/*"; done
for u in kub{1..4}; do ssh $u "rm -rf /var/lib/kubelet/*"; done


Install yum-utils

for u in kub{1..4}; do ssh $u "yum install -y yum-utils"; done

# Docker CE has a cgroup driver that conflicts with the one the kubelet expects, hence the distribution docker package is used here

Remove any older docker repos

for u in kub{1..4}; do ssh $u "rm -rf /etc/yum.repos.d/*docker*"; done

Create the Kubernetes repo file (the baseurl and gpgkey follow the official guide linked above):

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Copy this repo file to all the Kubernetes servers, including the nodes:

for u in kub{1..4}; do scp -pr kubernetes.repo $u:/etc/yum.repos.d/; done
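
A quick check that the repository is visible on every server (a sketch):

for u in kub{1..4}; do ssh $u "yum -q repolist kubernetes"; done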



Set SELinux to permissive with setenforce 0 on the master and all the nodes:


for u in kub{1..4}; do ssh $u "setenforce 0"; done

Install docker, kubelet, kubeadm and kubernetes-cni on all the servers:


for u in kub{1..4}; do ssh $u "yum -y install docker kubelet kubeadm kubernetes-cni"; done

Ensure that this entry is present in /etc/sysctl.d/99-sysctl.conf on every server:


for u in kub{1..4}; do ssh $u 'echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.d/99-sysctl.conf'; done

for u in kub{1..4}; do ssh $u 'sysctl -p'; done
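
To confirm the setting is active on every server (a quick check):

for u in kub{1..4}; do ssh $u 'sysctl net.bridge.bridge-nf-call-iptables'; done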

On all the servers, start and enable docker and kubelet:

for u in kub{1..4}; do ssh $u "systemctl start docker; systemctl start kubelet; systemctl enable docker; systemctl enable kubelet"; done



As the servers are behind a proxy, ensure Docker knows about the proxies by putting them in /usr/lib/systemd/system/docker.service.

Ensure that /usr/lib/systemd/system/docker.service has the proxy Environment entries specified as shown, replacing http://proxy.YY.XXXX.net:8080 with your own proxy:


[root@kub2 ~]# cat /usr/lib/systemd/system/docker.service | grep ^Env
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
Environment=http_proxy=http://proxy.YY.XXXX.net:8080
Environment=https_proxy=http://proxy.YY.XXXX.net:8080
[root@kub2 ~]#

Set the proxies in the user profile as well (the global profile can also be set). Here the internal network for the Kubernetes master and nodes is 192.168.17.0/24, hence those addresses are put into no_proxy.

For the Kubernetes servers being behind a proxy:

export https_proxy=http://proxy.YY.XXXX.net:8080
export no_proxy='127.0.0.1,localhost,example.com,192.168.0.0,192.168.237.0,192.168.17.0,192.168.17.100,192.168.17.101,192.168.17.102,192.168.17.103,192.168.17.104,192.168.17.105,192.168.17.106,192.168.17.107,192.168.17.108,192.168.17.109,192.168.17.110,192.168.17.111,192.168.17.112,192.168.17.113,192.168.17.114,192.168.17.115,192.168.17.116,192.168.17.117,192.168.17.118,192.168.17.119,192.168.17.120,192.168.17.121,192.168.17.122,192.168.17.123,192.168.17.124,192.168.17.125,192.168.17.126,192.168.17.127,192.168.17.128,192.168.17.129,192.168.17.130,192.168.17.131,192.168.17.132,192.168.17.133,192.168.17.134,192.168.17.135,192.168.17.136,192.168.17.137,192.168.17.138,192.168.17.139,192.168.17.140,192.168.17.141,192.168.17.142,192.168.17.143,192.168.17.144,192.168.17.145,192.168.17.146,192.168.17.147,192.168.17.148,192.168.17.149,192.168.17.150,192.168.17.151,192.168.17.152,192.168.17.153,192.168.17.154,192.168.17.155,192.168.17.156,192.168.17.157,192.168.17.158,192.168.17.159,192.168.17.160,192.168.17.161,192.168.17.162,192.168.17.163,192.168.17.164,192.168.17.165,192.168.17.166,192.168.17.167,192.168.17.168,192.168.17.169,192.168.17.170,192.168.17.171,192.168.17.172,192.168.17.173,192.168.17.174,192.168.17.175,192.168.17.176,192.168.17.177,192.168.17.178,192.168.17.179,192.168.17.180,192.168.17.181,192.168.17.182,192.168.17.183,192.168.17.184,192.168.17.185,192.168.17.186,192.168.17.187,192.168.17.188,192.168.17.189,192.168.17.190,192.168.17.191,192.168.17.192,192.168.17.193,192.168.17.194,192.168.17.195,192.168.17.196,192.168.17.197,192.168.17.198,192.168.17.199,192.168.17.200'
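
Since no_proxy generally does not accept CIDR ranges, the long per-address list above can also be generated instead of typed by hand (a sketch):

# build the per-host no_proxy list for 192.168.17.100-200
np="127.0.0.1,localhost,example.com,192.168.0.0,192.168.237.0,192.168.17.0"
for i in $(seq 100 200); do np="${np},192.168.17.${i}"; done
export no_proxy="${np}"
export https_proxy=http://proxy.YY.XXXX.net:8080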

---------

Ensure that the docker service is running on all the nodes and the master:

for u in kub{1..4}; do ssh $u "systemctl daemon-reload; systemctl start docker"; done

Initialize the master server using kubeadm init.
Here the master server also has other networks, and it is preferred that the communication between the master and the nodes be limited to the 192.168.17.0/24 network:

kubeadm init --apiserver-advertise-address=192.168.17.183


[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.4
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [server1.netx.example.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.17.183]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.                                                   conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 94.145627 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 16.967352 seconds
[token] Using token: 8358dd.27c09cd33a0aea6e
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 8358dd.27c09cd33a0aea6e 192.168.17.183:6443

This gives the following information.

Make a note of these:

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 8358dd.27c09cd33a0aea6e 192.168.17.183:6443




--

Run these on the master:

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

--

Before joining the machines, create the flannel network.

--
The network YAML file is

kube-flannel.yaml

This file has been taken from the flannel project's documentation.



---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      containers:
      - name: kube-flannel
        # the image line is required; the tag below is assumed from the upstream flannel manifest of this period
        image: quay.io/coreos/flannel:v0.7.1-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        # the image line is required; the tag below is assumed from the upstream flannel manifest of this period
        image: quay.io/coreos/flannel:v0.7.1-amd64
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
--
Create the network:

kubectl apply -f kube-flannel.yaml

[root@kub1 ~]# kubectl apply -f kube-flannel.yaml
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
[root@kub1 ~]#
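
Once applied, the flannel pods can be checked per node (the app=flannel label comes from the manifest above):

kubectl get pods --namespace kube-system -l app=flannel -o wide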

---

Join the other nodes to the master

On the servers kub2, kub3 and kub4 run this command:

kubeadm join --token 0cb536.5e607bc8dd7254ad 192.168.17.183:6443


[root@kub2 ~]# kubeadm join --token 0cb536.5e607bc8dd7254ad 192.168.17.183:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.17.183:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.17.183:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.17.183:6443"
[discovery] Successfully established connection with API Server "192.168.17.183:6443"
[bootstrap] Detected server version: v1.6.4
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
[root@kub2 ~]#


--

Join the other nodes to the cluster in the same way.

--

On the master, confirm that the other nodes are seen:

kubectl get nodes

[root@kub1 ~]# kubectl get nodes
NAME               STATUS    AGE       VERSION
kub1.example.com   Ready     24m       v1.6.4
kub2.example.com   Ready     2m        v1.6.4
kub3.example.com   Ready     1m        v1.6.4
kub4.example.com   Ready     1m        v1.6.4
[root@kub1 ~]#
--
Create the dashboard 

kubectl create -f https://git.io/kube-dashboard

serviceaccount "kubernetes-dashboard" created
clusterrolebinding "kubernetes-dashboard" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
[root@kub1 ~]#
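
To reach the dashboard from the master, kubectl proxy can forward the API locally (a sketch; the /ui shortcut was the dashboard path in this Kubernetes version, verify for your release):

kubectl proxy --port=8001
# then browse to http://localhost:8001/ui on the same machine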

--
For certain routing issues on CentOS/RHEL 7 you also have to create /etc/sysctl.d/k8s.conf.

cat /etc/sysctl.d/k8s.conf
Should have:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
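
Apply the settings without a reboot on each server:

sysctl --system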


Create a sample application

=====================
kubectl create namespace sock-shop

[root@kub1 ~]# kubectl create namespace sock-shop
namespace "sock-shop" created
[root@kub1 ~]#

--

After the creation of the namespace, create the application in the namespace, for example as sketched below.
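
A sketch of deploying the sock-shop demo into the new namespace, assuming the complete-demo manifest from the microservices-demo project that the kubeadm guide of this era pointed to (the URL is an assumption, verify against the guide linked above):

kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true"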



See the cluster:


[root@kub1 ~]# kubectl get all
NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   10.96.0.1    <none>        443/TCP   36m

See all the pods in all the namespaces:

[root@kub1 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY     STATUS              RESTARTS   AGE
kube-system   etcd-kub1.example.com                      1/1       Running             0          35m
kube-system   kube-apiserver-kub1.example.com            1/1       Running             0          36m
kube-system   kube-controller-manager-kub1.example.com   1/1       Running             0          36m
kube-system   kube-dns-3913472980-n49q2                  0/3       ContainerCreating   0          37m
kube-system   kube-flannel-ds-6b0mj                      1/2       CrashLoopBackOff    6          14m
kube-system   kube-flannel-ds-919ft                      1/2       CrashLoopBackOff    6          14m
kube-system   kube-flannel-ds-btklx                      1/2       CrashLoopBackOff    7          15m
kube-system   kube-flannel-ds-kjh8l                      1/2       CrashLoopBackOff    8          18m
kube-system   kube-proxy-3tsn5                           1/1       Running             0          37m
kube-system   kube-proxy-7k09c                           1/1       Running             0          14m
kube-system   kube-proxy-d3l34                           1/1       Running             0          14m
kube-system   kube-proxy-v558t                           1/1       Running             0          15m
kube-system   kube-scheduler-kub1.example.com            1/1       Running             0          37m
kube-system   kubernetes-dashboard-2039414953-422x6      0/1       ContainerCreating   0          11m
sock-shop     carts-1207215702-18f6r                     0/1       ContainerCreating   0          2m
sock-shop     carts-db-3114877618-3t8c5                  0/1       ContainerCreating   0          2m
sock-shop     catalogue-4243301285-cj8mf                 0/1       ContainerCreating   0          2m
sock-shop     catalogue-db-4178470543-9m2p6              0/1       ContainerCreating   0          2m
sock-shop     front-end-2352868648-4b58t                 0/1       ContainerCreating   0          2m
sock-shop     orders-4022445995-wtqmn                    0/1       ContainerCreating   0          2m
sock-shop     orders-db-98190230-536mh                   0/1       ContainerCreating   0          2m
sock-shop     payment-830382463-61t1v                    0/1       ContainerCreating   0          2m
sock-shop     queue-master-1447896524-d1vqz              0/1       ContainerCreating   0          2m
sock-shop     rabbitmq-3917772209-9w1g8                  0/1       ContainerCreating   0          2m
sock-shop     shipping-1384426021-4gbz1                  0/1       ContainerCreating   0          2m
sock-shop     user-2524887938-tv24w                      0/1       ContainerCreating   0          2m
sock-shop     user-db-212880535-9xl2m                    0/1       ContainerCreating   0          2m

[root@kub1 ~]#