Basic install and configuration of Kubernetes 1.6.4 on CentOS 7
See the latest Guide at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
The nodes for the setup: kub1 is the master node and kub{2..4} are the worker nodes. These servers are behind a proxy.
192.168.17.183 kub1.example.com kub1
192.168.17.186 kub4.example.com kub4
192.168.17.184 kub3.example.com kub3
192.168.17.189 kub2.example.com kub2
192.168.17.190 ansible.example.com ansible
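If DNS does not already resolve these names, the same host entries can be pushed to every node from the ansible host; a minimal sketch, assuming passwordless root ssh is already in place (this step is not part of the original write-up):
for u in kub{1..4}; do scp /etc/hosts $u:/etc/hosts; done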
These servers are installed with the following kernel and release:
uname -a
Linux ansible.example.com 3.10.0-514.21.1.el7.x86_64 #1 SMP Thu May 25 17:04:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
RedHat release:
cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
Clean up any previous installation of Kubernetes and Docker, if present:
for u in kub{1..4}; do ssh $u "yum -y remove docker-client docker-common docker docker-ce docker-ce-selinux docker-ce-* container-selinux* etcd flannel kubernetes kubernetes* kubeadm kubectl kubernetes-cni"; done
Delete any leftovers from the previous installation of kubernetes and etcd:
for u in kub{1..4}; do ssh $u "rm -rf /var/lib/etcd/*"; done
for u in kub{1..4}; do ssh $u "rm -rf /var/lib/kubelet/*"; done
Install yum-utils:
for u in kub{1..4}; do ssh $u "yum install -y yum-utils"; done
# Docker CE has a cgroup driver that conflicts with the kubelet (they use different cgroup drivers for the system).
Remove any older docker repos:
for u in kub{1..4}; do ssh $u "rm -rf /etc/yum.repos.d/*docker*"; done
Create the Kubernetes repo file:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Copy this repo file to all the Kubernetes servers, master and nodes:
for u in kub{1..4}; do scp -pr kubernetes.repo $u:/etc/yum.repos.d/; done
Set SELinux to permissive (setenforce 0) on the master and all the nodes:
for u in kub{1..4}; do ssh $u "setenforce 0"; done
Install docker, kubelet, kubeadm, kubectl and kubernetes-cni on all the servers:
for u in kub{1..4}; do ssh $u "yum -y install docker kubelet kubeadm kubectl kubernetes-cni"; done
Ensure that this entry is present in /etc/sysctl.d/99-sysctl.conf on every server:
for u in kub{1..4}; do ssh $u 'echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.d/99-sysctl.conf'; done
for u in kub{1..4}; do ssh $u 'sysctl -p'; done
On all the servers start and enable docker and kubelet:
for u in kub{1..4}; do ssh $u "systemctl start docker; systemctl start kubelet; systemctl enable docker; systemctl enable kubelet"; done
As the servers are behind a proxy, ensure Docker knows about the proxies. Add the proxies as Environment entries in /usr/lib/systemd/system/docker.service as shown below, replacing http://proxy.YY.XXXX.net:8080 with your own proxy:
[root@kub2 ~]# cat /usr/lib/systemd/system/docker.service | grep ^Env
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
Environment=http_proxy=http://proxy.YY.XXXX.net:8080
Environment=https_proxy=http://proxy.YY.XXXX.net:8080
[root@kub2 ~]#
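Editing the packaged unit file works, but it can be overwritten by package updates. A systemd drop-in is an alternative; the following is a sketch of that approach (it is not what the original did, and the proxy values are placeholders):
mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="http_proxy=http://proxy.YY.XXXX.net:8080"
Environment="https_proxy=http://proxy.YY.XXXX.net:8080"
Environment="no_proxy=127.0.0.1,localhost,example.com"
EOF
systemctl daemon-reload; systemctl restart docker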
Set the user profile with the following (a global profile can also be set) so the proxies are in place for the shell as well. Here the internal network for the Kubernetes master and nodes is 192.168.17.0/24, hence those addresses are put in no_proxy. For the Kubernetes servers behind a proxy:
export http_proxy=http://proxy.YY.XXXX.net:8080
export https_proxy=http://proxy.YY.XXXX.net:8080
export no_proxy='127.0.0.1,localhost,example.com,192.168.0.0,192.168.237.0,192.168.17.0,192.168.17.100,192.168.17.101,192.168.17.102,192.168.17.103,192.168.17.104,192.168.17.105,192.168.17.106,192.168.17.107,192.168.17.108,192.168.17.109,192.168.17.110,192.168.17.111,192.168.17.112,192.168.17.113,192.168.17.114,192.168.17.115,192.168.17.116,192.168.17.117,192.168.17.118,192.168.17.119,192.168.17.120,192.168.17.121,192.168.17.122,192.168.17.123,192.168.17.124,192.168.17.125,192.168.17.126,192.168.17.127,192.168.17.128,192.168.17.129,192.168.17.130,192.168.17.131,192.168.17.132,192.168.17.133,192.168.17.134,192.168.17.135,192.168.17.136,192.168.17.137,192.168.17.138,192.168.17.139,192.168.17.140,192.168.17.141,192.168.17.142,192.168.17.143,192.168.17.144,192.168.17.145,192.168.17.146,192.168.17.147,192.168.17.148,192.168.17.149,192.168.17.150,192.168.17.151,192.168.17.152,192.168.17.153,192.168.17.154,192.168.17.155,192.168.17.156,192.168.17.157,192.168.17.158,192.168.17.159,192.168.17.160,192.168.17.161,192.168.17.162,192.168.17.163,192.168.17.164,192.168.17.165,192.168.17.166,192.168.17.167,192.168.17.168,192.168.17.169,192.168.17.170,192.168.17.171,192.168.17.172,192.168.17.173,192.168.17.174,192.168.17.175,192.168.17.176,192.168.17.177,192.168.17.178,192.168.17.179,192.168.17.180,192.168.17.181,192.168.17.182,192.168.17.183,192.168.17.184,192.168.17.185,192.168.17.186,192.168.17.187,192.168.17.188,192.168.17.189,192.168.17.190,192.168.17.191,192.168.17.192,192.168.17.193,192.168.17.194,192.168.17.195,192.168.17.196,192.168.17.197,192.168.17.198,192.168.17.199,192.168.17.200'
---------
Ensure that the docker service is running on all the nodes and the master:
for u in kub{1..4}; do ssh $u "systemctl daemon-reload; systemctl start docker"; done
Initialize the master server using kubeadm init. The master server has other networks as well, and it is preferred that communication between the master and the nodes is limited to the 192.168.17.0/24 network, hence the advertise address:
kubeadm init --apiserver-advertise-address=192.168.17.183
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.4
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [server1.netx.example.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.17.183]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 94.145627 seconds
[apiclient] Waiting for at least one node to register
[apiclient] First node has registered after 16.967352 seconds
[token] Using token: 8358dd.27c09cd33a0aea6e
[apiconfig] Created RBAC rules
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  sudo cp /etc/kubernetes/admin.conf $HOME/
  sudo chown $(id -u):$(id -g) $HOME/admin.conf
  export KUBECONFIG=$HOME/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token 8358dd.27c09cd33a0aea6e 192.168.17.183:6443
This output gives the following information; make a note of these:
A pod network still has to be deployed: run "kubectl apply -f [podnetwork].yaml" with one of the options listed at http://kubernetes.io/docs/admin/addons/
Any number of machines can be joined by running the following on each node as root:
kubeadm join --token 8358dd.27c09cd33a0aea6e 192.168.17.183:6443
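Note: the flannel manifest used later defines the pod network 10.244.0.0/16, and with flannel it is common to pass that CIDR to kubeadm init as well. This flag does not appear in the run above, so treat the following variant as an assumption rather than what was actually executed:
kubeadm init --apiserver-advertise-address=192.168.17.183 --pod-network-cidr=10.244.0.0/16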
--
Run these on the master:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
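To confirm that kubectl can reach the API server, a quick check (an optional step, not in the original) is:
kubectl get componentstatuses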
--
Before joining the machines, install the network: create the flannel pod network.
--
The network YAML file is kube-flannel.yaml. This file has been taken from the upstream flannel project.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master   # key missing in the original text; assumed from the upstream manifest of this era
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.7.1-amd64   # image missing in the original text; assumed from the upstream manifest of this era
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: quay.io/coreos/flannel:v0.7.1-amd64   # image missing in the original text; assumed from the upstream manifest of this era
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
--
Create the network:
kubectl apply -f kube-flannel.yaml
[root@kub1 ~]# kubectl apply -f kube-flannel.yaml
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
[root@kub1 ~]#
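To watch the flannel pods come up on each node, an optional check (not in the original) is:
kubectl get pods -n kube-system -o wide | grep flannel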
---
Join the other nodes to the master. On the servers kub2, kub3 and kub4 run this command (using the token printed by your own kubeadm init):
kubeadm join --token 0cb536.5e607bc8dd7254ad 192.168.17.183:6443
[root@kub2 ~]# kubeadm join --token 0cb536.5e607bc8dd7254ad 192.168.17.183:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.17.183:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.17.183:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.17.183:6443"
[discovery] Successfully established connection with API Server "192.168.17.183:6443"
[bootstrap] Detected server version: v1.6.4
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
[root@kub2 ~]#
--
Join the remaining nodes to the cluster in the same way.
--
On the master confirm that the other nodes are seen:
kubectl get nodes
[root@kub1 ~]# kubectl get nodes
NAME               STATUS    AGE       VERSION
kub1.example.com   Ready     24m       v1.6.4
kub2.example.com   Ready     2m        v1.6.4
kub3.example.com   Ready     1m        v1.6.4
kub4.example.com   Ready     1m        v1.6.4
[root@kub1 ~]#
--
Create the dashboard:
kubectl create -f https://git.io/kube-dashboard
serviceaccount "kubernetes-dashboard" created
clusterrolebinding "kubernetes-dashboard" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
[root@kub1 ~]#
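One common way to reach the dashboard (not covered in the original, so treat the URL path as an assumption for this dashboard version) is via the API server proxy on the master:
kubectl proxy
# then browse to http://localhost:8001/ui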
--
For certain routing issues on CentOS/RHEL 7 you also have to create /etc/sysctl.d/k8s.conf.
cat /etc/sysctl.d/k8s.conf
Should have:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
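A quick way to put these settings in place on the master and all the nodes, reusing the loop style from above (a sketch, assuming root ssh access):
for u in kub{1..4}; do ssh $u 'printf "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\n" > /etc/sysctl.d/k8s.conf; sysctl --system'; done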
Create a sample application
=====================
kubectl create namespace sock-shop
[root@kub1 ~]# kubectl create namespace sock-shop
namespace "sock-shop" created
[root@kub1 ~]#
--
After the creation of the namespace, create the application in that namespace:
kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true"
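Once the pods are running, the demo's front end is exposed as a NodePort service; assuming the service name front-end from the demo manifest (not shown in the original), the assigned port can be looked up with:
kubectl -n sock-shop get svc front-end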
See the cluster:
[root@kub1 ~]# kubectl get all
NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   10.96.0.1    <none>        443/TCP   36m
See all the pods in all the namespaces:
[root@kub1 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY     STATUS              RESTARTS   AGE
kube-system   etcd-kub1.example.com                      1/1       Running             0          35m
kube-system   kube-apiserver-kub1.example.com            1/1       Running             0          36m
kube-system   kube-controller-manager-kub1.example.com   1/1       Running             0          36m
kube-system   kube-dns-3913472980-n49q2                  0/3       ContainerCreating   0          37m
kube-system   kube-flannel-ds-6b0mj                      1/2       CrashLoopBackOff    6          14m
kube-system   kube-flannel-ds-919ft                      1/2       CrashLoopBackOff    6          14m
kube-system   kube-flannel-ds-btklx                      1/2       CrashLoopBackOff    7          15m
kube-system   kube-flannel-ds-kjh8l                      1/2       CrashLoopBackOff    8          18m
kube-system   kube-proxy-3tsn5                           1/1       Running             0          37m
kube-system   kube-proxy-7k09c                           1/1       Running             0          14m
kube-system   kube-proxy-d3l34                           1/1       Running             0          14m
kube-system   kube-proxy-v558t                           1/1       Running             0          15m
kube-system   kube-scheduler-kub1.example.com            1/1       Running             0          37m
kube-system   kubernetes-dashboard-2039414953-422x6      0/1       ContainerCreating   0          11m
sock-shop     carts-1207215702-18f6r                     0/1       ContainerCreating   0          2m
sock-shop     carts-db-3114877618-3t8c5                  0/1       ContainerCreating   0          2m
sock-shop     catalogue-4243301285-cj8mf                 0/1       ContainerCreating   0          2m
sock-shop     catalogue-db-4178470543-9m2p6              0/1       ContainerCreating   0          2m
sock-shop     front-end-2352868648-4b58t                 0/1       ContainerCreating   0          2m
sock-shop     orders-4022445995-wtqmn                    0/1       ContainerCreating   0          2m
sock-shop     orders-db-98190230-536mh                   0/1       ContainerCreating   0          2m
sock-shop     payment-830382463-61t1v                    0/1       ContainerCreating   0          2m
sock-shop     queue-master-1447896524-d1vqz              0/1       ContainerCreating   0          2m
sock-shop     rabbitmq-3917772209-9w1g8                  0/1       ContainerCreating   0          2m
sock-shop     shipping-1384426021-4gbz1                  0/1       ContainerCreating   0          2m
sock-shop     user-2524887938-tv24w                      0/1       ContainerCreating   0          2m
sock-shop     user-db-212880535-9xl2m                    0/1       ContainerCreating   0          2m
[root@kub1 ~]#
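The kube-flannel pods above are in CrashLoopBackOff and kube-dns is stuck in ContainerCreating. The original does not diagnose this, but common causes in this setup are a kubeadm init run without --pod-network-cidr=10.244.0.0/16 and missing RBAC ClusterRole/ClusterRoleBinding for the flannel service account. The flannel container logs usually show the actual reason, e.g. for one of the pods listed above:
kubectl -n kube-system logs kube-flannel-ds-6b0mj -c kube-flannel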