Configuring the Kubernetes Nodes

The Kubernetes slave nodes are kuben{1..3}.

Install the Kubernetes node binaries on the slaves:
for U in kuben{1..3}
do
ssh $U yum -y install kubernetes-node kubernetes-client
done
If not already installed, install the following RPMs on the worker servers:

yum -y install socat conntrack ipset

Please note that installing kubernetes-node pulls in the above RPMs as dependencies.
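As a quick sanity check you can confirm they are present on each node by querying the RPM database (a hedged check; --whatprovides is used because the actual package providing conntrack may be named differently, for example conntrack-tools):

rpm -q --whatprovides socat conntrack ipset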
Create the following directories:
mkdir -p \
/etc/cni/net.d \
/opt/cni/bin \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes \
/var/run/kubernetes
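These directories are needed on every slave, so, as with the package installation, you can create them across all the nodes with the same ssh loop (a small convenience sketch, assuming the same passwordless ssh access):

for U in kuben{1..3}
do
ssh $U mkdir -p /etc/cni/net.d /opt/cni/bin /var/lib/kubelet /var/lib/kube-proxy /var/lib/kubernetes /var/run/kubernetes
done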
Installation and configuration of flanneld and docker on the nodes

Installation and configuration of docker on the Kubernetes nodes

Install docker on the slave nodes. (If you later want Kubernetes to schedule containers on the master nodes as well, which is not advisable in a production environment, you can also install the docker binaries on the master servers.)

Install docker and bridge-utils, and enable docker to start automatically:
yum -y install docker bridge-utils
systemctl enable docker
If the systems that will run docker containers for Kubernetes need a proxy to reach the internet in order to fetch images from Docker Hub or other public repositories, you need to configure docker proxy settings.

Please note that this is needed *only if* you plan to use the internet repositories and, at the same time, your systems can reach the internet only via an HTTP proxy.

In my case the systems were behind a proxy, so I had to update the proxy settings for the docker service.
rm -rf /etc/systemd/system/docker.service.d/
mkdir -p /etc/systemd/system/docker.service.d/

cat << EOF > /etc/systemd/system/docker.service.d/override.conf
[Service]
Environment="http_proxy=http://myownproxy.mydomain.net:8080"
Environment="https_proxy=http://myownproxy.mydomain.net:8080"
EOF

systemctl daemon-reload
systemctl restart docker
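To verify that the drop-in has been picked up, you can ask systemd to show the environment it now passes to the docker service; the output should contain the http_proxy and https_proxy values set above:

systemctl show docker --property=Environment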
Installation of flanneld

yum -y install flannel

Flanneld as a service

The systemd unit file for the flanneld service:

cat << EOF > /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Network fabric for containers
Documentation=https://github.com/coreos/flannel
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
Restart=always
RestartSec=5
ExecStart=/usr/bin/flanneld \\
  --etcd-endpoints=https://172.16.254.221:2379,https://172.16.254.222:2379,https://172.16.254.223:2379 \\
  --logtostderr=true \\
  --ip-masq=true \\
  --subnet-dir=/var/lib/flanneld/networks \\
  --subnet-file=/var/lib/flanneld/subnet.env \\
  --etcd-cafile=/srv/kubernetes/ca.pem \\
  --etcd-certfile=/srv/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/srv/kubernetes/kubernetes-key.pem \\
  --etcd-prefix=/atomic.io/network \\
  --etcd-username=admin

[Install]
WantedBy=multi-user.target
EOF
Start the flanneld service
systemctl daemon-reload
systemctl restart flanneld
systemctl status flanneld
ip a s
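Once flanneld is running it leases a subnet for the node and writes it to the subnet file configured above, and a flannel interface (flannel0 or flannel.1, depending on the backend) should show up in the ip output. The values docker will consume later can be inspected directly:

cat /var/lib/flanneld/subnet.env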
Integration of docker with flanneld

Docker has to use the subnet provided by flanneld as the POD network, hence this integration is required.

On the nodes, ensure that masquerading is disabled in the docker service options:
sed -i -r -e "s|^OPTIONS=.*|OPTIONS='--log-driver=journald --signature-verification=false --iptables=false --ip-masq=false --ip-forward=true'|g" /etc/sysconfig/docker
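A quick check that the substitution went through; the OPTIONS line should now carry --iptables=false and --ip-masq=false:

grep ^OPTIONS /etc/sysconfig/docker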
Tune the docker systemd service so that it uses the flanneld file /var/lib/flanneld/subnet.env as an environment file, and picks up the flannel subnet variables from there when the docker service starts on the nodes.

Create a drop-in that points the docker service at the subnet information flannel keeps for the node:

mkdir -p /usr/lib/systemd/system/docker.service.d
echo '[Service]
EnvironmentFile=-/var/lib/flanneld/subnet.env' > /usr/lib/systemd/system/docker.service.d/flannel.conf
Update the docker systemd unit file to use the environment files, including /var/lib/flanneld/subnet.env:

echo '[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target rhel-push-plugin.socket registries.service
After=network-online.target docker.socket flanneld.service
Wants=network-online.target docker-storage-setup.service
Requires=docker-cleanup.timer

[Service]
Type=notify
NotifyAccess=all
EnvironmentFile=-/run/containers/registries.conf
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
EnvironmentFile=-/var/lib/flanneld/subnet.env
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=cgroupfs \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          --init-path=/usr/libexec/docker/docker-init-current \
          --seccomp-profile=/etc/docker/seccomp.json \
          --bip=${FLANNEL_SUBNET} \
          --mtu=${FLANNEL_MTU} \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY \
          $REGISTRIES
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal
KillMode=process

[Install]
WantedBy=multi-user.target' > /usr/lib/systemd/system/docker.service
Restart the docker service.
systemctl daemon-reload
systemctl restart docker
systemctl status docker
ip a s
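If the integration worked, the docker0 bridge should now carry an address from the flannel subnet leased for this node; you can compare it against the flannel environment file (a quick check):

ip -4 addr show docker0
cat /var/lib/flanneld/subnet.env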
Tune the docker service to use cgroupfs as the cgroup driver for docker.

Set the cgroup driver to cgroupfs. This is in accordance with the kubelet service, which on CentOS expects the cgroup driver to be cgroupfs instead of systemd by default.

sed -i -r -e 's|--exec-opt native.cgroupdriver=systemd|--exec-opt native.cgroupdriver=cgroupfs|g' /usr/lib/systemd/system/docker.service

grep exec-opt /usr/lib/systemd/system/docker.service
Restart the docker service.
systemctl daemon-reload
systemctl restart docker
systemctl status docker
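Once docker is back up you can confirm the effective cgroup driver from the runtime itself; it should report cgroupfs:

docker info | grep -i 'cgroup driver'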
Make a manifests directory, which will be used by the kubelet as its manifests directory:
mkdir -p /etc/kubernetes/manifests
Configure the Kubelet

cd /srv/kubernetes
{
  sudo cp $(hostname -f)-key.pem $(hostname -f).pem /var/lib/kubelet/
  sudo cp $(hostname -f).kubeconfig /var/lib/kubelet/kubeconfig
  sudo cp ca.pem /var/lib/kubernetes/
}
Create the kubelet-config.yaml configuration file:

POD_CIDR=24.24.0.0/16

cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "100.65.0.10"
podCIDR: "${POD_CIDR}"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/$(hostname -f).pem"
tlsPrivateKeyFile: "/var/lib/kubelet/$(hostname -f)-key.pem"
EOF
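Since the heredoc delimiter is unquoted, ${POD_CIDR} and the $(hostname -f) calls are expanded when the file is written; a quick check that the rendered file carries real values rather than the variable names:

grep -E 'podCIDR|tlsCertFile' /var/lib/kubelet/kubelet-config.yaml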
Create the kubelet.service systemd unit file:

cat <<EOF | sudo tee /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
# This is with CNI
#ExecStart=/usr/bin/kubelet --config=/var/lib/kubelet/kubelet-config.yaml --kubeconfig=/var/lib/kubelet/kubeconfig --pod-manifest-path=/etc/kubernetes/manifests --image-pull-progress-deadline=2m --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/usr/libexec/cni --cluster-dns=100.65.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/srv/kubernetes/ca.pem --cadvisor-port=0 --cgroup-driver=cgroupfs --register-node=true --v=2

# This is without CNI
ExecStart=/usr/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --pod-manifest-path=/etc/kubernetes/manifests \\
  --image-pull-progress-deadline=2m \\
  --allow-privileged=true \\
  --cluster-dns=100.65.0.10 \\
  --cluster-domain=cluster.local \\
  --authorization-mode=Webhook \\
  --client-ca-file=/srv/kubernetes/ca.pem \\
  --cadvisor-port=0 \\
  --cgroup-driver=cgroupfs \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Restart the kubelet service on all the nodes:
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet
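If the kubelet does not stay up, the journal usually shows the reason (wrong certificate paths, cgroup driver mismatch, and so on); a quick look at the last few log lines helps:

journalctl -u kubelet --no-pager -n 20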
Configure the Kubernetes Proxy

kube-proxy is also going to run as a systemd service on the Kubernetes nodes.

cd /srv/kubernetes/
sudo cp kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
Create the kube-proxy-config.yaml configuration file:

cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "24.24.0.0/16"
EOF
Create the kube-proxy.service systemd unit file:

cat <<EOF | sudo tee /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/bin/kube-proxy \\
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Start the kube-proxy service and check its status:
systemctl daemon-reload
systemctl restart kube-proxy
systemctl status kube-proxy
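With kube-proxy running in iptables mode it maintains NAT chains such as KUBE-SERVICES on each node; listing that chain is one way to see that the proxy is programming rules (it will be largely empty until services are created):

iptables -t nat -L KUBE-SERVICES -n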
The Admin Kubernetes Configuration File
Each kubeconfig requires a Kubernetes API Server to connect to. To support
high availability the IP address assigned to the external load balancer
fronting the Kubernetes API Servers will be used.
Generate a kubeconfig file suitable for authenticating as the admin user:
cd /srv/kubernetes
KUBERNETES_PUBLIC_ADDRESS=172.16.254.201
{
  kubectl config set-cluster k8s-api.sujitnet11.net \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443

  kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem

  kubectl config set-context k8s-api.sujitnet11.net \
    --cluster=k8s-api.sujitnet11.net \
    --user=admin

  kubectl config use-context k8s-api.sujitnet11.net
}
Get the component status
kubectl get componentstatus
[root@kubem1 kubernetes]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
[root@kubem1 kubernetes]#
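At this point the slaves should also have registered themselves with the API servers (the kubelets run with --register-node=true), so listing the nodes with the same admin kubeconfig is a useful final check:

kubectl get nodes -o wide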
Other relevant links for this documentation. You can click on any of the links to view them.
The Main Document - Kubernetes 1.10.0 with 3 Master and Slave nodes and SSL on CentOS7
KVM Host and Guest Preprations
SSL Certificate Generations
Configure simple external HAPROXY
Configuring ETCD with SSL on the Master servers
Creation of the POD Network information in ETCD for flanneld
Install and Configure the Master Service on the Kubernetes Master servers
Installation and Configuration of the Kubernetes Slaves
Installation and testing of kube-dns
Configure the masters not to run kubernetes scheduled PODs