Kubernetes Master Servers’ Configuration
Configuration of the master nodes kubem{1..3}
# Installation of the Kubernetes binaries on the masters
# Optionally, as mentioned earlier, if you also want PODs to be scheduled by Kubernetes on the masters, you need to install the kubernetes-node binaries on the masters and additionally configure them the same way as you would configure the nodes.
Ideally the masters should be left alone to perform the Kubernetes master role, and the nodes should be used to allow Kubernetes to schedule the PODs on them.
for U in kubem{1..3}
do
ssh $U yum -y install kubernetes-master kubernetes-client
done
# Before these steps you have to ensure the kubectl binary is installed on the masters. kubectl actually comes from the kubernetes-client package.
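As a quick sanity check (a sketch only, assuming passwordless SSH to the masters is already in place as used throughout this guide), you can confirm the packages landed and that kubectl is usable:
for U in kubem{1..3}
do
ssh $U "rpm -q kubernetes-master kubernetes-client && kubectl version --client"
done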
The kubelet Kubernetes Configuration File
When generating kubeconfig files for Kubelets the client certificate matching
the Kubelet's node name must be used. This will ensure Kubelets are properly
authorized by the Kubernetes Node Authorizer.
# Generate a kubeconfig file for each worker node:
cd /srv/kubernetes
export KUBERNETES_PUBLIC_ADDRESS=172.16.254.201
for instance in kuben{1..3}.sujitnet11.net ; do
  kubectl config set-cluster k8s-api.sujitnet11.net \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${instance}.pem \
    --client-key=${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-context default \
    --cluster=k8s-api.sujitnet11.net \
    --user=system:node:${instance} \
    --kubeconfig=${instance}.kubeconfig

  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done
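Before distributing the files, you may want to inspect one of the generated kubeconfigs; kuben1.sujitnet11.net.kubeconfig below is just one of the three files created by the loop above:
ls -l /srv/kubernetes/kuben*.kubeconfig
kubectl config view --kubeconfig=kuben1.sujitnet11.net.kubeconfig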
The kube-proxy Kubernetes Configuration File
Generate a kubeconfig file for the kube-proxy service:
Please note this IP is the virtual IP that stays up on the kube-haproxy server and is managed by the haproxy service running on that server.
export KUBERNETES_PUBLIC_ADDRESS=172.16.254.201
{
kubectl config set-cluster k8s-api.sujitnet11.net \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials system:kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=k8s-api.sujitnet11.net \
  --user=system:kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}
The kube-controller-manager Kubernetes Configuration File
Please note that kube-controller-manager and kube-scheduler have to be started with the --leader-elect flag so that only one of the masters holds the lease to modify etcd at a time.
Generate a kubeconfig file for the kube-controller-manager service:
{
kubectl config set-cluster k8s-api.sujitnet11.net \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
  --cluster=k8s-api.sujitnet11.net \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
The kube-scheduler Kubernetes Configuration File
Generate a kubeconfig file for the kube-scheduler service:
{
kubectl config set-cluster k8s-api.sujitnet11.net \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context default \
  --cluster=k8s-api.sujitnet11.net \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
The admin Kubernetes Configuration File
Generate a kubeconfig file for the admin user:
{
kubectl config set-cluster k8s-api.sujitnet11.net \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=admin.kubeconfig

kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=admin.kubeconfig

kubectl config set-context default \
  --cluster=k8s-api.sujitnet11.net \
  --user=admin \
  --kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig
}
Distribute the Kubernetes Configuration Files to all the masters and nodes, as well as to the kube-haproxy server.
for U in kubem{1..3} kuben{1..3} kube-haproxy; do scp -pr /srv/kubernetes/* $U:/srv/kubernetes/; done
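To confirm the copy succeeded everywhere (assuming /srv/kubernetes already exists on each host, as prepared in the earlier sections), a simple loop such as the following can be used:
for U in kubem{1..3} kuben{1..3} kube-haproxy
do
ssh $U "hostname; ls /srv/kubernetes | wc -l"
done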
The Encryption Key
Generate an encryption key:
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
[root@kubem1 kubernetes]# echo $ENCRYPTION_KEY
qxBu0krUJpViWX840LJQkP2ZzGxLFXMnxp5+ooio748=
[root@kubem1 kubernetes]#
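Since the aescbc provider expects a 32-byte key, a quick sanity check is to decode the value and count the bytes; the output should be 32:
echo -n "$ENCRYPTION_KEY" | base64 -d | wc -c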
The Encryption Config File
Create the encryption-config.yaml encryption config file:
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
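Later, once the kube-apiserver service configured below is running, you can confirm that secrets really are encrypted at rest in etcd. This is only a sketch: it assumes the etcdctl v3 client is available on the master, that you run it from /srv/kubernetes, and it reuses the etcd client certificates from the etcd section of this guide; test-secret is just an example name.
kubectl create secret generic test-secret --from-literal=foo=bar --kubeconfig admin.kubeconfig
ETCDCTL_API=3 etcdctl --endpoints=https://172.16.254.221:2379 \
  --cacert=ca.pem --cert=kubernetes.pem --key=kubernetes-key.pem \
  get /registry/secrets/default/test-secret | hexdump -C | head
# The stored value should begin with k8s:enc:aescbc:v1:key1 rather than readable YAML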
Copy the encryption-config.yaml encryption config file to each controller instance:
for U in kubem{1..3} kuben{1..3} kube-haproxy; do scp -pr /srv/kubernetes/* $U:/srv/kubernetes/; done
Configure the Kubernetes API Server (on all masters)
cd /srv/kubernetes/
mkdir -p /var/lib/kubernetes/
cp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
   service-account-key.pem service-account.pem \
   encryption-config.yaml /var/lib/kubernetes/
The kube-apiserver systemd service
The Kubernetes API server will run as a systemd service on the master nodes.
The systemd unit file for the kube-apiserver service is as follows:
cat <<EOF | sudo tee /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/var/lib/flanneld/subnet.env
ExecStart=/usr/bin/kube-apiserver \\
  --advertise-address=$(hostname -i) \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --enable-swagger-ui=true \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://172.16.254.221:2379,https://172.16.254.222:2379,https://172.16.254.223:2379 \\
  --event-ttl=1h \\
  --experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --kubelet-https=true \\
  --runtime-config=api/all \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-cluster-ip-range=100.65.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Start the kube-apiserver service
systemctl daemon-reload
systemctl restart kube-apiserver
systemctl status kube-apiserver
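A quick local check that the API server is up and serving over TLS (this mirrors the health check that nginx will proxy later in this section; if it fails, journalctl -u kube-apiserver is the first place to look):
curl --cacert /var/lib/kubernetes/ca.pem https://127.0.0.1:6443/healthz
# expected response: ok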
Configure the Kubernetes Controller Manager
The kube-controller-manager component will also run as a systemd service.
Move the kube-controller-manager kubeconfig into place:
cd /srv/kubernetes
sudo cp kube-controller-manager.kubeconfig /var/lib/kubernetes/
Create the kube-controller-manager.service systemd unit file.
The scheduler and the controller manager can be configured to talk to the API server on 127.0.0.1, so that kube-scheduler and kube-controller-manager use the API server running on the same node, or alternatively they can point to the load-balanced API server IP.
cat <<EOF | sudo tee /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/bin/kube-controller-manager \\
  --address=0.0.0.0 \\
  --cluster-cidr=24.24.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=100.65.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Start the kube-controller-manager service on the master servers.
systemctl daemon-reload
systemctl restart kube-controller-manager
systemctl status kube-controller-manager
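With --leader-elect=true, the active controller manager records its lease as an annotation on an Endpoints object in the kube-system namespace. A quick way to see which master currently holds the lease (assuming admin.kubeconfig is still in /srv/kubernetes):
kubectl --kubeconfig /srv/kubernetes/admin.kubeconfig -n kube-system \
  get endpoints kube-controller-manager -o yaml | grep holderIdentity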
Configure the kube-scheduler service on the master servers.
Please note that kube-controller-manager and kube-scheduler have to be started with the --leader-elect flag so that only one of the masters holds the lease to modify etcd at a time.
The scheduler and the controller manager can also be configured to talk to the API server on 127.0.0.1, so that kube-scheduler and kube-controller-manager use the API server running on the same node, or alternatively they can point to the load-balanced API server IP.
The kube-scheduler component will also run as a systemd service.
Move the kube-scheduler kubeconfig into place:
cd /srv/kubernetes/
sudo cp kube-scheduler.kubeconfig /var/lib/kubernetes/
# Please note that the Kubernetes RPM installation creates the file /etc/kubernetes/config, which has to be renamed so that a directory can be created with the same name.
mv /etc/kubernetes/config /etc/kubernetes/ORIGconfig
mkdir -p /etc/kubernetes/config
Create the kube-scheduler.yaml configuration file:
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: componentconfig/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
Create the kube-scheduler.service systemd unit file:
cat <<EOF | sudo tee /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Start the kube-scheduler service on the master servers.
systemctl daemon-reload
systemctl restart kube-scheduler
systemctl status kube-scheduler
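Once all three control plane services have been configured on every master, a loop over the masters confirms that the units are active (again assuming passwordless SSH from kubem1):
for U in kubem{1..3}
do
ssh $U "hostname; systemctl is-active kube-apiserver kube-controller-manager kube-scheduler"
done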
Configuration for the health check
This has to be done on all the masters.
This installs nginx on all the masters. Nginx on CentOS is available from the EPEL repository, hence the EPEL repository is installed and enabled for the nginx installation, and then removed once the nginx installation completes.
yum -y install epel-release
yum clean all
rm -rf /var/cache/yum
yum -y install nginx
yum -y remove epel-release
yum clean all
rm -rf /var/cache/yum
Create the nginx configuration file
cat > kubernetes.default.svc.cluster.local.conf <<EOF
server {
  listen 80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
    proxy_pass https://127.0.0.1:6443/healthz;
    proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF
Copy the configuration file kubernetes.default.svc.cluster.local.conf to /etc/nginx/conf.d
{
sudo cp -pr kubernetes.default.svc.cluster.local.conf \
  /etc/nginx/conf.d/kubernetes.default.svc.cluster.local.conf
}
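Before restarting nginx, the configuration syntax can be validated:
nginx -t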
Start and enable the nginx service.
systemctl restart nginx
systemctl enable nginx
systemctl status nginx
Verification of the health status of the cluster
kubectl get componentstatuses --kubeconfig admin.kubeconfig
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
Test the nginx HTTP health check proxy:
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Mon, 14 May 2018 13:45:39 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
ok
RBAC for Kubelet Authorization
This is needed so that requests from the kube-apiserver to the kubelets succeed.
Create the system:kube-apiserver-to-kubelet ClusterRole with permissions to
access the Kubelet API and perform most common tasks associated with managing
pods:
On all the master servers
cd /srv/kubernetes
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
The Kubernetes API Server
authenticates to the Kubelet as the kubernetes user using the client
certificate as defined by the --kubelet-client-certificate flag.
Bind the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user:
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
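To confirm that both objects were created, they can be listed with the admin kubeconfig:
kubectl get clusterrole system:kube-apiserver-to-kubelet --kubeconfig admin.kubeconfig
kubectl get clusterrolebinding system:kube-apiserver --kubeconfig admin.kubeconfig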
Confirm that the endpoint is responding to the requests being sent.
export KUBERNETES_PUBLIC_ADDRESS=172.16.254.201
[root@kubem1 kubernetes]# curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
{
"major": "1",
"minor": "10",
"gitVersion": "v1.10.0",
"gitCommit": "fc32d2f3698e36b93322a3465f63a14e9f0eaead",
"gitTreeState": "archive",
"buildDate": "2018-03-29T08:38:42Z",
"goVersion": "go1.9.2",
"compiler": "gc",
"platform": "linux/amd64"
}[root@kubem1 kubernetes]#
Other relevant links for this documentation.
You can click on any of the links to view them.
The Main Document - Kubernetes 1.10.0 with 3 Master and Slave nodes and SSL on CentOS7
KVM Host and Guest Preparations
SSL Certificate Generations
Configure simple external HAPROXY
Configuring ETCD with SSL on the Master servers
Creation of the POD Network information in ETCD for flanneld
Install and Configure the Master Service on the Kubernetes Master servers
Installation and Configuration of the Kubernetes Slaves
Installation and testing of kube-dns
Configure the masters not to run kubernetes scheduled PODs