Configuration of ETCD with SSL on the Master servers
Configuring ETCD
ETCD, used as the key-value store here, will run as a systemd
service on all the masters. The section below shows the installation and
configuration needed to run ETCD in clustered mode with SSL
certificates.
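Before pointing etcd at the certificates, it is worth a quick sanity check that the files generated in the SSL certificate section are in place and signed by our CA. This is a small extra check, not part of the original sequence; the paths are the ones used throughout this guide.
openssl verify -CAfile /srv/kubernetes/ca.pem /srv/kubernetes/kubernetes.pem
openssl x509 -in /srv/kubernetes/kubernetes.pem -noout -subject -dates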
Our Kubernetes master nodes are where the etcd services will run,
so the steps below have to be performed on all the masters.
yum -y install etcd
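To confirm the installation and see which etcd version the CentOS repository delivered (just a quick check, not strictly required):
rpm -q etcd
etcd --version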
Configure the systemd unit file for the etcd service on all the masters.
cat <<EOF | sudo tee /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/bin/etcd \\
  --name $(hostname -s) \\
  --cert-file=/srv/kubernetes/kubernetes.pem \\
  --key-file=/srv/kubernetes/kubernetes-key.pem \\
  --peer-cert-file=/srv/kubernetes/kubernetes.pem \\
  --peer-key-file=/srv/kubernetes/kubernetes-key.pem \\
  --trusted-ca-file=/srv/kubernetes/ca.pem \\
  --peer-trusted-ca-file=/srv/kubernetes/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://$(hostname -i):2380 \\
  --listen-peer-urls https://$(hostname -i):2380 \\
  --listen-client-urls https://$(hostname -i):2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://$(hostname -i):2379 \\
  --initial-cluster-token etcd-cluster-ssl \\
  --initial-cluster kubem1=https://172.16.254.221:2380,kubem2=https://172.16.254.222:2380,kubem3=https://172.16.254.223:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
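Note that because the heredoc delimiter EOF is unquoted, $(hostname -s) and $(hostname -i) are expanded at the moment the file is written, so each master bakes its own name and IP into its unit file. If hostname -i returns 127.0.0.1 or more than one address on your hosts, the generated URLs will be wrong, so confirm both values on each master:
hostname -s
hostname -i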
Confirm that the systemd unit file looks good on the masters.
cat /usr/lib/systemd/system/etcd.service
Start the etcd service and confirm that it is up on all the masters.
systemctl daemon-reload
systemctl restart etcd
systemctl status etcd
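Note that the steps above start etcd but do not enable the unit, so it will not come back after a reboot. If you want that, you can additionally enable it (an extra step beyond the original sequence):
systemctl enable etcd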
Check that the etcd cluster and its members are fine.
Export the ETCDCTL_ENDPOINTS environment variable.
export ETCDCTL_ENDPOINTS=https://172.16.254.221:2379,https://172.16.254.222:2379,https://172.16.254.223:2379
See the cluster member list.
etcdctl --cert-file=/srv/kubernetes/kubernetes.pem \
  --key-file=/srv/kubernetes/kubernetes-key.pem \
  --ca-file=/srv/kubernetes/ca.pem \
  member list
You should see output similar to this:
50de99a6cc8a960e: name=kubem1 peerURLs=https://172.16.254.221:2380 clientURLs=https://172.16.254.221:2379 isLeader=true
c429a41cbc578b2a: name=kubem2 peerURLs=https://172.16.254.222:2380 clientURLs=https://172.16.254.222:2379 isLeader=false
e536884b29f55799: name=kubem3 peerURLs=https://172.16.254.223:2380 clientURLs=https://172.16.254.223:2379 isLeader=false
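As an additional check, the v2 etcdctl shipped with the CentOS package also provides a cluster-health subcommand; with the same certificate flags it should report all three members as healthy:
etcdctl --cert-file=/srv/kubernetes/kubernetes.pem \
  --key-file=/srv/kubernetes/kubernetes-key.pem \
  --ca-file=/srv/kubernetes/ca.pem \
  cluster-health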
Add the ETCDCTL_ENDPOINTS variable to /etc/profile on the servers.
echo 'export ETCDCTL_ENDPOINTS=https://172.16.254.221:2379,https://172.16.254.222:2379,https://172.16.254.223:2379' >> /etc/profile
If you have edited /etc/profile on one of the masters, copy it over to the other
masters. In my case the masters are identically configured, so copying
/etc/profile to all the servers causes no problems.
for U in kubem{1..3} kuben{1..3} kube-haproxy; do
  scp -pr /etc/profile $U:/etc/
done
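To verify that each master actually picks the variable up, a quick loop can echo it back. This assumes the same passwordless SSH the scp loop relies on; /etc/profile is sourced explicitly because a non-login ssh command does not read it.
for U in kubem{1..3}; do
  ssh $U 'source /etc/profile && echo $ETCDCTL_ENDPOINTS'
done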
Create the POD Network Information in the ETCD Store to be used by Kubernetes
Create the POD network information in ETCD to be used by flanneld.
etcdctl --cert-file=/srv/kubernetes/kubernetes.pem \
  --key-file=/srv/kubernetes/kubernetes-key.pem \
  --ca-file=/srv/kubernetes/ca.pem \
  set /atomic.io/network/config '{ "Network": "24.24.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"} }'
Ensure that the information has been put into ETCD.
etcdctl --cert-file=/srv/kubernetes/kubernetes.pem \
  --key-file=/srv/kubernetes/kubernetes-key.pem \
  --ca-file=/srv/kubernetes/ca.pem \
  get /atomic.io/network/config
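For reference, with "Network": "24.24.0.0/16" and "SubnetLen": 24, flanneld will carve one /24 per node out of the /16 (for example 24.24.1.0/24, 24.24.2.0/24, and so on). If you prefer the stored JSON pretty-printed, the stock python on CentOS 7 can format it:
etcdctl --cert-file=/srv/kubernetes/kubernetes.pem \
  --key-file=/srv/kubernetes/kubernetes-key.pem \
  --ca-file=/srv/kubernetes/ca.pem \
  get /atomic.io/network/config | python -m json.tool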
Configuration of Flannel
Please note that we will run flanneld as a systemd service, not as a POD.
Since flanneld provides the overlay network for the Kubernetes slaves, we will
run it only on the slaves. If you plan to use the master servers to also
schedule Kubernetes containers and PODs, then flanneld needs to run on the
masters as well; in that case the masters would carry the node role in
addition to the master role, and the node configuration would also have to be
done on the masters.
For now, to keep things logically simple, we will have the masters run just as
masters.
Other relevant links for this documentation.
You can click on any of the links to view them.
The Main Document - Kubernetes 1.10.0 with 3 Master and Slave nodes and SSL on CentOS7
KVM Host and Guest Preparations
SSL Certificate Generations
Configure simple external HAPROXY
Configuring ETCD with SSL on the Master servers
Creation of the POD Network information in ETCD for flanneld
Install and Configure the Master Service on the Kubernetes Master servers
Installation and Configuration of the Kubernetes Slaves
Installation and testing of kube-dns
Configure the masters not to run kubernetes scheduled PODs