Installing an on-premise Kubernetes cluster using kubeadm

Motivation and target audience

This blog post is geared towards developers and administrators who want to set up an on-premise Kubernetes cluster. It will guide you through the basics of setting up a multi-node Kubernetes cluster on Linux servers and address some of the key concerns that arise during the setup.

Kubernetes is a huge area in itself, and this post does not intend to cover all the nitty-gritty setup details. Instead, it aims to provide a convenient starting point.

Introduction

Kubernetes (K8s) is a container orchestration technology that focuses on easing the effort required to build, deploy, scale and manage containerized applications. It also provides the means to manage the complicated and dynamic life cycles of containerized applications. It was first developed at Google, but was later open sourced as a seed technology to the Cloud Native Computing Foundation (CNCF).

From Kubernetes website:

Kubernetes (k8s) is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.

Some of the distinguishing Kubernetes features include automated rollouts and rollbacks, service discovery and load balancing, self-healing, and horizontal scaling.

You can read more about Kubernetes features here. If you are interested in some case studies, you can find them here. In the next section we will start with installing the Kubernetes cluster on Linux.

Content:

1. Prerequisites
2. Install Docker, Kubernetes and other dependencies
3. Configure correct firewall rules
4. Add nodes to the cluster
5. Deploying and testing services
6. Accessing the Kubernetes cluster from the local machine
7. Where to go from here
8. Sources and references

1. Prerequisites

In this example we will set up a Kubernetes cluster with one master and two worker nodes.

Hardware and OS specifications

We will use 3 CentOS 7 servers, each with a minimum of 2 CPUs and 2 GB of RAM. You should have root privileges on the servers to install the required software packages.

Note: The hardware mentioned above is the minimum configuration for a basic Kubernetes setup. Kubernetes comes with lots of bells and whistles, and if you are installing all of them, please refer to this documentation for more details. For this example we have provisioned 3 servers with the IPs 192.168.37.48, 192.168.37.49 and 192.168.37.50.

It is important to mention here that Kubernetes works in accordance with a master-slave architecture. By default, pods will never be scheduled on the master; instead, the master acts as a coordinator that manages the different Kubernetes services, the traffic between pods and the scheduling of workloads on the nodes.

Hostname configurations

Log in to each of the servers via terminal and change the hostname by following the two steps below. For example, on server 192.168.37.48 we have set the corresponding hostname to kubernetes1.oslo.sysco.no
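On CentOS 7 these two steps boil down to setting the hostname with hostnamectl and starting a new shell so the change becomes visible; a minimal sketch:

# Step 1: set the hostname permanently (run on 192.168.37.48)
hostnamectl set-hostname kubernetes1.oslo.sysco.no
# Step 2: start a new shell so the prompt reflects the new hostname
exec bash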

Follow the same steps for the other two servers as well.

Configure host files

In order for each of our hosts to communicate with the other hosts by hostname, we need to modify the hosts file. We will add the below content to the /etc/hosts file on each server.

192.168.37.48 kubernetes1.oslo.sysco.no
192.168.37.49 kubernetes2.oslo.sysco.no
192.168.37.50 kubernetes3.oslo.sysco.no

Note: It is important that you replace the above settings with your own IPs and server aliases.

After applying the above settings, you should be able to ping the other servers from each host. For example, in our case, we can ping kubernetes2.oslo.sysco.no from the host kubernetes1.oslo.sysco.no

[root@kubernetes1 ~]# ping kubernetes2.oslo.sysco.no
PING kubernetes2.oslo.sysco.no (192.168.37.49) 56(84) bytes of data.
64 bytes from kubernetes2.oslo.sysco.no (192.168.37.49): icmp_seq=1 ttl=64 time=0.460 ms
64 bytes from kubernetes2.oslo.sysco.no (192.168.37.49): icmp_seq=2 ttl=64 time=0.229 ms
64 bytes from kubernetes2.oslo.sysco.no (192.168.37.49): icmp_seq=3 ttl=64 time=0.205 ms
64 bytes from kubernetes2.oslo.sysco.no (192.168.37.49): icmp_seq=4 ttl=64 time=0.199 ms
64 bytes from kubernetes2.oslo.sysco.no (192.168.37.49): icmp_seq=5 ttl=64 time=0.262 ms
--- kubernetes2.oslo.sysco.no ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.199/0.271/0.460/0.097 ms

Verify MAC and product_uuid

We need to verify that each host in the cluster has a unique MAC address and product_uuid. Kubernetes uses these values to uniquely identify the nodes in the cluster, and if they are not unique to each node, the installation process may fail.
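These values can be inspected with the commands below, taken from the standard kubeadm pre-flight documentation; run them on every node and compare the results:

# List the network interfaces along with their MAC addresses
ip link
# Print this machine's product_uuid
sudo cat /sys/class/dmi/id/product_uuid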

Note: It is important that these values are unique for each node that forms the cluster. You should see a similar pattern for your setup.

Configure OS level settings

So far we have verified our hardware; now it is time to adjust some OS level settings in order to install Kubernetes successfully.
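The exact settings depend on your environment, but as a minimal sketch: kubeadm refuses to run with swap enabled, and on CentOS 7 SELinux is commonly put into permissive mode so that pods can access the host filesystem:

# Disable swap now and keep it disabled across reboots
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# Put SELinux into permissive mode now and across reboots
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config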

2. Install Docker, Kubernetes and other dependencies

Now that we have successfully completed the hardware and OS level checks, it is time to install the required software on our servers. Please go through the instructions below to complete the software installation.

Install Docker

We will install docker-ce (the community edition) for this example. You can check the various available versions of Docker here, and the steps to install Docker on different operating systems are described in detail here. You need to run the below script on each of the servers participating in the Kubernetes cluster.
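As a sketch, the official CentOS 7 steps set up Docker's yum repository and install the docker-ce package; you may want to pin a Docker version validated for your Kubernetes release:

# Add the Docker CE repository
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install and start Docker
yum install -y docker-ce
systemctl enable --now docker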

Install Kubernetes components

We will use kubeadm to bootstrap the Kubernetes cluster for our example. We will start by installing the basic Kubernetes components, namely kubectl, kubeadm and kubelet. We need to run the below script on each of our servers.
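A sketch based on the yum repository from the upstream kubeadm installation guide:

# Add the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Install the components and start kubelet
yum install -y kubelet kubeadm kubectl
systemctl enable --now kubelet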

Initialize master

Log in to the server that you have decided to make the master. In our case we want kubernetes1.oslo.sysco.no to act as the master, so we log in over ssh and run the below command.

kubeadm init --apiserver-advertise-address=<server_ip> --pod-network-cidr=10.244.0.0/16

Replace server_ip above with the IP of your master. The other supporting cluster services/nodes will connect to this IP while forming the cluster. For example, in our case it looks like:

[root@kubernetes1 ~]# kubeadm init --apiserver-advertise-address=192.168.37.48 --pod-network-cidr=10.244.0.0/16

Once the command succeeds, you will get a token which the other nodes will use to join the cluster. It will be displayed on your terminal and will look something like:

kubeadm join 192.168.37.48:6443 --token 8hp10q.i4ln2b1ogof374aj --discovery-token-ca-cert-hash sha256:6a59c9b03fa971aef94d61a4f4c1a6b085308f88aab4db1a4affda8d65987867

You should carefully save this join command; we will need it when adding the worker nodes. Your Kubernetes master should now be initialized successfully.
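If you lose the join command, you can regenerate a token together with the full join command on the master at any time:

kubeadm token create --print-join-command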

Create kube configuration in home directory

To start using your cluster, you need to run the following as a regular user.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install flannel network

You must deploy a pod network before anything will actually function properly. Next, we will deploy the flannel network to the Kubernetes cluster using the kubectl command. To check the other networking options that can be used with Kubernetes, please have a look here.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check if all the components are deployed properly

[root@kubernetes1 ~]# kubectl get nodes
NAME                        STATUS     ROLES    AGE     VERSION
kubernetes1.oslo.sysco.no   Ready      master   7m18s   v1.13.1

Also check if all required pods are running properly.

[root@kubernetes1 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-kjmxn                            1/1     Running   0          63m
kube-system   coredns-86c58d9df4-lfx4p                            1/1     Running   0          63m
kube-system   etcd-kubernetes1.oslo.sysco.no                      1/1     Running   0          62m
kube-system   kube-apiserver-kubernetes1.oslo.sysco.no            1/1     Running   0          62m
kube-system   kube-controller-manager-kubernetes1.oslo.sysco.no   1/1     Running   0          63m
kube-system   kube-flannel-ds-amd64-mjn4j                         1/1     Running   0          55m
kube-system   kube-proxy-w8rfz                                    1/1     Running   0          55m
kube-system   kube-scheduler-kubernetes1.oslo.sysco.no            1/1     Running   0          62m

3. Configure correct firewall rules

In order to enable efficient communication between the nodes, we need to configure a few firewall rules on each node.

Run the below commands so that bridged IP traffic is passed to iptables:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Then apply these settings by running the below command:

sysctl --system

You should see your new k8s.conf being applied in the output:

[root@kubernetes1 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...

Once these settings are applied, create rules to allow traffic on a few ports used by Kubernetes.

firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=443/tcp --permanent
firewall-cmd --zone=public --add-port=18080/tcp --permanent
firewall-cmd --zone=public --add-port=10254/tcp --permanent
firewall-cmd --reload
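Depending on your setup, the nodes typically also need the kubelet port and the NodePort service range open; these are listed in the kubeadm "required ports" documentation:

firewall-cmd --zone=public --add-port=10250/tcp --permanent
firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent
firewall-cmd --reload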

Run the above commands on each of the nodes. To get a better understanding of how networking works in Kubernetes, please refer to the below links:

Alternative workaround: Use the below steps only if you are setting up a demo cluster OR using the cluster in a controlled lab environment. Do not use this alternative in production settings.

In a controlled lab environment, you can skip the above steps by simply disabling the firewall on each node. To do this, log in to each server and run the below commands in sequence:

systemctl stop firewalld
systemctl disable firewalld

You should see output similar to the below on each of the nodes:

[root@kubernetes1 ~]# systemctl stop firewalld
[root@kubernetes1 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.

Once again, please note that firewalls should never be disabled on production clusters.

4. Add nodes to the cluster

We have successfully initialized our cluster, and in the previous section we verified that the master node is up and running. It is now time to add the nodes kubernetes2.oslo.sysco.no and kubernetes3.oslo.sysco.no to our cluster. In order to do that, ssh to each of the nodes and run the kubeadm join command that we saved earlier.

[root@kubernetes2 ~]#  kubeadm join 192.168.37.48:6443 --token 8hp10q.i4ln2b1ogof374aj --discovery-token-ca-cert-hash sha256:6a59c9b03fa971aef94d61a4f4c1a6b085308f88aab4db1a4affda8d65987867
[root@kubernetes3 ~]#  kubeadm join 192.168.37.48:6443 --token 8hp10q.i4ln2b1ogof374aj --discovery-token-ca-cert-hash sha256:6a59c9b03fa971aef94d61a4f4c1a6b085308f88aab4db1a4affda8d65987867

Wait a few minutes, then log in to the master node. Run the below commands again to verify that the nodes have joined the cluster.

kubectl get nodes
kubectl get pods --all-namespaces

[root@kubernetes1 ~]# kubectl get nodes
NAME                        STATUS   ROLES    AGE     VERSION
kubernetes1.oslo.sysco.no   Ready    master   12m     v1.13.1
kubernetes2.oslo.sysco.no   Ready    <none>   5m37s   v1.13.1
kubernetes3.oslo.sysco.no   Ready    <none>   4m4s    v1.13.1

[root@kubernetes1 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-kjmxn                            1/1     Running   0          63m
kube-system   coredns-86c58d9df4-lfx4p                            1/1     Running   0          63m
kube-system   etcd-kubernetes1.oslo.sysco.no                      1/1     Running   0          62m
kube-system   kube-apiserver-kubernetes1.oslo.sysco.no            1/1     Running   0          62m
kube-system   kube-controller-manager-kubernetes1.oslo.sysco.no   1/1     Running   0          63m
kube-system   kube-flannel-ds-amd64-gm72r                         1/1     Running   0          56m
kube-system   kube-flannel-ds-amd64-mjn4j                         1/1     Running   0          55m
kube-system   kube-flannel-ds-amd64-nhz9b                         1/1     Running   0          58m
kube-system   kube-proxy-2qqjc                                    1/1     Running   0          56m
kube-system   kube-proxy-l69ld                                    1/1     Running   0          63m
kube-system   kube-proxy-w8rfz                                    1/1     Running   0          55m
kube-system   kube-scheduler-kubernetes1.oslo.sysco.no            1/1     Running   0          62m

kubernetes2 and kubernetes3 have now been added to the Kubernetes cluster.

Note: As mentioned earlier, no application will be deployed onto the master node kubernetes1.oslo.sysco.no; the master will only be utilized for coordination between the nodes.

5. Deploying and testing services

Now that we have successfully set up our Kubernetes cluster, it is time to deploy some applications and test them.

Note: In this example, we will create a simple nginx deployment and expose it as a service of type="NodePort". This will schedule our pods on the Kubernetes nodes and assign the service a random port, through which we will be able to access the pods scheduled on the different nodes.

Before starting, log in to the master node via ssh. We will operate from the master node to deploy and test.

Create deployments

From the master node, run below command to create a standalone nginx deployment on kubernetes:

kubectl create deployment nginx --image=nginx

This will create a single deployment on the cluster. What we want is to have at least 2 replicas of this deployment so that we can test it on both nodes. In order to do this, we need to edit the existing deployment configuration and change the replica count from 1 to 2. Run the command below and change the "spec.replicas" property to 2.

kubectl edit deployment nginx
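Alternatively, the same change can be made non-interactively with kubectl scale:

kubectl scale deployment nginx --replicas=2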

kubectl will give you a confirmation that the deployment configuration has been edited. You can also confirm that the number of running pods is now 2.

[root@kubernetes1 ~]# kubectl edit deployment nginx
deployment.extensions/nginx edited

[root@kubernetes1 ~]# kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
nginx-5c7588df-bhgvl   1/1     Running   0          22m
nginx-5c7588df-vvs55   1/1     Running   0          23m

Expose deployment as a service

In order to access the nginx pods, we need to expose them as a service. Use the following command to expose the nginx deployment as a service.

kubectl create service nodeport nginx --tcp=80:80

This will expose our deployment as a service available on both nodes. The port will be a random port from the NodePort range (default: 30000-32767), and each node will proxy that port (the same port number on every node) into the service.

You can check the status of your service as below.

[root@kubernetes1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        3d22h
nginx        NodePort    10.98.240.53   <none>        80:32701/TCP   21m

Testing the services

The nginx service is now available on both nodes. From the master node, run the below commands to send an HTTP request to port 32701 (your NodePort will differ; use the one shown by kubectl get svc).

curl kubernetes2.oslo.sysco.no:32701
curl kubernetes3.oslo.sysco.no:32701

In both cases you will get a response like the below:

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

These services can also be accessed from outside using a browser or terminal. To quickly test, we can use the terminal on our local machine and run the below commands.

prakhar@tardis:~|⇒  curl 192.168.37.49:32701
prakhar@tardis:~|⇒  curl 192.168.37.50:32701

In both cases, we will get output similar to the above. On accessing the URLs via a browser, you should see a page like the one below.

[Image: nginx-on-k8s, the nginx welcome page in a browser]

Our Kubernetes setup is now ready to be used.

6. Accessing the Kubernetes cluster from the local machine

So far we have configured our cluster and we can make deployments via kubectl, but to do that we need to log in to our master node. It is actually possible to proxy connections to the master from the local machine and administer the cluster from localhost. We need to perform the below steps in order to enable that configuration.

Install kubectl on local machine

If you are using a Debian-based local machine like me, you can use the below steps to install kubectl on your local machine.

sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl

If you are using another operating system, the steps to install kubectl for your OS are described here.

Copy kubernetes configuration from the master

You can manually copy the cluster configuration, which is the same content as $HOME/.kube/config on your master node, and save it locally. If you are using a Linux-based system, you can run the below command to copy /etc/kubernetes/admin.conf from the master.

scp root@192.168.37.48:/etc/kubernetes/admin.conf .

This will copy the admin.conf file from the remote master to your current directory. Make sure to replace the IP above with the IP of your master node. You will be prompted to enter the password.

Proxy connections to master using kubectl proxy

Now that we have the config file, we can connect to the master by following below steps.

Open a terminal and proxy the kubectl connections to the master. Remember to point to the admin.conf file that we downloaded in the previous step.

kubectl --kubeconfig ./admin.conf proxy

You should get output like the below:

prakhar@tardis:~|⇒  kubectl --kubeconfig ./admin.conf proxy
Starting to serve on 127.0.0.1:8001

Connect to master using admin.conf

Now you can connect to the master node using kubectl like below.

prakhar@tardis:~|⇒  kubectl --kubeconfig ./admin.conf  get pods                           
NAME                   READY   STATUS    RESTARTS   AGE
nginx-5c7588df-bhgvl   1/1     Running   0          45h
nginx-5c7588df-vvs55   1/1     Running   0          46h

prakhar@tardis:~|⇒  kubectl --kubeconfig ./admin.conf  get nodes
NAME                        STATUS   ROLES    AGE     VERSION
kubernetes1.oslo.sysco.no   Ready    master   4d20h   v1.13.1
kubernetes2.oslo.sysco.no   Ready    <none>   4d20h   v1.13.1
kubernetes3.oslo.sysco.no   Ready    <none>   4d20h   v1.13.1

Now you will be able to connect and deploy applications from localhost.

7. Where to go from here

In this blog post we have only scratched the surface of setting up Kubernetes. As mentioned before, Kubernetes provides a variety of features that we have not focused on here. Some good-to-have additions include load balancing for an on-premise Kubernetes cluster, security considerations, optimizing the cluster for DevOps, and many more.

We will cover these concepts in upcoming blog posts. For those of you who are interested in reading further, here are some quick links from the Kubernetes website that might help:

8. Sources and references