Kubernetes On-Prem Demo Using GNS3, Kubespray, Nginx Ingress, FRR And Metallb — Part 1

Christopher Adigun
Dec 26, 2018 · 11 min read


I started my IT career as a network engineer around 2003. Back then I used GNS3 heavily to emulate both Cisco and Juniper devices, but as the years rolled on I eventually switched to being a Linux/infrastructure engineer, so network engineering was no longer my core area. I never stopped following it, though, and from time to time I still had to troubleshoot IP/switching/OSPF issues.

For about two years now my focus has been on the DevOps tool chain, i.e. Kubernetes, CI/CD, service mesh, etc.

So this blog post demonstrates how a Kubernetes cluster can be deployed in an on-prem environment. I could have used VirtualBox/KVM entirely, but the prospect of using GNS3 makes it much more interesting: it makes creating a topology very easy (not to mention the support for KVM via QEMU, all within the same interface).

The scenario is given below:

For the master and worker nodes, I suggest you use at least 3 GB of RAM; give the worker nodes more (I used 5 GB each).

The open-source projects that will be used are given below:

GNS3 — A full emulation/simulation system

Kubespray — A production grade kubernetes deployment solution that uses Ansible (https://github.com/kubernetes-sigs/kubespray)

FRR — An open-source routing suite (a fork of Quagga); only its BGP functionality will be used here. The GNS3 appliance was used for this.

MetalLB — A load-balancer implementation that provides the Kubernetes LoadBalancer service type in on-prem environments (https://metallb.universe.tf/)

Ubuntu 18 Cloud image — This is the base OS that will be used to install both the kubernetes Master nodes and workers. The GNS3 appliance was used for this.

N.B — The default hard disk size of the Ubuntu server GNS3 appliance is not sufficient for our demo; I had to increase it with the command:

qemu-img resize ubuntu-18.04-server-cloudimg-amd64.img +28G

Nginx Ingress Controller — A layer-7 application proxy. Its importance here is to reduce the number of load balancers you need to expose your services: instead of creating multiple load balancers, you create one ingress controller with a LoadBalancer service type and then expose your services using Ingress resources (see the sketch after this list).

Firefox Image — This appliance is used for testing. The GNS3 appliance was used for this.
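
To illustrate the idea, below is a minimal Ingress sketch: a single Nginx ingress controller (exposed through one LoadBalancer service) routes traffic for two hypothetical services based on the Host header. The host names and service names (app1-svc, app2-svc) are placeholders of my own, and extensions/v1beta1 is the Ingress API version current for the Kubernetes 1.13 release used in this demo; the controller itself is only installed in part two.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app1-svc
          servicePort: 80
  - host: app2.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app2-svc
          servicePort: 80

Both host names can point to the single load-balancer IP of the ingress controller, so one external IP can serve many applications.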

STEP ONE — Configure the Ubuntu 18 VMs with the proper IP addresses (static) and Hostname

Example of setting up the master (same steps can be used for the worker nodes):

hostnamectl set-hostname k8s-master-1

To set a static IP we can use netplan. First, edit /etc/netplan/50-cloud-init.yaml:

network:
  version: 2
  ethernets:
    ens3:
      dhcp4: no
      addresses: [10.20.20.3/24]
      gateway4: 10.20.20.1
      nameservers:
        addresses: [8.8.8.8,8.8.4.4]
      match:
        macaddress: 0c:82:b4:f3:96:00
      set-name: ens3

Then apply using: sudo netplan apply

Check the IP using: ip address
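
As an optional sanity check, you can also confirm that the default gateway and external connectivity work (the addresses below are the ones from the netplan file above; Internet access is needed later for package and image downloads):

ping -c 3 10.20.20.1
ping -c 3 8.8.8.8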

Tip — To minimize start-up time on the Ubuntu 18 image: I noticed it normally waits for some time while booting, showing the message “A start job is running for wait for network to be configured” (there is an Ask Ubuntu question about this message for Ubuntu Server 17.10).

You can avoid this by disabling and masking the systemd-networkd-wait-online daemon:

systemctl disable systemd-networkd-wait-online.service
systemctl mask systemd-networkd-wait-online.service

Further info can be found in that Ask Ubuntu question.

STEP TWO — Prepare The Deployment Node (this is where we will initiate the kubernetes installation).

The deployment node is responsible for installing the Kubernetes cluster; it will not be part of the cluster itself. The steps are as follows:

Install necessary software utilities:

sudo apt-get update

sudo apt-get install git python python-pip -y

Generate an SSH key pair, which is needed for passwordless login:

ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ubuntu/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ubuntu/.ssh/id_rsa.
Your public key has been saved in /home/ubuntu/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:hOiYjTtDlc5+FKeRwPKfnYArbnhruLU+fAtS9wHt3fI ubuntu@oam-deploy-host
The key's randomart image is:
+---[RSA 2048]----+
| .. |
| . .= o |
| o=.* o |
| Xoo.B . |
| * *o=+S.. |
| o.+.oo.oo |
|.=*.. o E |
|ooB=.. |
|.*++.. |
+----[SHA256]-----+

STEP THREE — Prepare the kubernetes Nodes

Python needs to be installed on the Kubernetes nodes, as Ansible depends on it:

sudo apt-get update

sudo apt-get install -y python

Also generate an SSH key pair on each node (the output is similar to the one shown above):

ssh-keygen -t rsa

STEP FOUR — Kubernetes Installation

In this step we need to clone the Kubespray repo, exchange SSH keys (for passwordless login to the Kubernetes nodes), and create a hosts file that Kubespray will use (the hosts file contains the details of the nodes we want to use, i.e. master, etcd and worker nodes). All of this will be done on the deployment node.

Copy the SSH public key to the Kubernetes nodes (the default password is ubuntu):

ubuntu@oam-deploy-host:~$ ssh-copy-id 10.20.20.3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ubuntu/.ssh/id_rsa.pub"
The authenticity of host '10.20.20.3 (10.20.20.3)' can't be established.
ECDSA key fingerprint is SHA256:E+hbjEts20+OKoDjXyTooSBzsAL3f67oMF4aBuLpml0.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ubuntu@10.20.20.3's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '10.20.20.3'"
and check to make sure that only the key(s) you wanted were added.
====================================================================
ubuntu@oam-deploy-host:~$ ssh-copy-id 10.20.20.4
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ubuntu/.ssh/id_rsa.pub"
The authenticity of host '10.20.20.4 (10.20.20.4)' can't be established.
ECDSA key fingerprint is SHA256:yR3DDdFU2jpbMqzUTQ6ha18rAvenNtAhD9INXIod3GU.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ubuntu@10.20.20.4's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '10.20.20.4'"
and check to make sure that only the key(s) you wanted were added.

====================================================================
ubuntu@oam-deploy-host:~$
ubuntu@oam-deploy-host:~$ ssh-copy-id 10.20.20.5
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ubuntu/.ssh/id_rsa.pub"
The authenticity of host '10.20.20.5 (10.20.20.5)' can't be established.
ECDSA key fingerprint is SHA256:cZVW94cZ2/vhYvYKYKEYksXsFjoQixm45kVSXdq8I+I.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ubuntu@10.20.20.5's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '10.20.20.5'"
and check to make sure that only the key(s) you wanted were added.

You should now be able to log in to any of the Kubernetes nodes without entering your password.
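
A quick optional way to verify this is to run a command on each node over SSH; it should return without asking for a password:

ssh 10.20.20.3 hostname
ssh 10.20.20.4 hostname
ssh 10.20.20.5 hostname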

Clone the Kubespray git repo:

ubuntu@oam-deploy-host:~$ 
ubuntu@oam-deploy-host:~$ git clone https://github.com/kubernetes-sigs/kubespray.git
Cloning into 'kubespray'...
remote: Enumerating objects: 57, done.
remote: Counting objects: 100% (57/57), done.
remote: Compressing objects: 100% (49/49), done.
remote: Total 31989 (delta 24), reused 15 (delta 7), pack-reused 31932
Receiving objects: 100% (31989/31989), 9.24 MiB | 1.04 MiB/s, done.
Resolving deltas: 100% (17812/17812), done.

Change into the Kubespray directory and install the required Python packages:

ubuntu@oam-deploy-host:~$ cd kubespray/
ubuntu@oam-deploy-host:~/kubespray$
ubuntu@oam-deploy-host:~/kubespray$ sudo pip install -r requirements.txt
The directory '/home/ubuntu/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip .
The directory '/home/ubuntu/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with su.
Collecting ansible!=2.7.0,>=2.5.0 (from -r requirements.txt (line 1))
Downloading https://files.pythonhosted.org/packages/56/fb/b661ae256c5e4a5c42859860f59f9a1a0b82fbc481306b30e3c5159d519d/ansible-2.7.5.tar.gz (11.8MB)
100% |████████████████████████████████| 11.8MB 81kB/s
Collecting jinja2>=2.9.6 (from -r requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl (126kB)
100% |████████████████████████████████| 133kB 1.2MB/s
Collecting netaddr (from -r requirements.txt (line 3))
Downloading https://files.pythonhosted.org/packages/ba/97/ce14451a9fd7bdb5a397abf99b24a1a6bb7a1a440b019bebd2e9a0dbec74/netaddr-0.7.19-py2.py3-none-any.whl (1.6MB)
100% |████████████████████████████████| 1.6MB 618kB/s
Collecting pbr>=1.6 (from -r requirements.txt (line 4))
Downloading https://files.pythonhosted.org/packages/f3/04/fddc1c2dd75b256eda4d360024692231a2c19a0c61ad7f4a162407c1ab58/pbr-5.1.1-py2.py3-none-any.whl (106kB)
100% |████████████████████████████████| 112kB 1.0MB/s


.......

Installing collected packages: PyYAML, MarkupSafe, jinja2, pycparser, cffi, pynacl, pyasn1, bcrypt, paramiko, ansible, netaddr, pbr, urllib3, certifi, chardet, requests, hvac
Running setup.py install for PyYAML ... done
Running setup.py install for pycparser ... done
Running setup.py install for ansible ... done
Successfully installed MarkupSafe-1.1.0 PyYAML-3.13 ansible-2.7.5 bcrypt-3.1.5 certifi-2018.11.29 cffi-1.11.5 chardet-3.0.4 hvac-0.7.1 jinja2-2.10 netaddr-0.7.19 paramiko-2.4.2 pbr-5.1.1 pyasn1-0.4.4 pycparser-1

Copy the sample inventory folder, which contains our hosts file; the inventory folder also contains other files that you can use to customize your Kubernetes installation:

# Copy ``inventory/sample`` as ``inventory/mycluster``
cp -rfp inventory/sample inventory/mycluster
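
The group_vars files inside the copied inventory are where you would tweak the installation. Purely as an illustration (I kept the defaults for this demo, and the exact file layout can differ between Kubespray versions), a few commonly changed variables in inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml look like this:

# Excerpt; the values shown match what this demo ends up deploying
kube_version: v1.13.1                  # Kubernetes version to deploy
kube_network_plugin: calico            # CNI plugin (calico is the default)
kube_service_addresses: 10.233.0.0/18  # Service IP range
kube_pods_subnet: 10.233.64.0/18       # Pod IP range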

Modify inventory/mycluster/hosts.ini like this:

# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
# node1 ansible_host=95.54.0.12 # ip=10.3.0.1 etcd_member_name=etcd1
# node2 ansible_host=95.54.0.13 # ip=10.3.0.2 etcd_member_name=etcd2
# node3 ansible_host=95.54.0.14 # ip=10.3.0.3 etcd_member_name=etcd3
# node4 ansible_host=95.54.0.15 # ip=10.3.0.4 etcd_member_name=etcd4
# node5 ansible_host=95.54.0.16 # ip=10.3.0.5 etcd_member_name=etcd5
# node6 ansible_host=95.54.0.17 # ip=10.3.0.6 etcd_member_name=etcd6

# ## configure a bastion host if your nodes are not directly reachable
# bastion ansible_host=x.x.x.x ansible_user=some_user

[kube-master]
10.20.20.3

[etcd]
10.20.20.3

[kube-node]
10.20.20.4
10.20.20.5

[k8s-cluster:children]
kube-master
kube-node

We are using one node for both master and etcd; this is okay in a demo like this, but in production use at least three, and make sure the number of etcd nodes is odd, as this is very important to avoid instability in your cluster (more info at https://coreos.com/etcd/docs/latest/v2/admin_guide.html).
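
For reference, a production-style hosts.ini with three combined master/etcd nodes (a hypothetical sketch following the same format as above; 10.20.20.6 and 10.20.20.7 are made-up addresses that do not exist in this lab) could look like this:

[kube-master]
10.20.20.3
10.20.20.6
10.20.20.7

[etcd]
10.20.20.3
10.20.20.6
10.20.20.7

[kube-node]
10.20.20.4
10.20.20.5

[k8s-cluster:children]
kube-master
kube-node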

Start the Kubernetes installation!

ansible-playbook -i inventory/mycluster/hosts.ini --become --become-user=root cluster.yml

This will take some time; you should see a recap like the following if the installation completed successfully:

PLAY RECAP *********************************************************************
10.20.20.3 : ok=400 changed=121 unreachable=0 failed=0
10.20.20.4 : ok=253 changed=73 unreachable=0 failed=0
10.20.20.5 : ok=252 changed=73 unreachable=0 failed=0
localhost : ok=1 changed=0 unreachable=0 failed=0

Wednesday 26 December 2018 15:35:31 +0000 (0:00:00.026) 0:29:00.920 ****
===============================================================================
download : file_download | Download item ------------------------------ 378.13s
bootstrap-os : Bootstrap | Install python 2.x and pip ----------------- 233.86s
download : container_download | download images for kubeadm config images - 160.98s
download : container_download | Download containers if pull is required or told to always pull (all nodes) - 122.78s
container-engine/docker : ensure docker packages are installed --------- 94.66s
download : file_download | Download item ------------------------------- 77.17s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 69.15s
download : file_download | Download item ------------------------------- 59.43s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 47.40s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 46.75s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 34.02s
kubernetes/master : kubeadm | Initialize first master ------------------ 30.94s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 27.74s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 24.85s
kubernetes/preinstall : Install packages requirements ------------------ 17.22s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 16.46s
download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 11.94s
network_plugin/calico : Calico | Copy cni plugins from calico/cni container -- 10.98s
container-engine/docker : Docker | pause while Docker restarts --------- 10.12s
kubernetes/kubeadm : Restart all kube-proxy pods to ensure that they load the new configmap --- 8.62s

To confirm kubernetes cluster health:

First of all SSH into the master:

ssh 10.20.20.3

Create the .kube folder and change into it:

mkdir ~/.kube && cd ~/.kube

Copy /etc/kubernetes/admin.conf to ~/.kube:

sudo cp /etc/kubernetes/admin.conf ./config

Change the ownership of the file:

sudo chown ubuntu:ubuntu config

Run basic kubectl commands to confirm the status of your cluster:

ubuntu@10:~/.kube$ kubectl get no
NAME STATUS ROLES AGE VERSION
10.20.20.3 Ready master 9m8s v1.13.1
10.20.20.4 Ready node 8m48s v1.13.1
10.20.20.5 Ready node 8m48s v1.13.1


ubuntu@10:~/.kube$ kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}


ubuntu@10:~/.kube$ kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-588d66d49-mbswb 1/1 Running 0 9m52s
kube-system calico-node-gz6ws 1/1 Running 0 9m58s
kube-system calico-node-rq87v 1/1 Running 0 9m58s
kube-system calico-node-vz5rs 1/1 Running 0 9m58s
kube-system coredns-6fd7dbf94c-7fnql 1/1 Running 0 9m25s
kube-system coredns-6fd7dbf94c-nd2hf 1/1 Running 0 9m20s
kube-system dns-autoscaler-5b4847c446-48tcd 1/1 Running 0 9m22s
kube-system kube-apiserver-10.20.20.3 1/1 Running 0 10m
kube-system kube-controller-manager-10.20.20.3 1/1 Running 0 10m
kube-system kube-proxy-n9s7q 1/1 Running 0 9m32s
kube-system kube-proxy-x7pfr 1/1 Running 0 9m35s
kube-system kube-proxy-xpglc 1/1 Running 0 10m
kube-system kube-scheduler-10.20.20.3 1/1 Running 0 10m
kube-system kubernetes-dashboard-57bf7b5bf6-fgtqm 1/1 Running 0 9m20s
kube-system nginx-proxy-10.20.20.4 1/1 Running 0 10m
kube-system nginx-proxy-10.20.20.5 1/1 Running 0 10m

Copy the kubeconfig file to the deployment node since this is supposed to be our OAM system:

ubuntu@oam-deploy-host:~$ mkdir .kube
ubuntu@oam-deploy-host:~$
ubuntu@oam-deploy-host:~$
ubuntu@oam-deploy-host:~$ scp 10.20.20.3:~/.kube/config .kube/
config 100% 5462 5.4MB/s 00:00
ubuntu@oam-deploy-host:~$
ubuntu@oam-deploy-host:~$

## Install kubectl on the deployment node
ubuntu@oam-deploy-host:~$ sudo snap install kubectl --classic
2018-12-26T15:50:17Z INFO Waiting for restart...
kubectl 1.13.1 from 'canonical' installed

## Run kubectl to confirm you can reach the Kube Master API

ubuntu@oam-deploy-host:~$ kubectl get no
NAME STATUS ROLES AGE VERSION
10.20.20.3 Ready master 17m v1.13.1
10.20.20.4 Ready node 17m v1.13.1
10.20.20.5 Ready node 17m v1.13.1

ubuntu@oam-deploy-host:~$ kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}

ubuntu@oam-deploy-host:~$ kubectl get namespace
NAME STATUS AGE
default Active 19m
kube-public Active 19m
kube-system Active 19m

All our Kubernetes operations will be carried out from the deployment node from now on.
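
As a final optional smoke test from the deployment node, you can confirm that the cluster actually schedules and runs a workload (the deployment name and nginx image below are just examples):

kubectl create deployment nginx-test --image=nginx
kubectl get pods -o wide          # the pod should land on a worker node and reach Running
kubectl delete deployment nginx-test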

We have covered some ground in this first part. If you were able to set this up, kudos to you, especially if you have little experience installing a Kubernetes cluster.

In the second part, we will proceed with installing the MetalLB load balancer and the Nginx Ingress Controller, and then test the installation.

Thanks for your time! Comments/suggestions are welcome.

My name is Christopher Adigun; I am currently a DevOps engineer based in Germany.

P.S — If you want to learn about bare-metal/on-prem Kubernetes networking in depth, using technologies like Multus, SR-IOV, multiple network interfaces in pods, etc., then you can check out my course on Udemy (you can use the coupon code on the course page to get a discount):

https://www.udemy.com/course/kubernetes-baremetal-networking-using-gns3/?referralCode=99D5F4AAFCF769E8DEB6

Twitter handle: @futuredon
