Kubernetes On-Prem Demo Using GNS3, Kubespray, Nginx Ingress, FRR And Metallb — Part 1

I started my IT career as a network engineer around 2003. Back then I used GNS3 heavily to emulate both Cisco and Juniper devices, but as the years rolled on I eventually switched to being a Linux/infrastructure engineer, so network engineering was no longer my core area. I never stopped following it, though, and from time to time I still had to troubleshoot IP/switching/OSPF issues.

For about two years now my focus has been on the DevOps tool chain, i.e. Kubernetes, CI/CD, service mesh, etc.

So this blog post demonstrates how a Kubernetes cluster can be deployed in an on-prem environment. I could have used VirtualBox/KVM entirely, but the prospect of using GNS3 makes it much more interesting: it makes creating a topology very easy (not to mention the support for KVM via QEMU, all within the same interface).

The scenario is given below:

For the Master and Worker nodes, I suggest you use at least 3 GB RAM, give more RAM to the Worker nodes (I used 5GB each).

The open-source projects that will be used are given below:

GNS3 — A full emulation/simulation system

Kubespray — A production grade kubernetes deployment solution that uses Ansible (https://github.com/kubernetes-sigs/kubespray)

FRR — An open-source routing suite (a fork of Quagga). Only its BGP functionality will be used. The GNS3 appliance was used for this.

MetalLB — A load-balancer implementation that provides the LoadBalancer service type for on-prem Kubernetes environments (https://metallb.universe.tf/)

Ubuntu 18 Cloud image — This is the base OS that will be used for both the Kubernetes master and worker nodes. The GNS3 appliance was used for this.

N.B — The default hard disk size of the Ubuntu server GNS3 appliance is not sufficient for our demo; I had to increase the size with the command:

qemu-img resize ubuntu-18.04-server-cloudimg-amd64.img +28G

Nginx Ingress Controller — A layer 7 application proxy. Its importance is that it reduces the number of load balancers you might need to expose your services: instead of creating multiple load balancers, you can create one ingress controller with a LoadBalancer service type, then expose your services using Ingress resources.

Firefox Image — This appliance is used to test. The GNS3 appliance was used for this.

STEP ONE — Configure the Ubuntu 18 VMs with the proper IP addresses (static) and Hostname

Example of setting up the master (same steps can be used for the worker nodes):

hostnamectl set-hostname k8s-master-1

To set static IP, we can use netplan, first of all edit /etc/netplan/50-cloud-init.yaml:
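A minimal static configuration might look like the following sketch (the interface name ens3, the 192.168.0.0/24 addressing and the gateway are assumptions based on my topology; substitute the values from your own GNS3 setup):

```shell
# Overwrite the cloud-init netplan file with a static configuration.
# NOTE: ens3, the addresses and the gateway below are examples only;
# use the values from your own topology.
sudo tee /etc/netplan/50-cloud-init.yaml > /dev/null <<'EOF'
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: false
      addresses: [192.168.0.10/24]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [8.8.8.8]
EOF
```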

Then apply using: sudo netplan apply

Check the IP using: ip address

Tip — To minimize start-up time on the Ubuntu 18 image, I noticed it normally waits for some time while booting with the message “A start job is running for Wait for Network to be Configured”.

You can avoid this by disabling and masking the systemd-networkd-wait-online daemon:
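For example:

```shell
# Stop boot from waiting on network auto-configuration
sudo systemctl disable systemd-networkd-wait-online.service
sudo systemctl mask systemd-networkd-wait-online.service
```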


STEP TWO — Prepare The Deployment Node (this is where we will initiate the kubernetes installation).

The deployment node is responsible for installing the Kubernetes cluster; it will not be part of the cluster itself. The following steps can be used:

Install necessary software utilities:
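Something along these lines should be enough (the exact package list is a suggestion; Kubespray's requirements.txt pulls in Ansible itself later on):

```shell
# Basic tooling on the deployment node: git to clone Kubespray,
# python3/pip to install its dependencies
sudo apt-get update
sudo apt-get install -y git python3 python3-pip
```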

Generate SSH key pair that is needed for passwordless login:
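For example:

```shell
# Create an RSA key pair with an empty passphrase so logins are
# non-interactive
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
```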

STEP THREE — Prepare the kubernetes Nodes

Python is needed to be installed on the kubernetes nodes as Ansible depends on it:
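A sketch of what to run on each node (python3-apt is an assumption on my part; it helps Ansible's apt module, but the essential part is the Python interpreter itself):

```shell
# Run on every kubernetes node; Ansible modules need a Python
# interpreter on the managed hosts
sudo apt-get update
sudo apt-get install -y python3 python3-apt
```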

Also generate SSH key pair that is needed for passwordless login:

STEP FOUR — Kubernetes Installation

In this step we need to clone the Kubespray repo, exchange SSH keys (for password-less login to the Kubernetes nodes), and create a host file that will be used by Kubespray (the host file contains the details of the nodes we want to use, i.e. master, etcd and worker nodes). All of this will be done on the deployment node.

Exchange SSH keys with the Kubernetes nodes (the default password is ubuntu):
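A sketch using ssh-copy-id (the node IPs are the ones assumed for my topology; adjust them to yours):

```shell
# Push the deployment node's public key to each kubernetes node;
# enter the default password when prompted
for ip in 192.168.0.10 192.168.0.11 192.168.0.12; do
  ssh-copy-id ubuntu@"$ip"
done
```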

You should now be able to log in to any of the Kubernetes nodes without entering your password.

Clone the Kubespray git repo:
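For example:

```shell
git clone https://github.com/kubernetes-sigs/kubespray.git
```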

Change into the Kubespray directory and install the required dependencies:
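For example (depending on your setup, pip may be pip3):

```shell
cd kubespray
sudo pip3 install -r requirements.txt
```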

Copy the sample inventory, which also contains our host file; the inventory folder contains other files that you can use to customize your Kubernetes installation:
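Assuming we call our inventory mycluster (the name is arbitrary):

```shell
# Run from the kubespray directory
cp -rfp inventory/sample inventory/mycluster
```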

Modify the hosts.ini like this:
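A hosts.ini for this topology could look like the following (the hostnames and IPs are the ones assumed earlier in this post; the group names follow the Kubespray inventory format in use at the time of writing):

```shell
# Write the inventory; one master (also running etcd) and two workers
cat > inventory/mycluster/hosts.ini <<'EOF'
[all]
k8s-master-1 ansible_host=192.168.0.10 ip=192.168.0.10
k8s-worker-1 ansible_host=192.168.0.11 ip=192.168.0.11
k8s-worker-2 ansible_host=192.168.0.12 ip=192.168.0.12

[kube-master]
k8s-master-1

[etcd]
k8s-master-1

[kube-node]
k8s-worker-1
k8s-worker-2

[k8s-cluster:children]
kube-master
kube-node
EOF
```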

We are using one node for both master and etcd; this is okay in a demo like this, but in production use at least 3, and make sure the number of etcd nodes is odd; this is very important to avoid instability in your cluster (more info at https://coreos.com/etcd/docs/latest/v2/admin_guide.html).

Start the kubernetes Installation!
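Assuming the inventory path from the previous step, the installation is kicked off with ansible-playbook from the kubespray directory:

```shell
# -u ubuntu: the login user on the nodes; --become runs tasks as root
ansible-playbook -i inventory/mycluster/hosts.ini \
  -u ubuntu --become --become-user=root cluster.yml
```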

This will take some time. You should see the following recap if the installation completed successfully:

To confirm kubernetes cluster health:

First of all SSH into the master:
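For example (the IP is the master address assumed in my topology):

```shell
ssh ubuntu@192.168.0.10   # adjust to your master's IP
```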


Create kube folder and change to the directory:

mkdir ~/.kube && cd ~/.kube

Copy /etc/kubernetes/admin.conf to ~/.kube:

sudo cp /etc/kubernetes/admin.conf ./config

Change the ownership of the file:

sudo chown ubuntu:ubuntu config

Run basic kubectl commands to confirm the status of your cluster:
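For example:

```shell
kubectl get nodes -o wide          # all nodes should be Ready
kubectl get pods --all-namespaces  # system pods should be Running
kubectl cluster-info
```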

Copy the kubeconfig file to the deployment node since this is supposed to be our OAM system:
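A sketch using scp (this assumes kubectl is also installed on the deployment node, and uses the master IP from my topology):

```shell
# On the deployment node
mkdir -p ~/.kube
scp ubuntu@192.168.0.10:~/.kube/config ~/.kube/config
kubectl get nodes   # confirm the deployment node can reach the cluster
```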

All our Kubernetes operations will be carried out from the deployment node from now on.

We have covered some ground in this first part. If you were able to set this up, kudos to you, especially if you have little experience installing a Kubernetes cluster.

In the second part, we will proceed with installing the MetalLB load balancer and the Nginx Ingress Controller, and testing the installation.

Thanks for your time! Comments/suggestions are welcome.

My name is Christopher Adigun, currently a DevOps engineer based in Germany.

P.S — If you want to learn about bare-metal/on-prem Kubernetes networking in depth, using technologies like Multus, SR-IOV and multiple network interfaces in pods, then you can check out my course on Udemy (you can use the coupon code on the course page to get a discount):


Twitter handle: @futuredon
