Kubernetes On-Prem Demo Using GNS3, Kubespray, Nginx Ingress, FRR And Metallb — Part 2

Christopher Adigun
Jan 1, 2019

The first part of this series (https://medium.com/@futuredon/kubernetes-on-prem-demo-using-gns3-kubespray-nginx-ingress-frr-and-metallb-part-1-4b872a6fa89e) dealt with installing the Kubernetes cluster using the open-source project Kubespray. Note that I have left internet access out of scope; it depends on how you want to achieve it. If you are curious how I set it up: I used the FRR router as the default gateway for all the VMs, then used iptables to NAT (masquerade) all traffic from the subnet out through the FRR interface that is connected to the internet cloud.

As a reminder, below is the scenario:

A slight correction: the default GW for the FRR is different from what was shown earlier.

Now we shall proceed with the installation of MetalLB and the Nginx Ingress controller.

We will be using the OAM/deployment node from now on to manage the Kubernetes cluster.

STEP ONE — MetalLB — On-Prem Load Balancer Solution For Kubernetes

Get the YAML file; it is good practice to inspect YAML files before applying them to your cluster.

wget https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml

This will deploy MetalLB to your cluster, under the metallb-system namespace. The components in the manifest are:

  • The metallb-system/controller deployment. This is the cluster-wide controller that handles IP address assignments.
  • The metallb-system/speaker daemonset. This is the component that speaks the protocol(s) of your choice to make the services reachable.
  • Service accounts for the controller and speaker, along with the RBAC permissions that the components need to function.

kubectl apply -f metallb.yaml

Check the status of the pods and make sure they are all in the running state.

ubuntu@oam-deploy-host:~$ kubectl -n metallb-system get po -o wide
controller-7cc9c87cfb-jgxkv 1/1 Running 0
speaker-5w6kh 1/1 Running 0
speaker-xc2rw 1/1 Running 0

The speaker pods are the ones that will form BGP peering with the FRR. Each worker node runs a speaker: the default MetalLB installation uses a DaemonSet, which creates a pod on every worker node in the cluster by default, though this can be fine-tuned with node-specific labels (see the Kubernetes docs for more information).
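As an example of that fine-tuning, speakers could be restricted to a subset of nodes with a nodeSelector patch on the DaemonSet. This is only a sketch; the label name bgp-speaker is an assumption for illustration, not part of the MetalLB manifest:

```yaml
# Sketch: restrict the MetalLB speaker DaemonSet to nodes carrying a
# custom label. The label "bgp-speaker: true" is hypothetical.
spec:
  template:
    spec:
      nodeSelector:
        bgp-speaker: "true"
```

You would first label the chosen workers with kubectl label node <node> bgp-speaker=true, then apply this as a patch to the speaker DaemonSet.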

At this stage we need to specify which mode we want to use. This is done by creating a ConfigMap that tells MetalLB which mode to run.

MetalLB has two modes of operation: Layer 2 mode and BGP mode.

Both are explained in great detail on MetalLB’s website:

Layer 2 — https://metallb.universe.tf/concepts/layer2/

BGP Mode — https://metallb.universe.tf/concepts/bgp/

With BGP mode, any LoadBalancer service IP will be advertised by MetalLB to the site router, which is FRR in our situation. Since we have two worker nodes, the route for each load balancer IP will be ECMP, i.e. both worker nodes will have equal cost for that route, and both of them will be used to handle the traffic.
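How many equal-cost paths FRR actually installs depends on bgpd's multipath setting. Many packaged builds default to the compile-time maximum, but if you do not see ECMP you can set it explicitly; a sketch, with the AS number taken from this lab:

```
router bgp 64500
 address-family ipv4 unicast
  maximum-paths 2
 exit-address-family
```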

The content of the ConfigMap we will be using is given below:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - my-asn: 64500
      peer-asn: 64500
      peer-address:      # the IP address of the FRR router (BGP peering is formed over this interface)
    address-pools:
    - name: my-ip-space
      protocol: bgp
      addresses:
      -                  # the IP address pool allocated when Kubernetes creates a LoadBalancer service type

Copy the content above into a file with a name like config-metallb.yaml and apply it on the kubernetes cluster.

kubectl apply -f config-metallb.yaml
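After applying, it is worth confirming that MetalLB accepted the config; the controller logs a configLoaded event on success (the deploy/controller name matches the v0.7.3 manifest):

```shell
# Check that the ConfigMap exists and that the controller reloaded it.
kubectl -n metallb-system get configmap config
kubectl -n metallb-system logs deploy/controller | grep configLoaded
```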

And here is our configuration on the FRR:

Current configuration:
frr version 4.0
frr defaults traditional
hostname frr
username cumulus nopassword
service integrated-vtysh-config
interface ens3
ip address
interface ens4
ip address
router bgp 64500
coalesce-time 1000
neighbor remote-as 64500
neighbor remote-as 64500
ip route
line vty

The two "neighbor ... remote-as 64500" lines are the BGP peerings to Kubernetes worker-node-1 and worker-node-2 respectively.

After loading the ConfigMap, you can check the logs in one of the speaker pods; you should see them adding the FRR router to their BGP process.

{"caller":"main.go:271","configmap":"metallb-system/config","event":"startUpdate","msg":"start of config update","ts":"2019-01-01T20:32:40.151949508Z"}
{"caller":"bgp_controller.go:151","configmap":"metallb-system/config","event":"peerAdded","msg":"peer configured, starting BGP session","peer":"","ts":"2019-01-01T20:32:40.152083042Z"}
{"caller":"main.go:295","configmap":"metallb-system/config","event":"endUpdate","msg":"end of config update","ts":"2019-01-01T20:32:40.152203903Z"}
{"caller":"k8s.go:346","configmap":"metallb-system/config","event":"configLoaded","msg":"config (re)loaded","ts":"2019-01-01T20:32:40.152227002Z"}

You can also check the FRR for the BGP neighbors:

frr# show bgp summary 

IPv4 Unicast Summary:
BGP router identifier, local AS number 64500 vrf-id 0
BGP table version 2
RIB entries 0, using 0 bytes of memory
Peers 2, using 39 KiB of memory

Neighbor  V  AS     MsgRcvd  MsgSent  TblVer  InQ  OutQ  Up/Down   State/PfxRcd
          4  64500  12       22       0       0    0     00:05:25  0
          4  64500  12       22       0       0    0     00:05:25  0

Total number of neighbors 2

FRR has formed a BGP neighborship with our Kubernetes worker nodes!

But if you check the routes on FRR, you will not see any of the LoadBalancer service IP ranges yet:

Why? Because we have not yet created a LoadBalancer-type service in our cluster. Once we create one, the speaker pods will advertise its IP to the FRR.
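To see the mechanism in isolation: any Service of type LoadBalancer is enough to trigger an advertisement. A minimal sketch (the name lb-demo and its selector are hypothetical):

```yaml
# Minimal LoadBalancer Service: MetalLB assigns an IP from the pool
# and the speakers advertise it to FRR over BGP.
apiVersion: v1
kind: Service
metadata:
  name: lb-demo
spec:
  type: LoadBalancer
  selector:
    app: lb-demo
  ports:
  - port: 80
```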

STEP TWO — Deploy Nginx Ingress Controller

N.B. We could skip this altogether by exposing our applications directly via a LoadBalancer service type, but putting an ingress in front of our applications gives benefits like fine-grained access control, TLS termination, and performance statistics. Also, in a public cloud an ingress can save you a lot of money, since many applications can share a single load balancer through the ingress controller. Adding an ingress is entirely up to you.
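To illustrate the "one load balancer, many apps" point: a single ingress object can fan several paths out to different backend services, so they all share one LoadBalancer IP. A sketch, with hypothetical services app-a and app-b:

```yaml
# One ingress (hence one LoadBalancer IP) fronting two hypothetical services.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-example
spec:
  rules:
  - http:
      paths:
      - path: /a
        backend:
          serviceName: app-a
          servicePort: 80
      - path: /b
        backend:
          serviceName: app-b
          servicePort: 80
```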

Get the ingress yaml from the kubernetes portal:


This contains the needed ConfigMaps, RBAC rules, and the deployment manifest.
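Since it is good practice to check manifests first, you can also preview what kubectl would create without touching the cluster:

```shell
# Client-side dry run: list the resources the manifest would create.
kubectl apply -f mandatory.yaml --dry-run -o name
```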

Install on the cluster:

kubectl apply -f mandatory.yaml

ubuntu@oam-deploy-host:~$ kubectl -n ingress-nginx get pod
nginx-ingress-controller-766c77b7d4-pz6jk 1/1 Running 0 3m22s

Create a LoadBalancer-type service for the ingress controller (call it ingress-svc.yaml):

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https

ubuntu@oam-deploy-host:~$ kubectl apply -f ingress-svc.yaml

Now check the status of the Nginx Ingress controller service, it should have a load balancer IP assigned by MetalLB!

ubuntu@oam-deploy-host:~$ kubectl -n ingress-nginx get svc
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   LoadBalancer                              80:32597/TCP,443:32405/TCP

The load balancer IP appears in the EXTERNAL-IP column.
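If you want just the assigned address (for scripting), the standard jsonpath for a LoadBalancer service works here too:

```shell
# Print only the MetalLB-assigned external IP of the ingress service.
kubectl -n ingress-nginx get svc ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```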

This is the same behavior when you create a load balancer service type in public clouds!

Now let’s check FRR routing information:

frr# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, P - PIM, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
> - selected route, * - FIB route

S>* [1/0] via, ens4, 00:41:35
C>* is directly connected, ens3, 01:07:20
B>* [200/0] via, ens3, 00:03:43
* via, ens3, 00:03:43
C>* is directly connected, ens4, 00:41:08

We can see that FRR now has an ECMP route for the load balancer IP:

B>* [200/0] via, ens3, 00:03:43

‘*’ via, ens3, 00:03:43

Now we deploy a sample app. We can use a plain nginx deployment (don’t confuse this with the ingress controller; this is just a normal web server), expose it via the ingress, and test it from our FIREFOX VM.

Nginx deployment manifest (nginx-deploy.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx

Ingress route config (you can call the file nginx-ingress-route.yaml):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /test
        backend:
          serviceName: my-nginx
          servicePort: 80

Install them in the kubernetes cluster:

ubuntu@oam-deploy-host:~$ kubectl apply -f nginx-deploy.yaml
kubectl get pod
my-nginx-64fc468bd4-l54kj 1/1 Running 0 34s
my-nginx-64fc468bd4-zk798 1/1 Running 0 86m
ubuntu@oam-deploy-host:~$ kubectl get svc
kubernetes ClusterIP <none> 443/TCP 6d5h
my-nginx ClusterIP <none> 80/TCP 92s
ubuntu@oam-deploy-host:~$ kubectl apply -f nginx-ingress-route.yaml
ubuntu@oam-deploy-host:~$ kubectl get ing
nginx-app-ingress * 80 69s
ubuntu@oam-deploy-host:~$ kubectl describe ing nginx-app-ingress
Name: nginx-app-ingress
Namespace: default
Default backend: default-http-backend:80 (<none>)
Host Path Backends
---- ---- --------
/test my-nginx:80 (<none>)

Now we test this on the FIREFOX VM by going to the address

So we can see that the application is available via the ingress (i.e. using Load Balancer IP that was allocated by MetalLB).
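The same check can be done from any shell that can reach the load balancer IP; <LB-IP> below is a placeholder for the EXTERNAL-IP MetalLB assigned:

```shell
# Hitting the /test ingress path should return the default nginx welcome page.
curl http://<LB-IP>/test
```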

We have covered a lot of ground in this post. If you are interested in learning more Kubernetes, or in becoming a CKA or CKAD, you can join the Kubernauts channels:

Website: https://kubernauts.io/en/

Slack Channel: https://kubernauts-slack-join.herokuapp.com/

Also, there will be a Kubernetes conference, the first of its kind in Cologne, Germany, this coming February 6-8, 2019; details below.



It will include intensive Kubernetes training on February 6 & 7, giving you the thorough background necessary to attempt the CKA or CKAD exams.


MetalLB Layer 2 — https://metallb.universe.tf/concepts/layer2/

MetalLB BGP Mode — https://metallb.universe.tf/concepts/bgp/

Kubernetes deployments: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

kubernetes daemonsets: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

kubernetes ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/

kubernetes Nginx ingress controller: https://github.com/kubernetes/ingress-nginx

GNS3: https://www.gns3.com/

P.S. If you want to learn about bare-metal/on-prem Kubernetes networking in depth, using technologies like Multus, SR-IOV, and multiple network interfaces in pods, you can check my course on Udemy (you can use the coupon code on the course page to get a discount):


Twitter handle: @futuredon