Kubernetes On-Prem Demo Using GNS3, Kubespray, Nginx Ingress, FRR And Metallb — Part 2

Christopher Adigun
8 min read · Jan 1, 2019


The first part of this series (https://medium.com/@futuredon/kubernetes-on-prem-demo-using-gns3-kubespray-nginx-ingress-frr-and-metallb-part-1-4b872a6fa89e) dealt with installing the Kubernetes cluster using the open-source project Kubespray. Note that I have left internet access out of scope, since how you achieve it depends on your setup. If you are curious how I set it up: I used the FRR router as the default gateway for all the VMs, and then used iptables to NAT (masquerade) all traffic from the 10.20.20.0/24 subnet out of the FRR interface that is connected to the internet cloud (ens4, 192.168.30.2).
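For reference, a minimal sketch of that NAT setup on the FRR VM could look like the lines below (the exact rules are an assumption, they are not shown in part 1):

# enable IP forwarding and masquerade traffic leaving the internet-facing interface ens4
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -s 10.20.20.0/24 -o ens4 -j MASQUERADE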

As a reminder, below is the scenario:

Slight correction: the default GW for the FRR is 192.168.30.1 (not 192.168.100.1).

Now we shall proceed with the installation of MetalLB and the Nginx Ingress controller.

We will be using the OAM/Deployment node from now on to manage the Kubernetes cluster.
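As a quick sanity check (the node names and IPs will match your own setup), you can confirm that kubectl on this node can reach the cluster:

kubectl get nodes -o wide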

STEP ONE — MetalLB — On-Prem Load Balancer Solution For Kubernetes

Get the YAML file; it is good practice to review YAML files before applying them to your cluster.

wget https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml

This will deploy MetalLB to your cluster, under the metallb-system namespace. The components in the manifest are:

  • The metallb-system/controller deployment. This is the cluster-wide controller that handles IP address assignments.
  • The metallb-system/speaker daemonset. This is the component that speaks the protocol(s) of your choice to make the services reachable.
  • Service accounts for the controller and speaker, along with the RBAC permissions that the components need to function.

Apply the manifest:

kubectl apply -f metallb.yaml

Check the status of the pods and make sure they are all in the running state.

ubuntu@oam-deploy-host:~$ kubectl -n metallb-system get po -o wide
NAME READY STATUS RESTARTS IP NODE
controller-7cc9c87cfb-jgxkv 1/1 Running 0 10.233.86.135 10.20.20.4
speaker-5w6kh 1/1 Running 0 10.20.20.5 10.20.20.5
speaker-xc2rw 1/1 Running 0 10.20.20.4 10.20.20.4

The speaker pods are the ones that will form BGP peering with the FRR. Each worker node runs a speaker: the default installation of MetalLB deploys the speaker as a DaemonSet, which creates a pod on every worker node in the cluster by default (this can be fine-tuned further with node labels; see the Kubernetes docs for more information).
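You can verify this by listing the DaemonSet itself (a quick hedged check; the desired/ready counts should match your two worker nodes):

kubectl -n metallb-system get daemonset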

At this stage, we need to specify which mode we want to use. This is done by creating a ConfigMap that tells MetalLB which mode to use.

MetalLB has two modes of operation: Layer 2 mode and BGP mode.

Both are explained in great detail on MetalLB's website:

Layer 2 — https://metallb.universe.tf/concepts/layer2/

BGP Mode — https://metallb.universe.tf/concepts/bgp/

With BGP mode, the IP of any LoadBalancer-type service will be advertised by MetalLB to the site router, which is FRR in our case. Since we have 2 worker nodes, the route for each load balancer IP will be ECMP, i.e. both worker nodes will have equal cost for that route and both will be used to handle the traffic.

The content of the ConfigMap that we will be using is given below:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - my-asn: 64500
      peer-asn: 64500
      peer-address: 10.20.20.1
    address-pools:
    - name: my-ip-space
      protocol: bgp
      addresses:
      - 172.19.100.0/24

10.20.20.1 — This is the IP address of the FRR router (the BGP session will be formed with this interface).

172.19.100.0/24 — This is the IP address pool from which MetalLB will allocate an address whenever Kubernetes creates a LoadBalancer-type service.

Copy the content above into a file with a name like config-metallb.yaml and apply it to the Kubernetes cluster.

kubectl apply -f config-metallb.yaml
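To confirm the configuration was stored (a hedged check; the name and namespace come from the manifest above), you can read it back:

kubectl -n metallb-system get configmap config -o yaml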

Below is our configuration on the FRR:

Current configuration:
!
frr version 4.0
frr defaults traditional
hostname frr
username cumulus nopassword
!
service integrated-vtysh-config
!
interface ens3
 ip address 10.20.20.1/24
!
interface ens4
 ip address 192.168.30.2/24
!
router bgp 64500
 coalesce-time 1000
 neighbor 10.20.20.4 remote-as 64500
 neighbor 10.20.20.5 remote-as 64500
!
ip route 0.0.0.0/0 192.168.30.1
!
line vty
!
end

neighbor 10.20.20.4 remote-as 64500 — Kubernetes worker-node-1
neighbor 10.20.20.5 remote-as 64500 — Kubernetes worker-node-2
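If you are building the FRR config from scratch, the BGP part can be entered interactively through vtysh (a hedged sketch; the prompts are illustrative, and only the commands already shown in the config above plus standard vtysh commands are used):

sudo vtysh
frr# configure terminal
frr(config)# router bgp 64500
frr(config-router)# neighbor 10.20.20.4 remote-as 64500
frr(config-router)# neighbor 10.20.20.5 remote-as 64500
frr(config-router)# end
frr# write memory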

After loading the ConfigMap, you can check the logs of one of the speaker pods; you should see it adding the FRR (10.20.20.1) to its BGP process.
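For example, using one of the pod names from the earlier output:

kubectl -n metallb-system logs speaker-5w6kh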

{"caller":"main.go:271","configmap":"metallb-system/config","event":"startUpdate","msg":"start of config update","ts":"2019-01-01T20:32:40.151949508Z"}
{"caller":"bgp_controller.go:151","configmap":"metallb-system/config","event":"peerAdded","msg":"peer configured, starting BGP session","peer":"10.20.20.1","ts":"2019-01-01T20:32:40.152083042Z"}
{"caller":"main.go:295","configmap":"metallb-system/config","event":"endUpdate","msg":"end of config update","ts":"2019-01-01T20:32:40.152203903Z"}
{"caller":"k8s.go:346","configmap":"metallb-system/config","event":"configLoaded","msg":"config (re)loaded","ts":"2019-01-01T20:32:40.152227002Z"}

You can also check the FRR for the BGP neighbors:

frr# show bgp summary

IPv4 Unicast Summary:
BGP router identifier 192.168.30.2, local AS number 64500 vrf-id 0
BGP table version 2
RIB entries 0, using 0 bytes of memory
Peers 2, using 39 KiB of memory

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
10.20.20.4 4 64500 12 22 0 0 0 00:05:25 0
10.20.20.5 4 64500 12 22 0 0 0 00:05:25 0

Total number of neighbors 2

FRR has formed a BGP neighborship with our Kubernetes worker nodes!

But if you check the routes on FRR at this point, you will not see any routes for the load balancer IP range yet.

Why? Because we have not created a LoadBalancer-type service in our cluster yet. Once we create one, the speaker pods will advertise its IP to the FRR.
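If you want to double-check, you can filter the FRR routing table for BGP-learned routes; at this stage the output should be empty:

frr# show ip route bgp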

STEP TWO — Deploy Nginx Ingress Controller

N.B — We could skip this step altogether by exposing our applications directly via LoadBalancer-type services, but putting an ingress in front of our applications gives benefits such as fine-grained access control, TLS termination, performance statistics, etc. Also, in a public cloud, an ingress can save you a lot of money by using just one load balancer for the ingress controller instead of one per application. Adding an ingress is entirely up to you.

Get the ingress YAML from the Kubernetes ingress-nginx repository:

https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
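As before, you can fetch it with wget and review it before applying:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml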

This contains the needed ConfigMaps, RBAC rules, and the deployment manifest.

Install on the cluster:

kubectl apply -f mandatory.yaml

ubuntu@oam-deploy-host:~$ kubectl -n ingress-nginx get pod
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-766c77b7d4-pz6jk 1/1 Running 0 3m22s

Create a LoadBalancer-type service for the ingress controller (call the file ingress-svc.yaml):

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https

ubuntu@oam-deploy-host:~$ kubectl apply -f ingress-svc.yaml

Now check the status of the Nginx ingress controller service; it should have a load balancer IP assigned by MetalLB!

ubuntu@oam-deploy-host:~$ kubectl -n ingress-nginx get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
ingress-nginx LoadBalancer 10.233.36.125 172.19.100.0 80:32597/TCP,443:32405/TCP

The load balancer IP is shown in the EXTERNAL-IP column.

This is the same behavior when you create a load balancer service type in public clouds!

Now let’s check FRR routing information:

frr# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
O - OSPF, I - IS-IS, B - BGP, P - PIM, E - EIGRP, N - NHRP,
T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
> - selected route, * - FIB route

S>* 0.0.0.0/0 [1/0] via 192.168.30.1, ens4, 00:41:35
C>* 10.20.20.0/24 is directly connected, ens3, 01:07:20
B>* 172.19.100.0/32 [200/0] via 10.20.20.4, ens3, 00:03:43
* via 10.20.20.5, ens3, 00:03:43
C>* 192.168.30.0/24 is directly connected, ens4, 00:41:08

We can see that FRR now has an ECMP route for the load balancer IP, with both worker nodes as next hops:

B>* 172.19.100.0/32 [200/0] via 10.20.20.4, ens3, 00:03:43
* via 10.20.20.5, ens3, 00:03:43

Now we deploy a sample app. We can use a plain Nginx deployment (don't confuse this with the ingress controller; this is just a normal web server), expose it via the ingress, and test it from our FIREFOX VM.

Nginx deployment manifest (nginx-deploy.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx

Ingress route config (you can call the file nginx-ingress-route.yaml):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /test
        backend:
          serviceName: my-nginx
          servicePort: 80

Install them in the kubernetes cluster:

ubuntu@oam-deploy-host:~$ kubectl apply -f nginx-deploy.yaml
================================================================================
kubectl get pod
NAME READY STATUS RESTARTS AGE
my-nginx-64fc468bd4-l54kj 1/1 Running 0 34s
my-nginx-64fc468bd4-zk798 1/1 Running 0 86m
================================================================================
ubuntu@oam-deploy-host:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 6d5h
my-nginx ClusterIP 10.233.2.95 <none> 80/TCP 92s
================================================================================
ubuntu@oam-deploy-host:~$ kubectl apply -f nginx-ingress-route.yaml
================================================================================
ubuntu@oam-deploy-host:~$ kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
nginx-app-ingress * 172.19.100.0 80 69s
================================================================================
ubuntu@oam-deploy-host:~$ kubectl describe ing nginx-app-ingress
Name: nginx-app-ingress
Namespace: default
Address: 172.19.100.0
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
/test my-nginx:80 (<none>)

Now we test this on the FIREFOX VM by going to the address http://172.19.100.0/test
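If you prefer the command line, the same check can be done with curl from any machine that has a route to the MetalLB pool (e.g. via the FRR):

curl http://172.19.100.0/test

You should get back the Nginx welcome page served by the my-nginx pods, since the ingress rewrites /test to /.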

So we can see that the application is available via the ingress (i.e. using the load balancer IP allocated by MetalLB).

We have covered a lot of ground in this post. If you are interested in learning more about Kubernetes or becoming a CKA or CKAD, you can join the Kubernauts channels:

Website: https://kubernauts.io/en/

Slack Channel: https://kubernauts-slack-join.herokuapp.com/

Also, there will be a Kubernetes conference, the first of its kind, in Cologne, Germany this coming February 6-8, 2019. Details below:

https://kubecologne.io/

https://twitter.com/kubernauts/status/1080189487685267457

It will include intensive Kubernetes training on February 6 & 7, which will give you the thorough background needed to attempt the CKA or CKAD exams.

REFERENCES

MetalLB Layer 2: https://metallb.universe.tf/concepts/layer2/

MetalLB BGP Mode: https://metallb.universe.tf/concepts/bgp/

Kubernetes deployments: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

Kubernetes daemonsets: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

Kubernetes ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/

Kubernetes Nginx ingress controller: https://github.com/kubernetes/ingress-nginx

GNS3: https://www.gns3.com/

P.S — If you want to learn about bare-metal/on-prem Kubernetes networking in depth, using technologies like Multus, SR-IOV, multiple network interfaces in pods, etc., you can check out my course on Udemy (you can use the coupon code on the course page to get a discount):

https://www.udemy.com/course/kubernetes-baremetal-networking-using-gns3/?referralCode=99D5F4AAFCF769E8DEB6

Twitter handle: @futuredon
