5G Roaming With Mutual TLS

Christopher Adigun
8 min read · Dec 26, 2023


With the arrival of some interesting features in the open-source 5G ecosystem, namely the recent announcement of Open5gs roaming support and the release of PacketRusher, a new gNB/UE simulator project that also supports roaming scenarios, I decided to try to implement roaming in a Kubernetes environment. This is by far one of the most complex setups I have attempted for a 5G blog post, but the end results are quite awesome. Kindly take note that it requires a pretty decent knowledge of a cloud-native 5G setup.

First of all, the roaming architecture currently supported by Open5gs is the LBO (Local Break Out) option; HR (Home Routing) may come in a future release.

In LBO mode, the Home PLMN basically comprises the AUSF, UDM, UDR, SCP, NRF and SEPP. The Visiting PLMN includes the complete set of 5G components, the additional element being the SEPP.

SEPP (Security Edge Protection Proxy) is the main conduit between the V-PLMN and the H-PLMN. In production, the connectivity between the SEPPs is usually handled by a roaming partner, but since this is just a test setup, none is used here for obvious reasons.

Below is the test architecture. For simplicity, a single-node Kubernetes cluster was used.

Mutual TLS 5G Roaming

Summary of the software components that I used:

Microk8s — This made it easy to deploy a single-node Kubernetes cluster, and it makes it easy to add plugins like the hostPath storage class and Multus (the addon commands are sketched below). The Kubernetes distro is not a major point here; you can very much use any other distro, it is just that microk8s suits my purpose.
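
The commands below are a hedged sketch of enabling the relevant addons; on recent microk8s releases Multus is shipped via the community addon repository, and addon names can vary between versions:

microk8s enable dns
microk8s enable hostpath-storage
microk8s enable community    # exposes community addons such as multus on recent releases
microk8s enable multus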

Hashicorp Vault — This is used to provide the required mTLS certificates for all the 5G components. The delivery method here is Kubernetes secrets. You can also use the CSI mode, but take note that each time the Pods are re-deployed a new CSR is generated, which means a new certificate has to be issued every time; with the secrets method, certificate generation is independent of the Pod lifecycle. If each certificate issuance has a cost attached, the secrets mode definitely makes sense; otherwise the CSI mode can be used (which might make the configuration easier since there is no need to create the Certificate custom resource). You should take time to analyze the two modes and see which one works well for your case. Also, if you go the route of the secrets mode, you can create a separate Helm chart just for the TLS certificate generation, to decouple its lifecycle from that of the actual 5G pods. The actual Vault installation procedure is beyond the scope of this blog post; you can check the reference section at the end for the link to the relevant Vault documentation.
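
To illustrate the secrets-based flow, below is a minimal sketch of a cert-manager Certificate resource backed by a Vault issuer. It assumes cert-manager is installed and a ClusterIssuer pointing at the Vault PKI engine exists; the issuer name, secret name, duration values and DNS name are hypothetical examples, not taken from the actual setup:

# Minimal sketch: issue a TLS certificate into a Kubernetes secret,
# independent of the Pod lifecycle. Names below are hypothetical.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: sepp-sbi-tls
  namespace: vplmn-5gcore
spec:
  secretName: sepp-sbi-tls        # the Pod mounts this secret, e.g. at /tls-sbi
  duration: 2160h                 # 90 days
  renewBefore: 360h               # renew 15 days before expiry
  dnsNames:
    - sepp-sbi.5gc.mnc001.mcc001.3gppnetwork.org
  issuerRef:
    name: vault-issuer
    kind: ClusterIssuer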

Multus — For this post, I decided to go all the way and use Multus for the SBIs. This is not a strict requirement, but it made troubleshooting easier since I always knew which IP to use to trace issues (N.B. some primary CNIs support static IPs).
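
As a hedged sketch of how such an interface can be defined, below is a macvlan NetworkAttachmentDefinition with static IPAM; the master interface name, namespace and attachment name are assumptions for illustration, and the addresses simply mirror the 192.168.1.0/24 SBI range used in the coreDNS zones later:

# Minimal sketch: macvlan attachment with static IPAM.
# "eth0" as master is an assumption for illustration.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sbi
  namespace: vplmn-5gcore
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "ipam": {
        "type": "static"
      }
    }

A Pod then requests a specific address (and interface name) via an annotation such as k8s.v1.cni.cncf.io/networks: '[{"name": "sbi", "interface": "sbi", "ips": ["192.168.1.10/24"]}]', which is what would make a config entry like "dev: sbi" resolve to a predictable IP.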

coreDNS — The reason for covering this separately is that I learned about some added features which came in handy. Remember that all the SBIs use Multus interfaces, so how do I resolve the 5G FQDNs to the individual Multus IPs? One way is to use static mappings in the Kubernetes manifests, but that gets complicated quickly. coreDNS has the option of serving custom DNS zones, which provides a single place to control DNS resolution. I also used FQDNs for all the SBIs. Below is a snapshot of the additional coreDNS entries:

Corefile: |
  .:53 {
      errors
      health {
          lameduck 5s
      }
      ready
      log . {
          class error
      }
      kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
      }
      prometheus :9153
      forward . /etc/resolv.conf
      cache 30
      loop
      reload
      loadbalance
  }
  5gc.mnc070.mcc999.3gppnetwork.org:53 {
      errors
      hosts {
          192.168.1.14 ausf.5gc.mnc070.mcc999.3gppnetwork.org
          192.168.1.15 udm.5gc.mnc070.mcc999.3gppnetwork.org
          192.168.1.17 nrf.5gc.mnc070.mcc999.3gppnetwork.org
          192.168.1.16 udr.5gc.mnc070.mcc999.3gppnetwork.org
          192.168.1.18 scp.5gc.mnc070.mcc999.3gppnetwork.org
          192.168.1.19 sepp-sbi.5gc.mnc070.mcc999.3gppnetwork.org
      }
  }
  5gc.mnc001.mcc001.3gppnetwork.org:53 {
      errors
      hosts {
          192.168.1.1 amf.5gc.mnc001.mcc001.3gppnetwork.org
          192.168.1.2 ausf.5gc.mnc001.mcc001.3gppnetwork.org
          192.168.1.3 udm.5gc.mnc001.mcc001.3gppnetwork.org
          192.168.1.4 udr.5gc.mnc001.mcc001.3gppnetwork.org
          192.168.1.5 bsf.5gc.mnc001.mcc001.3gppnetwork.org
          192.168.1.23 smf.5gc.mnc001.mcc001.3gppnetwork.org
          192.168.1.6 nrf.5gc.mnc001.mcc001.3gppnetwork.org
          192.168.1.8 scp.5gc.mnc001.mcc001.3gppnetwork.org
          192.168.1.7 nssf.5gc.mnc001.mcc001.3gppnetwork.org
          192.168.1.9 pcf.5gc.mnc001.mcc001.3gppnetwork.org
          192.168.1.10 sepp-sbi.5gc.mnc001.mcc001.3gppnetwork.org
      }
  }

5gc.mnc070.mcc999.3gppnetwork.org is the H-PLMN FQDN

5gc.mnc001.mcc001.3gppnetwork.org is the V-PLMN FQDN

Also, with recent coreDNS releases you don't need to manually restart the coreDNS pods for changes to take effect; this is handled by the reload plugin (you can see it in the above snippet). coreDNS will pick up the updated ConfigMap within about one to two minutes.
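
To check that the custom zones are actually being served, you can resolve one of the FQDNs from a throwaway pod. This is just an illustrative sketch; the image choice is arbitrary:

kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 \
  -- nslookup ausf.5gc.mnc070.mcc999.3gppnetwork.org
# expected answer: 192.168.1.14, per the hosts entry above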

One of the most important configurations is that of the SEPPs, as they are the entry points for the roaming traffic.

Visiting PLMN SEPP config is shown below:

sepp:
  default:
    tls:
      server:
        scheme: https
        private_key: /tls-sbi/tls.key
        cert: /tls-sbi/tls.crt
        verify_client: true
        verify_client_cacert: /tls-sbi/ca.crt
      client:
        scheme: https
        cacert: /tls-sbi/ca.crt
        client_private_key: /tls-sbi/tls.key
        client_cert: /tls-sbi/tls.crt
  sbi:
    server:
      - dev: sbi
        advertise: sepp-sbi.5gc.mnc001.mcc001.3gppnetwork.org
    client:
      scp:
        - uri: https://scp.5gc.mnc001.mcc001.3gppnetwork.org:7777
  n32:
    server:
      - sender: sepp.5gc.mnc001.mcc001.3gppnetwork.org
        scheme: https
        address: 10.0.3.14
        port: 443
        private_key: /tls-ext/tls.key
        cert: /tls-ext/tls.crt
        verify_client: true
        verify_client_cacert: /tls-ext/ca.crt
        n32f:
          scheme: https
          address: 10.0.4.14
          port: 443
          private_key: /tls-ext/tls.key
          cert: /tls-ext/tls.crt
          verify_client: true
          verify_client_cacert: /tls-ext/ca.crt
    client:
      sepp:
        - receiver: sepp.5gc.mnc070.mcc999.3gppnetwork.org
          uri: https://sepp.5gc.mnc070.mcc999.3gppnetwork.org
          resolve: 10.0.3.15
          client_private_key: /tls-ext/tls.key
          client_cert: /tls-ext/tls.crt
          cacert: /tls-ext/ca.crt
          n32f:
            uri: https://sepp.5gc.mnc070.mcc999.3gppnetwork.org
            resolve: 10.0.4.15
            cacert: /tls-ext/ca.crt
            client_private_key: /tls-ext/tls.key
            client_cert: /tls-ext/tls.crt

Home PLMN SEPP config is shown below:

sepp:
  default:
    tls:
      server:
        scheme: https
        private_key: /tls-sbi/tls.key
        cert: /tls-sbi/tls.crt
        verify_client: true
        verify_client_cacert: /tls-sbi/ca.crt
      client:
        scheme: https
        cacert: /tls-sbi/ca.crt
        client_private_key: /tls-sbi/tls.key
        client_cert: /tls-sbi/tls.crt
  sbi:
    server:
      - dev: sbi
        advertise: sepp-sbi.5gc.mnc070.mcc999.3gppnetwork.org
    client:
      scp:
        - uri: https://scp.5gc.mnc070.mcc999.3gppnetwork.org:7777
  n32:
    server:
      - sender: sepp.5gc.mnc070.mcc999.3gppnetwork.org
        scheme: https
        address: 10.0.3.15
        private_key: /tls-ext/tls.key
        cert: /tls-ext/tls.crt
        verify_client: true
        verify_client_cacert: /tls-ext/ca.crt
        n32f:
          scheme: https
          address: 10.0.4.15
          private_key: /tls-ext/tls.key
          cert: /tls-ext/tls.crt
          verify_client: true
          verify_client_cacert: /tls-ext/ca.crt
    client:
      sepp:
        - receiver: sepp.5gc.mnc001.mcc001.3gppnetwork.org
          uri: https://sepp.5gc.mnc001.mcc001.3gppnetwork.org
          resolve: 10.0.3.14
          client_private_key: /tls-ext/tls.key
          client_cert: /tls-ext/tls.crt
          cacert: /tls-ext/ca.crt
          n32f:
            uri: https://sepp.5gc.mnc001.mcc001.3gppnetwork.org
            resolve: 10.0.4.14
            cacert: /tls-ext/ca.crt
            client_private_key: /tls-ext/tls.key
            client_cert: /tls-ext/tls.crt

A line-by-line breakdown of the config is not feasible in a single blog post, but the following design considerations are worth noting:

  • Different TLS certificates were used for the SBI and N32 interfaces. It is standard security practice not to reuse the same certificates for internal and external communication; a sketch of how the two certificate secrets can be mounted is shown after this list.
  • Separate interfaces were used for N32c and N32f. A single interface would work, but I decided to try out separating the control and forwarding interfaces.
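
As a hedged sketch of the certificate wiring, the fragment below shows how a SEPP Pod template can mount two separate secrets at the /tls-sbi and /tls-ext paths referenced in the configs above; the secret names are hypothetical:

# Fragment of a SEPP Pod spec; secret names are hypothetical examples.
containers:
  - name: sepp
    volumeMounts:
      - name: tls-sbi
        mountPath: /tls-sbi
        readOnly: true
      - name: tls-ext
        mountPath: /tls-ext
        readOnly: true
volumes:
  - name: tls-sbi
    secret:
      secretName: sepp-sbi-tls    # internal SBI certificate
  - name: tls-ext
    secret:
      secretName: sepp-n32-tls    # external N32 certificate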

If everything is configured properly, you should see the following status in each SEPP Pod's logs:

V-PLMN SEPP:

kubectl -n vplmn-5gcore logs deploy/visiting-core-sepp-deployment
Defaulted container "sepp" out of: sepp, wait-for-nrf (init), wait-for-scp (init)
Open5GS daemon v2.7.0

12/25 22:41:32.350: [app] INFO: Configuration: '/open5gs/config-map/sepp.yaml' (../lib/app/ogs-init.c:130)
12/25 22:41:32.350: [app] INFO: File Logging: '/var/log/open5gs/sepp.log' (../lib/app/ogs-init.c:133)
12/25 22:41:32.357: [sbi] INFO: nghttp2_server() [https://192.168.1.10]:443 (../lib/sbi/nghttp2-server.c:414)
12/25 22:41:32.357: [sbi] INFO: nghttp2_server(n32f) [https://10.0.4.14]:443 (../lib/sbi/nghttp2-server.c:414)
12/25 22:41:32.358: [sbi] INFO: nghttp2_server(sepp) [https://10.0.3.14]:443 (../lib/sbi/nghttp2-server.c:414)
12/25 22:41:32.358: [app] INFO: SEPP initialize...done (../src/sepp/app.c:31)
12/25 22:41:32.416: [sbi] INFO: [c05816ee-a376-41ee-8a28-df60f60f3b18] NF registered [Heartbeat:10s] (../lib/sbi/nf-sm.c:221)
12/25 22:41:43.417: [sepp] WARNING: [sepp.5gc.mnc070.mcc999.3gppnetwork.org] Retry establishment with Peer SEPP (../src/sepp/handshake-sm.c:255)
12/25 22:41:43.417: [sbi] ERROR: Connection timer expired (../lib/sbi/client.c:605)
....
12/25 22:45:14.097: [sepp] INFO: [sepp.5gc.mnc070.mcc999.3gppnetwork.org] SEPP established (../src/sepp/handshake-sm.c:297)

H-PLMN SEPP:

kubectl -n hplmn-5gcore logs deploy/home-core-sepp-deployment
Defaulted container "sepp" out of: sepp, wait-for-nrf (init), wait-for-scp (init)
Open5GS daemon v2.7.0

12/25 22:45:13.948: [app] INFO: Configuration: '/open5gs/config-map/sepp.yaml' (../lib/app/ogs-init.c:130)
12/25 22:45:13.948: [app] INFO: File Logging: '/var/log/open5gs/sepp.log' (../lib/app/ogs-init.c:133)
12/25 22:45:13.958: [sbi] INFO: nghttp2_server() [https://192.168.1.19]:443 (../lib/sbi/nghttp2-server.c:414)
12/25 22:45:13.962: [sbi] INFO: nghttp2_server(n32f) [https://10.0.4.15]:443 (../lib/sbi/nghttp2-server.c:414)
12/25 22:45:13.966: [sbi] INFO: nghttp2_server(sepp) [https://10.0.3.15]:443 (../lib/sbi/nghttp2-server.c:414)
12/25 22:45:13.966: [app] INFO: SEPP initialize...done (../src/sepp/app.c:31)
12/25 22:45:14.049: [sbi] INFO: [446d4602-a377-41ee-bfb9-ffa815d42cd0] NF registered [Heartbeat:10s] (../lib/sbi/nf-sm.c:221)
12/25 22:45:14.098: [sepp] INFO: [sepp.5gc.mnc001.mcc001.3gppnetwork.org] SEPP established (../src/sepp/handshake-sm.c:297)

To confirm the roaming PLMN config, below are snippets of the AMF and PacketRusher configurations:

AMF:

access_control:
  - plmn_id:
      mcc: 001
      mnc: 01
  - plmn_id:
      mcc: 999
      mnc: 70
guami:
  - plmn_id:
      mcc: 001
      mnc: 01
    amf_id:
      region: 2
      set: 1
tai:
  - plmn_id:
      mcc: 001
      mnc: 01
    tac: 7
plmn_support:
  - plmn_id:
      mcc: 001
      mnc: 01
    s_nssai:
      - sst: 1

PacketRusher:

gnodeb:
  controlif:
    ip: "10.0.3.16"
    port: 9487
  dataif:
    ip: "10.0.3.16"
    port: 2152
  plmnlist:
    mcc: "001"
    mnc: "01"
    tac: "000007"
    gnbid: "000008"
  slicesupportlist:
    sst: "01"
    sd: "ffffff"

ue:
  msin: "0000000001"
  key: "0C0A34601D4F07677303652C0462535B"
  opc: "63bfa50ee6523365ff14c1f45f88737d"
  amf: "8000"
  sqn: "00000000"
  dnn: "internet"
  routingindicator: "4567"
  hplmn:
    mcc: "999"
    mnc: "70"
  snssai:
    sst: 01
    sd: "ffffff"

From the PacketRusher config above, we can see that the UE's home PLMN (999/70) is different from the gNB's PLMN (001/01). On the AMF side, the main addition is the extra PLMN entry in the access_control section, which admits the roaming subscriber.

A sample status log of PacketRusher is shown below:

kubectl -n ran logs deploy/packet-rusher

Defaulted container "rusher" out of: rusher, blackexporter
time="2023-12-25T22:47:59Z" level=info msg="Loaded config at: /packetrusher/cmd/config/config.yml"
time="2023-12-25T22:47:59Z" level=info msg="PacketRusher version 1.0.1"
time="2023-12-25T22:47:59Z" level=info msg=---------------------------------------
time="2023-12-25T22:47:59Z" level=info msg="[TESTER] Starting test function: Testing an ue attached with configuration"
time="2023-12-25T22:47:59Z" level=info msg="[TESTER][UE] Number of UEs: 1"
time="2023-12-25T22:47:59Z" level=info msg="[TESTER][UE] disableTunnel is false"
time="2023-12-25T22:47:59Z" level=info msg="[TESTER][GNB] Control interface IP/Port: 10.0.3.16/9487"
time="2023-12-25T22:47:59Z" level=info msg="[TESTER][GNB] Data interface IP/Port: 10.0.3.16/2152"
time="2023-12-25T22:47:59Z" level=info msg="[TESTER][AMF] AMF IP/Port: 10.0.3.10/38412"
time="2023-12-25T22:47:59Z" level=info msg=---------------------------------------
time="2023-12-25T22:47:59Z" level=info msg="[GNB] SCTP/NGAP service is running"
time="2023-12-25T22:47:59Z" level=info msg="[GNB] Initiating NG Setup Request"
time="2023-12-25T22:47:59Z" level=info msg="[GNB][SCTP] Receive message in 0 stream\n"
time="2023-12-25T22:47:59Z" level=info msg="[GNB][NGAP] Receive NG Setup Response"
time="2023-12-25T22:47:59Z" level=info msg="[GNB][AMF] AMF Name: open5gs-amf"
time="2023-12-25T22:47:59Z" level=info msg="[GNB][AMF] State of AMF: Active"
time="2023-12-25T22:47:59Z" level=info msg="[GNB][AMF] Capacity of AMF: 255"
time="2023-12-25T22:47:59Z" level=info msg="[GNB][AMF] PLMNs Identities Supported by AMF -- mcc: 001 mnc:01"
time="2023-12-25T22:47:59Z" level=info msg="[GNB][AMF] List of AMF slices Supported by AMF -- sst:01 sd:was not informed"
time="2023-12-25T22:48:00Z" level=info msg="[TESTER] TESTING REGISTRATION USING IMSI 0000000001 UE"
time="2023-12-25T22:48:00Z" level=info msg="[UE] Initiating Registration"
time="2023-12-25T22:48:00Z" level=info msg="[UE] Switched from state 0 to state 1"
...
...
time="2023-12-25T22:48:01Z" level=info msg="[GNB][NGAP][UE] Priority Level ARP: 8"
time="2023-12-25T22:48:01Z" level=info msg="[GNB][NGAP][UE] UPF Address: 10.0.3.13 :2152"
time="2023-12-25T22:48:01Z" level=info msg="[GNB] Initiating PDU Session Resource Setup Response"
time="2023-12-25T22:48:01Z" level=info msg="[UE][NAS] Message with security header"
time="2023-12-25T22:48:01Z" level=info msg="[UE][NAS] Message with integrity and ciphered"
time="2023-12-25T22:48:01Z" level=info msg="[UE][NAS] successful NAS MAC verification"
time="2023-12-25T22:48:01Z" level=info msg="[UE][NAS] successful NAS CIPHERING"
time="2023-12-25T22:48:01Z" level=info msg="[UE][NAS] Receive DL NAS Transport"
time="2023-12-25T22:48:01Z" level=info msg="[UE][NAS] Receiving PDU Session Establishment Accept"
time="2023-12-25T22:48:01Z" level=info msg="[UE][NAS] PDU session QoS RULES: [1 0 6 49 49 1 1 255 1]"
time="2023-12-25T22:48:01Z" level=info msg="[UE][NAS] PDU session DNN: internet"
time="2023-12-25T22:48:01Z" level=info msg="[UE][NAS] PDU session NSSAI -- sst: 1 sd: 000"
time="2023-12-25T22:48:01Z" level=info msg="[UE][NAS] PDU address received: 10.45.0.2"
time="2023-12-25T22:48:02Z" level=info msg="[UE][GTP] Interface val0000000001 has successfully been configured for UE 10.45.0.2"
time="2023-12-25T22:48:02Z" level=info msg="[UE][GTP] You can do traffic for this UE using VRF vrf0000000001, eg:"
time="2023-12-25T22:48:02Z" level=info msg="[UE][GTP] sudo ip vrf exec vrf0000000001 iperf3 -c IPERF_SERVER -p PORT -t 9000"

In conclusion, I hope this motivates you to try this out in your lab; I bet it will be an awesome learning experience.

I want to express my appreciation to Valentin for his quick response in adding roaming support to PacketRusher (I think it may currently be the only open-source gNB/UE simulator to support roaming).

As usual, I also want to thank Sukchan Lee for his awesome work in adding the 5G roaming feature to the Open5gs project.

References
