This chapter continues the Kubernetes deployment series by adding the core network and preparing the srsRAN gNB on top of the cluster created in Part 1. I assume the cluster is already configured with the CPU Manager settings from earlier chapters and that optional SR-IOV provider support is installed. In this post I deploy Open5GS as the core network and walk through the gNB integration on Kubernetes.
Open5GS on Kubernetes
I will use Open5GS as the 5G core network. It is fully open-source and the Helm charts are maintained by Gradiant on GitHub.
Helm charts: https://github.com/Gradiant/5g-charts
Persistence
There are two deployment options for the Open5GS Helm chart:
with persistent storage for the database, or purely ephemeral. If you do not care about persistence, you can skip this section. To enable persistent storage I use the following PV and PVC manifest:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: db-pv
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/influx/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: open5gs-mongodb-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
This creates a 5 Gi persistent volume mounted at /mnt/influx/data on the host. The PVC requests 3 Gi; its name must match the claim name referenced in your Open5GS values file.
Save it as pv-pvc-open5gs.yaml and apply:
kubectl apply -f ./pv-pvc-open5gs.yaml
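The claim created above then needs to be referenced from the Open5GS values file. Assuming the chart wires MongoDB persistence through the Bitnami MongoDB subchart convention (this is an assumption — check your chart version), the reference might look like this:

```yaml
# Sketch only: the exact key path depends on the chart version.
# `mongodb.persistence.existingClaim` follows the Bitnami MongoDB
# subchart convention; verify it against your values-open5gs.yaml.
mongodb:
  persistence:
    enabled: true
    existingClaim: open5gs-mongodb-claim
```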
Configuration
Before deploying, adjust the Open5GS values.yaml file to match your gNB settings. In the Open5GS folder inside the srsRAN Project Helm repo you will find an example values file.
Important fields:
- PLMN
- TAC
- Slicing
The defaults already match the example configs used in earlier chapters. If you kept the same values you can reuse the example values file as-is.
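For orientation, a PLMN/TAC/slicing block in a values file could look something like the sketch below. The key names here are hypothetical — take the real structure from the example values file in the srsRAN Helm repo; only the values (PLMN 00101, TAC 7, SST 1) are the ones used throughout this series:

```yaml
# Hypothetical layout -- copy the real key names from the example
# values file; only the values themselves matter here.
amf:
  mcc: "001"
  mnc: "01"
  tac: 7
  sst: 1
```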
Deployment
Deploy Open5GS using:
helm install open5gs -n open5gs oci://registry-1.docker.io/gradiant/open5gs --version 2.2.6 -f ./values-open5gs.yaml --create-namespace
You should see all components start inside the open5gs namespace. Some Pods may restart a few times at first. If not all Pods are healthy after about 5 minutes, something is wrong; refer to the chart’s GitHub repo for troubleshooting.
Configure Subscribers
Once Open5GS is running, add subscribers through the WebUI.
You can access it by port-forwarding the service and opening the UI in your browser.
First, get the name of the Open5GS WebUI Pod:
kubectl get pods -n open5gs
Find the WebUI pod and forward the connection.
kubectl port-forward -n open5gs pod/<webui-pod-name> 3000:3000
Now you can access the WebUI in your browser at http://127.0.0.1:3000. The default username is admin and the password is 1423. Add a new subscriber by clicking the plus button on the bottom right, enter IMSI, KI and OPC, and save the entry.
If you want permanent access to the Open5GS WebUI without relying on port-forwarding, you can expose it using a NodePort service. This is configured directly in the values-open5gs.yaml file under the webui section. NodePorts must be chosen from the default NodePort range 30000–32767. In the example below, the WebUI is exposed on port 30001 of the node.
webui:
  enabled: true
  services:
    http:
      type: NodePort
      ports:
        http: 9999
      nodePorts:
        http: 30001
Once applied, the WebUI will be reachable at http://<node-ip>:30001 from your browser.
For more info on how to program your own physical USIM cards have a look at Chapter 7 of this series.
Deploying the srsRAN gNB on Kubernetes
As with previous posts, I prefer cloning the Helm repo directly instead of using a Helm registry. This gives access to the newest updates and makes experimenting easier.
git clone https://github.com/srsran/srsRAN_Project_helm
Features of the srsRAN Helm Chart
The Helm chart supports:
- HostNetwork mode for direct NIC access (privileged)
- SR-IOV support when using the device plugin (non-privileged)
- Automatic MAC address replacement in the config
- Custom log handling so logs persist after Pod restarts
- Dynamic HAL arguments for DPDK usage
- Multi-cell support
- O1 interface support
- Exposing metrics on a TCP port
Configuration
Here are the main chart sections explained. I do not cover SMO or LoadBalancer settings here, only the parts needed for privileged and unprivileged gNB deployments. In upcoming posts I will go into more detail on those two components.
network
Defines whether the Pod uses hostNetwork.
Using hostNetwork gives the container direct access to the NICs.
resources
Modify this if you want to use hugepages for DPDK.
You can use:
- 1G hugepages, or
- 2M hugepages
Set the amount of hugepages required for your gNB deployment.
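As a sketch, a hugepage request for a DPDK-enabled gNB could look like the fragment below. The 2 Gi amount is an assumption — size it for your deployment; note that Kubernetes requires hugepage requests and limits to be equal:

```yaml
# Sketch: 2x 1G hugepages for DPDK. Hugepage requests and limits
# must be equal; adjust the amounts to your setup.
resources:
  limits:
    hugepages-1Gi: 2Gi
    memory: 4Gi
  requests:
    hugepages-1Gi: 2Gi
    memory: 4Gi
```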
metricsService
Configures external metrics (e.g. for Grafana).
Out of scope for this post.
debugging
Controls log handling:
- enabled: mounts the host log directory into the container
- preserveOldLogs: rotates logs with timestamps
- storageCapacity: size of the log directory
- hostPath: path on the host
- containerPath: path inside the container (usually /tmp)
o1
Config for the O1 interface.
Out of scope.
sriovConfig
Configuration for the SR-IOV provider:
- setting the extendedResourceName
- mapping resources to the Pod
config
srsRAN gNB configuration:
- same syntax as the bare-metal version
- copy and paste your bare-metal configs here
Understanding Kubernetes Hostnames
In Kubernetes, IPs are assigned dynamically and should not be set statically. Instead, services are addressed using DNS hostnames. This allows the deployment to continue working across restarts without manual reconfiguration.
Kubernetes resolves services using the following format:
{service-name}.{namespace}.svc.{cluster-domain}
The service name and namespace are usually obvious, but the cluster domain is less visible. You can discover it by inspecting a Pod:
kubectl exec -it <pod> -- cat /etc/resolv.conf
Look for a line starting with search, for example:
search default.svc.srsk8s.bcn svc.srsk8s.bcn localdomain
In this case, the cluster domain is srsk8s.bcn. A valid hostname would then look like:
open5gs-amf.open5gs.svc.srsk8s.bcn
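The naming scheme can be captured in a tiny helper — a sketch only, svc_fqdn is a made-up name — which is handy when templating several service addresses:

```shell
# Hypothetical helper: build a cluster-internal service FQDN
# from service name, namespace and cluster domain.
svc_fqdn() {
  local service="$1" namespace="$2" cluster_domain="$3"
  echo "${service}.${namespace}.svc.${cluster_domain}"
}

# Example with the values from this post:
svc_fqdn open5gs-amf open5gs srsk8s.bcn
```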
Using the bare-metal gNB config
We can reuse the gNB configuration file created in Chapter 7. However, the AMF address must be adjusted. As discussed earlier, using hostnames instead of fixed IP addresses makes the deployment robust against restarts and dynamically changing IPs.
In my setup, the Open5GS AMF is deployed in the open5gs namespace. Its hostname is:
open5gs-amf.open5gs.svc.srsk8s.bcn
With this change, the resulting cu_cp section looks like this:
cu_cp:
  amf:
    addr: open5gs-amf.open5gs.svc.srsk8s.bcn
    port: 38412
    bind_addr: 127.0.0.100
    supported_tracking_areas:
      - tac: 7
        plmn_list:
          - plmn: "00101"
            tai_slice_support_list:
              - sst: 1
The bind_addr value will be replaced automatically by the gNB entrypoint script with the actual IP address of the container at runtime, so it does not need to be updated manually.
I also recommend enabling autostart_stdout_metrics in the log section of the config. This automatically prints the console metrics of the srsRAN gNB when a UE connects.
Privileged Deployment
This is the simplest deployment method, but also the least secure because the Pod has full host access.
Example values:
network:
  hostNetwork: true
securityContext:
  capabilities:
    add: ["SYS_NICE", "NET_ADMIN"]
  privileged: true
Unprivileged Deployment (SR-IOV Plugin)
To run the gNB without elevated privileges, you must install the SR-IOV Network Device Plugin as described in Chapter 8.
The extendedResourceName is constructed as resourcePrefix/resourceName, based on the values defined in your configMap.yaml from Chapter 8.
You do not need to configure the network_interface or du_mac_addr fields manually. Both values are automatically injected by the gNB entrypoint script at startup, using the interface and MAC address assigned by the SR-IOV plugin.
Example:
network:
  hostNetwork: false
sriovConfig:
  enabled: true
  extendedResourceName: "intel.com/intel_sriov_dpdk"
resources:
  limits:
    intel.com/intel_sriov_dpdk: '1'
  requests:
    intel.com/intel_sriov_dpdk: '1'
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    add:
      - IPC_LOCK
      - SYS_ADMIN
      - SYS_RAWIO
      - NET_RAW
      - SYS_NICE
  privileged: false
Debugging Logs
Highly recommended during initial setup:
debugging:
  enabled: true
  preserveOldLogs: true
  storageCapacity: "5Gi"
  hostPath: "/mnt/debugging-logs"
  containerPath: "/tmp"
This writes logs and config files to /mnt/debugging-logs on the host. Keep in mind that these logs can grow very large over time.
Once the gNB is deployed, I recommend setting the log level to warning and receiving all metrics via the TCP socket. I will show this in a future post.
Deploying the gNB
With everything in place, deploy the gNB:
helm install srsran-gnb ./ -n srsran --create-namespace -f ./srsran-values.yaml
Check Pod status:
kubectl get pods -n srsran
Tail logs:
kubectl logs -n srsran <gnb-pod-name> -f
When the config mounts correctly you should see the gNB initialize, load its configuration, and register to the AMF.
Summary
In this post I deployed Open5GS on Kubernetes and prepared the srsRAN gNB for deployment using the Helm charts. With these steps completed, your srsRAN setup on Kubernetes is now ready to run.
In the next chapter I’ll show how to connect Grafana to the gNB, both on bare metal and inside Kubernetes. Once the SMO becomes officially available through OCUDU, I’ll also publish a dedicated post about that.