Recently, the srsRAN Project transitioned into OCUDU under the umbrella of the Linux Foundation. Along with this transition came changes in governance, licensing, and long-term direction toward carrier-grade deployments.
As part of this transition, I refactored the OCUDU gNB Helm chart to better reflect the new scope of the project. The refactor focuses on making deployment choices explicit, improving structure, and supporting scenario-driven configurations rather than a single fixed setup.
This blog series, OCUDU on Kubernetes, builds on that work. The goal is to introduce the ocudu-gnb Helm chart, explain its features, and then show how those features can be used in practice through concrete deployment scenarios.
What OCUDU is
OCUDU stands for Open Centralized Unit / Distributed Unit. It is an open-source CU/DU platform for 5G and future 6G RAN deployments.
OCUDU focuses explicitly on the CU and DU layers of the RAN. It is not a full end-to-end mobile network stack, not an SMO, and not an orchestration framework. It is a foundational building block intended to be integrated into larger systems.
The project is hosted under the Linux Foundation, with the goal of providing a neutral, carrier-grade CU/DU implementation that can be used by industry, research, and public institutions alike.
From srsRAN Project to OCUDU
SRS has renamed the srsRAN Project to OCUDU and continues to develop the software under contract with the National Spectrum Consortium (NSC).
The official repository for Helm charts and related code is available at:
https://gitlab.com/ocudu
One of the main changes is the license transition from AGPLv3 to BSD-3-Clause-Open-MPI. The focus of the project has shifted over the years from primarily R&D use cases toward enterprise and carrier-grade deployments.
OCUDU represents a transition of this proven codebase into a different model:
- neutral governance under the Linux Foundation
- permissive licensing to enable wide adoption
- a long-term roadmap oriented toward carrier-grade use cases
You will often see OCUDU described as the Linux of RAN. In practice, this means it aims to play the role of a common, open base layer: not a product, not a vendor solution, but infrastructure that others can build on, extend, and integrate.
OCUDU is intentionally scoped. It does not try to solve every problem in a mobile network. It focuses on doing CU and DU well, and on doing so in a way that is sustainable and interoperable.
Why Kubernetes for OCUDU
I like deploying RAN software on Kubernetes, and this series reflects that preference. Kubernetes is not a requirement for OCUDU, and it is not always the right choice. Bare-metal deployments are still common in production environments, and for good reasons.
Bare-metal setups introduce fewer abstraction layers and can be simpler to bring up initially. They also offer very direct control over hardware, which is important for RAN workloads. At the same time, bare-metal deployments are harder to orchestrate and scale operationally, especially as the network grows. Managing configuration changes, rollouts, and multiple environments quickly becomes manual and error-prone.
From a performance perspective, both bare-metal and Kubernetes deployments can be predictable, but Kubernetes requires more careful tuning. CPU pinning, hugepages, NUMA alignment, and networking configuration must be done correctly to achieve comparable results. I cover these aspects in more detail in my other blog series, Building a Telco Test Lab Using srsRAN, where I describe how to build and tune a telco lab from scratch.
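To make the tuning aspect concrete, the sketch below shows the pod-level side of this in standard Kubernetes terms: a Guaranteed QoS pod (requests equal to limits, with integer CPU counts) so that the kubelet's static CPU manager policy can pin exclusive cores, plus pre-allocated hugepages for DPDK buffers. The numbers are illustrative, and the node itself must also be prepared (hugepages reserved at boot, static CPU manager policy enabled):

```yaml
# Illustrative resource section for a RAN workload pod.
# requests == limits and whole-number CPUs => Guaranteed QoS,
# which allows exclusive CPU pinning under the static CPU manager policy.
resources:
  requests:
    cpu: "8"
    memory: 16Gi
    hugepages-1Gi: 4Gi   # requires hugepages pre-allocated on the node
  limits:
    cpu: "8"
    memory: 16Gi
    hugepages-1Gi: 4Gi
```

Hugepages additionally need a corresponding volume mount (an emptyDir with `medium: HugePages`) inside the pod spec; the snippet above only covers the accounting side.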
Kubernetes provides clear advantages when it comes to reproducibility and iteration speed. Deployments are declarative, environments can be recreated consistently, and changes can be rolled out much faster than in traditional bare-metal setups. Automation and lifecycle management are built in, which becomes increasingly valuable once you move beyond a single node or a single configuration.
In my experience, the tradeoff is that Kubernetes is harder to set up initially, but once it is in place, it saves a significant amount of time and operational overhead. It makes complex systems easier to reason about, modify, and reproduce.
That is why I use Kubernetes as a vehicle to explore OCUDU in this series. The goal is not to claim that Kubernetes solves RAN complexity, but to make OCUDU more approachable by showing how it behaves in concrete, reproducible Kubernetes-based setups.
Helm chart architecture and capabilities
The ocudu-gnb Helm chart is designed as a deployment interface that exposes the main architectural decisions involved in running an OCUDU gNB on Kubernetes. Instead of assuming a single deployment model, the chart provides explicit options for networking, connectivity, observability, and management, and adapts the gNB configuration automatically based on those choices.
OFH interface configuration
One of the most important design choices when deploying a gNB is how the OFH (Open Fronthaul) interface is handled. The Helm chart supports two options.
Option A: hostNetwork
Using hostNetwork: true gives the gNB direct access to the host’s network interfaces. This approach is easy to configure and does not require any additional Kubernetes plugins. It is therefore well suited for test labs and early experimentation.
The downside is that this setup is not isolated. The pod effectively has access to the host network stack, which makes it unsuitable for production deployments from a security and isolation perspective.
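In a values file, enabling this option might look like the following sketch. The key names here are illustrative assumptions, not the chart's actual schema; consult the chart's values.yaml for the real layout:

```yaml
# Illustrative values sketch for a test-lab deployment (key names are
# assumptions; check the ocudu-gnb chart's values.yaml for the real schema).
gnb:
  networking:
    hostNetwork: true   # pod shares the node's network namespace
```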
Option B: SR-IOV Device Plugin
The recommended alternative is using the SR-IOV Device Plugin. This approach allows Kubernetes to pass through specific network interfaces to the gNB pod, instead of exposing the entire host network.
The SR-IOV Device Plugin can currently be used only for DPDK deployments; support for other setups is planned. Compared to hostNetwork, it provides much stronger isolation and aligns better with production requirements, while still allowing precise control over which interface is assigned to the gNB.
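In values terms, this typically means requesting one of the SR-IOV resources the device plugin advertises on the node. Again, the key and resource names below are assumptions for illustration only:

```yaml
# Illustrative values sketch (key and resource names are assumptions):
# request one SR-IOV virtual function for the fronthaul interface.
gnb:
  networking:
    hostNetwork: false
    sriov:
      enabled: true
      resourceName: intel.com/sriov_fh   # as advertised by the device plugin
      count: 1
```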
When a network interface is assigned to the gNB at runtime, the gNB configuration must be updated accordingly. To handle this automatically, the Helm chart includes logic that detects whether the SR-IOV Device Plugin is in use. If so, the entrypoint script retrieves the BDF (the PCI bus/device/function address) and the MAC address of the assigned interface and injects them into the gNB configuration at startup.
This mechanism also supports multi-cell deployments, where multiple sectors are defined per DU and each requires its own interface configuration.
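The substitution step performed at startup can be sketched roughly as follows. The environment variable, placeholder, and file names are assumptions for illustration; the real chart's entrypoint script differs. The SR-IOV device plugin exposes the allocated device's PCI address to the container through an environment variable derived from the resource name, which the script then writes into the config template:

```shell
# Illustrative entrypoint sketch (variable, placeholder, and paths are assumptions).
# The device plugin sets an env var with the allocated PCI address; default it here
# so the sketch is self-contained.
BDF="${PCIDEVICE_OCUDU_FH:-0000:17:02.0}"

# Stand-in for the rendered gNB config template with a placeholder BDF.
printf 'ru_ofh:\n  cells:\n    - network_interface: __OFH_BDF__\n' > /tmp/gnb.yml

# Substitute the runtime-assigned BDF into the config before launching the gNB.
sed "s|__OFH_BDF__|${BDF}|" /tmp/gnb.yml > /tmp/gnb.gen.yml

# Show the resulting cell line with the real BDF injected.
grep network_interface /tmp/gnb.gen.yml
```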
Exposing N2 and N3
In many setups, the 5G core runs outside of the Kubernetes cluster. In this case, the gNB needs to expose its N2 and N3 interfaces externally.
The Helm chart supports this by allowing N2 and N3 to be exposed via a LoadBalancer service. When a LoadBalancer is configured, the chart automatically updates the gNB configuration at runtime and inserts the assigned LoadBalancer IP addresses into the relevant sections. This update is handled by the entrypoint script and does not require manual configuration changes.
Alternatively, N2 and N3 can also be exposed using ClusterIP or NodePort, depending on the environment and connectivity requirements.
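A values sketch for the external-core case might look like this (key names are assumptions; the port numbers are the standard ones for these interfaces):

```yaml
# Illustrative values sketch (key names are assumptions): expose N2 and N3
# to a 5G core running outside the cluster.
gnb:
  services:
    n2:
      type: LoadBalancer   # NGAP over SCTP, standard port 38412, towards the AMF
    n3:
      type: LoadBalancer   # GTP-U over UDP, standard port 2152, towards the UPF
```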
O1 interface and management plane
The Helm chart includes optional support for the O1 management interface.
The O1 architecture is implemented using two sidecar containers: a NETCONF server and an O1 adapter. The NETCONF server stores the management configuration and receives updates. When a configuration change occurs, it signals the O1 adapter.
The O1 adapter then retrieves the updated configuration, rewrites the gNB configuration file, and triggers a gNB restart. On restart, the standard entrypoint logic is executed again, ensuring that all runtime-specific adaptations, such as SR-IOV interface assignment, are applied consistently.
This design keeps management-plane logic separate from the gNB process itself and allows configuration changes to be applied in a controlled and reproducible way.
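Enabling the O1 sidecars could be sketched in values form as follows; the key names are assumptions, and only the NETCONF port is a standard value (830 is the IANA-assigned port for NETCONF over SSH):

```yaml
# Illustrative values sketch (key names are assumptions): enable the
# management-plane sidecars alongside the gNB container.
gnb:
  o1:
    enabled: true     # adds the NETCONF server and O1 adapter sidecars
    netconf:
      port: 830       # standard NETCONF-over-SSH port
```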
Metrics exposure
The gNB can expose runtime metrics via a TCP interface, covering multiple protocol layers. The Helm chart allows these metrics to be exposed using LoadBalancer, NodePort, or ClusterIP, depending on how and where they should be consumed. Metrics can be collected using the SRS Grafana Stack for visualization, or by custom tooling if preferred.
This makes it possible to monitor and compare different deployment scenarios without additional instrumentation.
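A minimal metrics exposure sketch, again with assumed key names, might look like this:

```yaml
# Illustrative values sketch (key names are assumptions): expose the gNB
# metrics TCP endpoint for a Grafana stack or custom collector.
gnb:
  metrics:
    enabled: true
    service:
      type: ClusterIP   # or NodePort / LoadBalancer for consumers outside the cluster
```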
Log handling and persistence
gNB logs are essential for debugging and validation, but they can also grow quickly if verbosity is increased.
To give users full control over log handling, the Helm chart supports multiple persistence options. Logs can be written to PVC-backed storage for more persistent setups, or to hostPath volumes for simpler test lab environments. This allows users to decide where logs are stored and how long they are retained, based on their operational needs.
By making log handling explicit and configurable, the chart avoids hardcoded assumptions and supports both lightweight experimentation and more controlled environments.
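The two persistence modes described above might be selected in values form like this (key names and the storage class are assumptions for illustration):

```yaml
# Illustrative values sketch (key names are assumptions): PVC-backed logs
# for persistent setups, or hostPath for a simple single-node lab.
gnb:
  logs:
    persistence:
      enabled: true
      type: pvc                # alternative: hostPath
      size: 10Gi
      storageClassName: local-path
```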
How to read the rest of this series
This post provides the conceptual and architectural context.
In the following posts, I will focus on concrete deployment scenarios using this Helm chart. Each post will cover one or two configurations in detail, explain why you would choose them, and highlight their tradeoffs.
The series starts with the default SR-IOV/DPDK deployment and then explores alternative deployment models, platform-specific environments, and management-plane integration, such as using the O1 interface.
Stay tuned!