The deployment of telecommunication network functions was largely a manual process until the advent of 5th generation (5G) technology. 5G requires that network functions move from a monolithic architecture toward modularized and containerized patterns. This has opened up the possibility of introducing DevOps-based deployment principles, which are well established and widely adopted in the IT world, to the network domain.
Even after containerization, 5G network functions remain quite different from traditional IT applications because of their strict requirements on the underlying infrastructure, including specialized I/O acceleration technologies (SR-IOV/DPDK) and network plugins (Multus) that provide the performance needed to handle mission-critical, real-time traffic. This calls for carefully segregating the network deployment process into distinct "functional layers" of DevOps functionality that, when executed in the correct order, deliver a fully automated deployment that aligns closely with established IT DevOps capabilities.
This post provides a view of how these layers should be managed and implemented across different teams.
The need for DevOps-based 5G network rollout
5G rollout comes with the following requirements, which make it essential to aggressively automate the deployment and management process (as opposed to the largely manual processes used for earlier technologies such as 4G):
◉ Pace of rollout: 5G networks are deployed at record speeds to achieve coverage and market share.
◉ Public cloud support: Many CSPs use hyperscalers like AWS to host their 5G network functions, which requires automated deployment and lifecycle management.
◉ Hybrid cloud support: Some network functions must be hosted in a private data center, which also requires the ability to place network functions dynamically and automatically.
◉ Multicloud support: In some cases, multiple hyperscalers are necessary to distribute the network.
◉ Evolving standards: New and evolving standards, such as Open RAN, require continuous updates and automated testing.
◉ Growing vendor ecosystems: Open standards and APIs mean many new vendors are developing network functions that require continuous interoperability testing support.
All of the above factors call for a highly automated process that can deploy, redeploy, place, terminate, and test 5G network functions on demand. This cannot be achieved with the traditional approach of manually deploying and managing network functions.
Four layers to design with DevOps principles
There are four “layers” that must be designed with DevOps processes in mind:
1. Infrastructure: This layer is responsible for deploying the cloud (private/public) infrastructure that hosts the network functions. It automates the deployment of the virtual private cloud, clusters, node groups, security policies, and other resources the network function requires, and it ensures the correct infrastructure type is selected along with the CNIs the network function needs (e.g., SR-IOV, Multus). (See the infrastructure sketch after this list.)
2. Application/network function: This layer is responsible for installing network functions on the infrastructure by running Helm commands and post-install validation scripts. It also takes care of major upgrades to the network function. (See the application sketch after this list.)
3. Configuration: This layer takes care of any new Day 2 metadata or configuration that must be loaded onto the network function, for example new metadata to support slice templates in the Policy Control Function (PCF). (See the configuration sketch after this list.)
4. Testing: This layer is responsible for running automated tests against the various functionalities supported by the network functions. (See the testing sketch after this list.)
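To make the infrastructure layer concrete, here is a minimal sketch of a pipeline step that applies a Terraform configuration for the VPC, cluster, and node groups. The module path, environment name, and variable file are hypothetical placeholders; a real pipeline would also manage remote state, CNI add-ons such as Multus, and approval gates.

```python
"""Minimal sketch of an infrastructure-layer pipeline step (paths and variables are hypothetical)."""
import subprocess


def run(cmd: list[str], cwd: str) -> None:
    # Fail the pipeline step immediately if any command returns a non-zero exit code.
    subprocess.run(cmd, cwd=cwd, check=True)


def provision_infrastructure(module_dir: str = "terraform/5g-core", env: str = "dev") -> None:
    # Initialize providers/modules, then apply the environment-specific variables.
    run(["terraform", "init", "-input=false"], cwd=module_dir)
    run(["terraform", "apply", "-input=false", "-auto-approve", f"-var-file={env}.tfvars"],
        cwd=module_dir)


if __name__ == "__main__":
    provision_infrastructure()
```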
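For the application layer, an install step might look like the sketch below. The chart location, release name, and namespace are assumptions, and the post-install validation here is a simple rollout check; a vendor CNF would typically ship its own validation scripts.

```python
"""Minimal sketch of an application-layer pipeline step (chart, release, and namespace are hypothetical)."""
import subprocess


def run(cmd: list[str]) -> None:
    subprocess.run(cmd, check=True)


def deploy_cnf(release: str = "pcf",
               chart: str = "oci://registry.example.com/charts/pcf",
               namespace: str = "5g-core",
               values_file: str = "values-dev.yaml") -> None:
    # Idempotent install/upgrade of the network function via Helm.
    run(["helm", "upgrade", "--install", release, chart,
         "--namespace", namespace, "--create-namespace",
         "-f", values_file, "--wait", "--timeout", "15m"])
    # Post-install validation: wait for the CNF workload to become Ready.
    run(["kubectl", "rollout", "status", f"deployment/{release}",
         "-n", namespace, "--timeout=300s"])


if __name__ == "__main__":
    deploy_cnf()
```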
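The configuration layer typically pushes Day 2 data over the network function's management interface. The sketch below sends a slice-template payload to a hypothetical configuration endpoint; the URL, payload schema, and token handling are assumptions, since the real interface is vendor-specific (often RESTCONF or a proprietary REST API).

```python
"""Minimal sketch of a configuration-layer (Day 2) pipeline step (endpoint and schema are hypothetical)."""
import os

import requests


def load_slice_template(base_url: str, template: dict) -> None:
    # The API token is assumed to be injected into the pipeline environment.
    resp = requests.put(
        f"{base_url}/config/slice-templates/{template['name']}",
        json=template,
        headers={"Authorization": f"Bearer {os.environ['CONFIG_API_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    load_slice_template(
        "https://pcf.example.internal",
        {"name": "embb-gold", "sst": 1, "latency-ms": 20, "max-throughput-mbps": 500},
    )
```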
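Finally, the testing layer runs automated checks against the deployed functions. A minimal, pytest-style sketch is shown below; the endpoints and expected values are hypothetical and would be replaced by the operator's or vendor's release and regression suites.

```python
"""Minimal sketch of a testing-layer pipeline step, run with pytest (endpoints are hypothetical)."""
import os

import requests

BASE_URL = os.environ.get("CNF_BASE_URL", "https://pcf.example.internal")


def test_cnf_is_healthy():
    # Basic availability check against the CNF's health endpoint.
    resp = requests.get(f"{BASE_URL}/health", timeout=10)
    assert resp.status_code == 200


def test_slice_template_is_served():
    # Regression check: configuration loaded by the previous layer is visible to clients.
    resp = requests.get(f"{BASE_URL}/config/slice-templates/embb-gold", timeout=10)
    assert resp.status_code == 200
    assert resp.json().get("sst") == 1
```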
Each of the above layers has its own implementation of DevOps toolchains, with a reference provided in the diagram above. Layers 1 and 2 can be further enhanced with a GitOps-based architecture for lights-out management of the application.
Best practices
It is very important to have a well-defined framework that captures the scope, dependencies, and ownership of each layer. The following table presents our view of how this should be managed:
Pipeline | Infrastructure | Application | Configuration | Testing
--- | --- | --- | --- | ---
Scope (functionality to automate) | VPC, subnets, EKS cluster, security groups, routes | CNF installation, CNF upgrades | CSP slice templates, CSP RFS templates, releases and bug fixes | Release testing, regression testing
Phase (applicable network function lifecycle phase) | Day 1 (infrastructure setup) | Day 0 (CNF installation), Day 1 (CNF setup) | Day 2+, on demand | Day 2+, on demand
Owner (who owns development and maintenance of the pipeline?) | IBM/cloud vendor | IBM/SI | IBM/SI | IBM/SI
Source control (where source artifacts are stored; any change triggers the pipeline, depending on the use case) | Vendor detailed design | ECR repo (images), Helm package | Code commit (custom code) | Code commit (test data)
Target integration (how the pipeline interacts during execution) | IaC (e.g., Terraform), AWS APIs | Helm-based | RESTCONF/APIs | RESTCONF/APIs
Dependency between pipelines | None | Infrastructure pipeline completed | Base CNF installed | Base CNF installed, release deployed
Promotion across environments | Dev, Test/Pre-prod, Prod | Dev, Test/Pre-prod, Prod | Dev, Test/Pre-prod, Prod | Dev, Test/Pre-prod, Prod

As the table shows, there are dependencies between these pipelines. To make this end-to-end process work efficiently across multiple layers, you need an intent-based orchestration solution that can manage the dependencies between the various pipelines and perform supporting activities in the surrounding CSP ecosystem, such as slice inventory and catalog updates.
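As a simple illustration of such dependency management, the sketch below sequences the four pipelines with a topological sort. The pipeline names and dependency graph mirror the table above; the actions are stand-ins, and a real intent-based orchestrator would add retries, rollbacks, environment promotion, and integration with inventory and catalog systems.

```python
"""Minimal sketch of dependency-aware pipeline sequencing (pipeline actions are placeholders)."""
from graphlib import TopologicalSorter

# Each pipeline maps to the set of pipelines it depends on, per the table above.
DEPENDENCIES = {
    "infrastructure": set(),
    "application": {"infrastructure"},
    "configuration": {"application"},
    "testing": {"application", "configuration"},
}

# Placeholder actions; in practice each entry would trigger the corresponding pipeline.
PIPELINES = {
    "infrastructure": lambda: print("terraform apply ..."),
    "application": lambda: print("helm upgrade --install ..."),
    "configuration": lambda: print("push slice templates ..."),
    "testing": lambda: print("run release/regression suite ..."),
}


def run_all() -> None:
    # Execute the pipelines in an order that satisfies every dependency.
    for name in TopologicalSorter(DEPENDENCIES).static_order():
        PIPELINES[name]()


if __name__ == "__main__":
    run_all()
```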
Telecommunications solutions from IBM
This post provides a framework and approach that, when orchestrated correctly, enables a completely automated DevOps-/GitOps-style deployment of 5G network functions.
In our experience, the deciding factor in the success of such a framework is the selection of a partner with experience and a proven orchestration solution.
Source: ibm.com