This wiki describes how to deploy the NONRTRIC components within a Kubernetes cluster.
NONRTRIC comprises several components:
- Control Panel
- Policy Management Service
- Information Coordinator Service
- Non RT RIC Gateway (reuse of an existing Kong proxy is also possible)
- R-App catalogue Service
- Enhanced R-App catalogue Service
- A1 Simulator (3 A1 interface versions - previously called Near-RT RIC A1 Interface)
- A1 Controller (currently using SDNC from ONAP)
- Helm Manager
- Dmaap Adapter Service
- Dmaap Mediator Service
- Use Case rApp O-DU Slice Assurance
- Use Case rApp O-RU Closed Loop Recovery
- CAPIF core
In the it/dep repo, there are helm charts for each of these components. In addition, there is a chart called nonrtric, which is a composition of the components above.
- Text editor, e.g.
ChartMuseum to store the Helm charts on the server; multiple options are available:
Download the it/dep repository. At the time of writing there is no branch for h-release, so it may be necessary to clone from the master branch.
Configuration of components to install
It is possible to configure which of the NONRTRIC components to install, including the controller and A1 simulators. This configuration is made in the override file for the helm package. Edit the following file
The file shown below is a snippet from the override
All parameters beginning with 'install' can be set to 'true' to enable installation of a component and 'false' to disable it.
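As an illustration, the install flags in the override file typically look like the snippet below. The parameter names shown are representative examples; check the actual override file in the it/dep repo for the full and exact list.

```yaml
nonrtric:
  # Set each flag to 'true' to install the component, 'false' to skip it
  installPms: true                  # Policy Management Service
  installA1controller: true         # A1 Controller (SDNC)
  installA1simulator: true          # A1 Simulator
  installControlpanel: true         # Control Panel
  installInformationservice: true   # Information Coordinator Service
  installNonrtricgateway: true      # Non RT RIC Gateway
  installKong: false                # Alternative gateway; do not enable both
```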
For the gateway parameters, e.g.
installKong, only one can be enabled, since the Non RT RIC Gateway and the Kong proxy are alternatives.
There are many other parameters in the file that may require adaptation to fit a certain environment, for example the
namespace and the port of the message router. These integration details are not covered in this guide.
There is a script that packs and installs the components by using the
helm command. The installation uses a values override file like the one shown above. For example, it can be run like this:
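A sketch of such an invocation, assuming the deploy script and example override file shipped in the it/dep repo (script name and override path may differ between releases):

```shell
# Clone the deployment repo
git clone "https://gerrit.o-ran-sc.org/r/it/dep"
# Pack and install the charts using the example values override file
sudo dep/bin/deploy-nonrtric -f dep/nonrtric/RECIPE_EXAMPLE/example_recipe.yaml
```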
RANPM ONLY Installation
The ranpm setup works on Linux/macOS, or on Windows via WSL, using a local or remote kubernetes cluster.
- local kubectl
- kubernetes cluster
- local docker for building images
It is recommended to run the ranpm setup on a kubernetes cluster instead of local docker-desktop etc., as the setup requires a fair amount of computer resources.
Requirements on kubernetes
The demo set can be run on local or remote kubernetes. Kubectl must be configured to point to the applicable kubernetes instance. Nodeports exposed by the kubernetes instance must be accessible by the local machine - basically the kubernetes control plane IP needs to be accessible from the local machine.
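Before installing, it is worth verifying that kubectl points at the intended cluster and that the control plane is reachable. The commands below are standard kubectl and assume nothing beyond a configured kubeconfig:

```shell
# Show which cluster kubectl is currently pointing at
kubectl config current-context
# Verify the control plane answers, and note its IP for nodeport access
kubectl cluster-info
```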
- Latest version of istio installed
- cmd 'envsubst' must be installed (check by cmd: 'type envsubst' )
- cmd 'jq' must be installed (check by cmd: 'type jq' )
The following images need to be built manually. If a remote or multi-node cluster is used, then an image repo needs to be available to push the built images to. If an external repo is used, use the same repo for all built images and configure the repo name in
helm/global-values.yaml (the parameter value of extimagerepo shall have a trailing '/').
Build the following images (build instructions are in each directory):
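For a remote registry, the build-and-push step for each image could look like the sketch below. The image name and registry host are placeholders; follow the build instructions in each directory for the real names.

```shell
# Example for one image; repeat per image directory.
# 'myregistry.example.com/' is a placeholder and must match the
# extimagerepo value configured in helm/global-values.yaml.
docker build -t myregistry.example.com/pm-rapp:latest .
docker push myregistry.example.com/pm-rapp:latest
```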
The installation is done by a few scripts. The main part of the ranpm is installed by a single script; additional parts can then be added on top. All installations in kubernetes are made by helm charts.
The following scripts are provided for installation (install-nrt.sh must be run first):
The kubeconfig file of the local cluster should be aligned to the internal IP of the cluster's control plane node.
- install-nrt.sh : Installs the main parts of the ranpm setup
- install-pm-log.sh : Installs the producer for influx db
- install-pm-influx-job.sh : Sets up an alternative job to produce data stored in influx db.
- install-pm-rapp.sh : Installs a rapp that subscribes to and prints out received data
There is a corresponding uninstall script for each install script. However, it is enough to just run
`uninstall-nrt.sh` and `uninstall-pm-rapp.sh`.
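So a full teardown of the setup described above can be as simple as:

```shell
# Remove the main ranpm installation and the optional rapp
./uninstall-nrt.sh
./uninstall-pm-rapp.sh
```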
Exposed ports to APIs
All APIs are exposed on individual port numbers (nodeports) on the address of the kubernetes control plane.
Keycloak API accessed via proxy (the proxy is needed to make keycloak issue tokens with the internal address of keycloak).
- nodeport: 31784
OPA rules bundle server
Server for posting updated OPA rules.
- nodeport: 32201
Information Coordinator Service
Direct access to the ICS API.
- nodeports (http and https): 31823, 31824
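As a quick smoke test, the ICS API can be queried via its http nodeport. The control plane IP is environment specific, and the path below assumes the standard ICS data-producer API:

```shell
# Replace <control-plane-ip> with the IP of the kubernetes control plane;
# lists the currently registered information types
curl "http://<control-plane-ip>:31823/data-producer/v1/info-types"
```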
Direct access to the Ves-Collector
- nodeports (http and https): 31760, 31761
Exposed ports to admin tools
As part of the ranpm installation, a number of admin tools are installed. The tools are accessed via a browser on individual port numbers (nodeports) on the address of the kubernetes control plane.
Keycloak admin console
Admin tool for keycloak.
- nodeport : 31788
- user: admin
- password: admin
With this tool the topics, consumers, etc. can be viewed.
- nodeport: 31767
Browser for minio filestore.
- nodeport: 31768
- user: admin
- password: adminadmin
Browser for influx db.
- nodeport: 31812
- user: admin
- password: mySuP3rS3cr3tT0keN
Result of the installation
The installation will create one helm release, and all created kubernetes objects will be put in a namespace. The namespace name is 'nonrtric' and cannot be changed.
Once the installation is done, you can check the created kubernetes objects by using the command
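For example, since the namespace is fixed to 'nonrtric':

```shell
# List all pods and services created by the installation
kubectl get pods -n nonrtric
kubectl get services -n nonrtric
```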
Example : Deployed pods when all components are enabled:
- After successful installation, the control panel shows "No Type" as the policy type, as shown below.
- If there is no policy type shown and the UI looks like below, the setup can be investigated with the steps below. (It could also be due to a synchronization delay, which resolves automatically after a few minutes.)
- Verify the A1 PMS logs to make sure that the connection between A1 PMS and a1controller is successful.
Command to check PMS logs:
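A sketch of the log check, assuming the PMS pod is named policymanagementservice-0 (verify the actual pod name with `kubectl get pods -n nonrtric`):

```shell
# Follow the Policy Management Service logs and look for A1 connection errors
kubectl logs -f -n nonrtric policymanagementservice-0
```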
Command to enable debug logs in PMS (the command below should be executed inside the k8s pod, or the host address needs to be updated with the relevant port forwarding):
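One way to do this, assuming PMS exposes the Spring Boot actuator on port 8081 inside the pod and uses the logger name shown (both are assumptions; check the release in use):

```shell
# Raise the PMS log level to DEBUG at runtime via the actuator loggers endpoint;
# the logger name below is assumed to be the PMS root package
curl -X POST "http://localhost:8081/actuator/loggers/org.onap.ccsdk.oran.a1policymanagementservice" \
  -H "Content-Type: application/json" \
  -d '{"configuredLevel": "DEBUG"}'
```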
Try removing the controller information in the specific simulator configuration and verify that the simulators are working without the a1controller.
curl can be used in the control panel pod.
There is a script that uninstalls the NONRTRIC components. It is simply run like this:
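Assuming the same it/dep repo layout as for installation (the script name is representative; check the repo's bin directory):

```shell
# Remove the NONRTRIC helm release and its kubernetes objects
sudo dep/bin/undeploy-nonrtric
```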
Introduction to Helm Charts
In NONRTRIC we use Helm charts as the packaging manager for kubernetes. Helm charts help developers to package, configure, and deploy applications and services into a kubernetes environment.
For more information, refer to the links below: