This page describes how to get the release E version of Non-RT RIC up and running locally with three separate Near-RT RIC A1 simulator docker containers providing STD_1.1.3, STD_2.0.0 and OSC_2.1.0 versions of the A1 interface.
All components of the Non-RT RIC run as docker containers and communicate via a private docker network, with the container ports also exposed on localhost. Details of the architecture can be found on the Release E page.
Prerequisites: Docker and docker-compose (latest versions).
The images used for running the Non-RT RIC can be selected from the table below, depending on whether the images are built manually (snapshot images) or staging or release images shall be used.
In general, there is no need to build the images manually unless the user has made code changes, so either staging or release images should be used. For instructions on how to build all components, see Release E - Build.
The run commands throughout this page use the staging images and tags. Replace the staging images/tags in the container run commands in the instructions if release or snapshot images are desired.
Component | Release image and version tag | Staging images and version tag | Manual snapshot (only available if manually built) and version tag |
---|---|---|---|
Policy Management Service | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-policy-agent:2.3.0 | nexus3.o-ran-sc.org:10004/o-ran-sc/nonrtric-policy-agent:2.3.0 | o-ran-sc/nonrtric-policy-agent:2.3.0-SNAPSHOT |
Near-RT RIC A1 Simulator | nexus3.o-ran-sc.org:10002/o-ran-sc/a1-simulator:2.2.0 | nexus3.o-ran-sc.org:10004/o-ran-sc/a1-simulator:2.2.0 | o-ran-sc/a1-simulator:latest |
Information Coordinator Service | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-information-coordinator-service:1.2.0 | nexus3.o-ran-sc.org:10004/o-ran-sc/nonrtric-information-coordinator-service:1.2.0 | o-ran-sc/nonrtric-information-coordinator-service:1.2.0-SNAPSHOT |
Non-RT RIC Control Panel | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-controlpanel:2.3.0 | nexus3.o-ran-sc.org:10004/o-ran-sc/nonrtric-controlpanel:2.3.0 | o-ran-sc/nonrtric-controlpanel:2.3.0-SNAPSHOT |
SDNC A1-Controller | nexus3.onap.org:10002/onap/sdnc-image:2.2.3 | Use release version | Use release version |
Gateway | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-gateway:1.1.0 | nexus3.o-ran-sc.org:10004/o-ran-sc/nonrtric-gateway:1.1.0 | o-ran-sc/nonrtric-gateway:1.1.0-SNAPSHOT |
App Catalogue Service | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-r-app-catalogue:1.1.0 | nexus3.o-ran-sc.org:10004/o-ran-sc/nonrtric-r-app-catalogue:1.1.0 | o-ran-sc/nonrtric-r-app-catalogue:1.1.0-SNAPSHOT |
Helm Manager | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-helm-manager:1.1.0 | nexus3.o-ran-sc.org:10004/o-ran-sc/nonrtric-helm-manager:1.1.0 | o-ran-sc/nonrtric-helm-manager:1.1.0-SNAPSHOT |
Dmaap Mediator Producer | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-dmaap-mediator-producer:1.0.0 | nexus3.o-ran-sc.org:10004/o-ran-sc/nonrtric-dmaap-mediator-producer:1.0.0 | o-ran-sc/nonrtric-dmaap-mediator-producer:1.0.0-SNAPSHOT |
Dmaap Adaptor Service | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-dmaap-adaptor:1.0.0 | nexus3.o-ran-sc.org:10004/o-ran-sc/nonrtric-dmaap-adaptor:1.0.0 | o-ran-sc/nonrtric-dmaap-adaptor:1.0.0-SNAPSHOT |
The following ports will be allocated and exposed to localhost for each component. If other port(s) are desired, then the ports need to be replaced in the container run commands in the instructions further below.
Component | Port exposed to localhost (http/https) |
---|---|
Policy Management Service | 8081/8433 |
Near-RT RIC A1 Simulator | 8085/8185, 8086/8186, 8087/8187 |
Information Coordinator Service | 8083/8434 |
Non-RT RIC Control Panel | 8080/8880 |
SDNC A1-Controller | 8282/8443 |
Gateway | 9090 |
App Catalogue Service | 8680/8633 |
Helm Manager | FIXME |
Dmaap Mediator Producer | 9085/9185 |
Dmaap Adaptor Service | 9087/9187 |
The containers need to be connected to the docker network in order to communicate with each other.
Create a private docker network. If another network name is used, all references to 'nonrtric-docker-net' in the container run commands below need to be replaced.
docker network create nonrtric-docker-net
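If the instructions are re-run, `docker network create` fails when the network already exists. A minimal sketch of an idempotent variant (the helper name `ensure_network` is illustrative, not part of the official instructions):

```shell
# Create the docker network only if it does not already exist,
# so this setup step can be re-run safely.
# 'ensure_network' is an illustrative helper name, not from the official instructions.
ensure_network() {
  name="${1:-nonrtric-docker-net}"
  if ! docker network inspect "$name" >/dev/null 2>&1; then
    docker network create "$name"
  fi
}
```

Run `ensure_network` in place of the bare create command if you expect to repeat the setup.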
To support local tests with three separate Near-RT RIC A1 simulator instances, each running one of the three available A1 Policy interface versions:
The Policy Management Service can be configured with or without the SDNC A1-Controller. Choose the appropriate configuration below.
This file is for running without the SDNC A1-Controller
{
  "config": {
    "//description": "Application configuration",
    "ric": [
      {
        "name": "ric1",
        "baseUrl": "http://ric1:8085/",
        "managedElementIds": [ "kista_1", "kista_2" ]
      },
      {
        "name": "ric2",
        "baseUrl": "http://ric2:8085/",
        "managedElementIds": [ "kista_3", "kista_4" ]
      },
      {
        "name": "ric3",
        "baseUrl": "http://ric3:8085/",
        "managedElementIds": [ "kista_5", "kista_6" ]
      }
    ]
  }
}
This file is for running with the SDNC A1-Controller.
{
  "config": {
    "//description": "Application configuration",
    "controller": [
      {
        "name": "a1controller",
        "baseUrl": "https://a1controller:8443",
        "userName": "admin",
        "password": "Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U"
      }
    ],
    "ric": [
      {
        "name": "ric1",
        "baseUrl": "http://ric1:8085/",
        "controller": "a1controller",
        "managedElementIds": [ "kista_1", "kista_2" ]
      },
      {
        "name": "ric2",
        "baseUrl": "http://ric2:8085/",
        "controller": "a1controller",
        "managedElementIds": [ "kista_3", "kista_4" ]
      },
      {
        "name": "ric3",
        "baseUrl": "http://ric3:8085/",
        "controller": "a1controller",
        "managedElementIds": [ "kista_5", "kista_6" ]
      }
    ]
  }
}
To also enable the optional DMaaP interface, add the following config (at the same level as the "ric" entry) to application_configuration.json.
Be sure to update http/host/port below to match the configuration of the used message router.
...
"streams_publishes": {
  "dmaap_publisher": {
    "type": "message-router",
    "dmaap_info": {
      "topic_url": "http://dmaap-mr:3904/events/A1-POLICY-AGENT-WRITE"
    }
  }
},
"streams_subscribes": {
  "dmaap_subscriber": {
    "type": "message-router",
    "dmaap_info": {
      "topic_url": "http://dmaap-mr:3904/events/A1-POLICY-AGENT-READ/users/policy-agent?timeout=15000&limit=100"
    }
  }
},
...
Start the container with the following command. Replace "<absolute-path-to-file>" with the path to the created configuration file. The configuration file is mounted into the container. WARN messages will appear in the log until the simulators are started.
docker run --rm -v <absolute-path-to-file>/application_configuration.json:/opt/app/policy-agent/data/application_configuration.json -p 8081:8081 -p 8433:8433 --network=nonrtric-docker-net --name=policy-agent-container nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-policy-agent:2.3.0
Wait about one minute to allow the container to start and read the configuration. Then run the command below in another terminal. The output should match the configuration in the file - all three rics (ric1, ric2 and ric3) should be included. Note that each ric has the state "UNAVAILABLE" until the simulators are started.
Note: If the policy management service is started with config for the SDNC A1 Controller (the second config option), do the steps described in section Run the A1 Controller Docker Container below before proceeding.
curl localhost:8081/a1-policy/v2/rics
Expected output:
{"rics":[{"ric_id":"ric1","managed_element_ids":["kista_1","kista_2"],"policytype_ids":[],"state":"UNAVAILABLE"},{"ric_id":"ric3","managed_element_ids":["kista_5","kista_6"],"policytype_ids":[],"state":"UNAVAILABLE"},{"ric_id":"ric2","managed_element_ids":["kista_3","kista_4"],"policytype_ids":[],"state":"UNAVAILABLE"}]}
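The manual inspection of this response can also be scripted. A sketch, assuming the response shape shown above, that counts the rics reporting state "UNAVAILABLE" with plain grep:

```shell
# Count the rics reporting state UNAVAILABLE in a saved response.
# The sample below is the expected output shown above; in a live check,
# 'response' would instead be set from: response=$(curl -s localhost:8081/a1-policy/v2/rics)
response='{"rics":[{"ric_id":"ric1","managed_element_ids":["kista_1","kista_2"],"policytype_ids":[],"state":"UNAVAILABLE"},{"ric_id":"ric3","managed_element_ids":["kista_5","kista_6"],"policytype_ids":[],"state":"UNAVAILABLE"},{"ric_id":"ric2","managed_element_ids":["kista_3","kista_4"],"policytype_ids":[],"state":"UNAVAILABLE"}]}'
echo "$response" | grep -o '"state":"UNAVAILABLE"' | wc -l   # prints 3 for this sample
```

All three rics are expected to be UNAVAILABLE at this point; the count drops to 0 once the simulators are started.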
For troubleshooting/verification purposes, the full Swagger API can be viewed at: http://localhost:8081/swagger-ui/index.html?configUrl=/v3/api-docs/swagger-config
This step is only applicable if the configuration for the Policy Management Service includes the SDNC A1-Controller (the second config option above).
Create the docker compose file
version: '3'

networks:
  default:
    external:
      name: nonrtric-docker-net

services:
  db:
    image: nexus3.o-ran-sc.org:10001/mariadb:10.5
    container_name: sdncdb
    networks:
      - default
    ports:
      - "3306"
    environment:
      - MYSQL_ROOT_PASSWORD=itsASecret
      - MYSQL_ROOT_HOST=%
      - MYSQL_USER=sdnctl
      - MYSQL_PASSWORD=gamma
      - MYSQL_DATABASE=sdnctl
    logging:
      driver: "json-file"
      options:
        max-size: "30m"
        max-file: "5"

  a1controller:
    image: nexus3.onap.org:10002/onap/sdnc-image:2.1.2
    depends_on:
      - db
    container_name: a1controller
    networks:
      - default
    entrypoint: ["/opt/onap/sdnc/bin/startODL.sh"]
    ports:
      - 8282:8181
      - 8443:8443
    links:
      - db:dbhost
      - db:sdnctldb01
      - db:sdnctldb02
    environment:
      - MYSQL_ROOT_PASSWORD=itsASecret
      - MYSQL_USER=sdnctl
      - MYSQL_PASSWORD=gamma
      - MYSQL_DATABASE=sdnctl
      - SDNC_CONFIG_DIR=/opt/onap/sdnc/data/properties
      - SDNC_BIN=/opt/onap/sdnc/bin
      - ODL_CERT_DIR=/tmp
      - ODL_ADMIN_USERNAME=admin
      - ODL_ADMIN_PASSWORD=Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U
      - ODL_USER=admin
      - ODL_PASSWORD=Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U
      - SDNC_DB_INIT=true
      - A1_TRUSTSTORE_PASSWORD=a1adapter
      - AAI_TRUSTSTORE_PASSWORD=changeit
    logging:
      driver: "json-file"
      options:
        max-size: "30m"
        max-file: "5"
Start the SDNC A1 controller with the following command, using the created docker-compose file.
docker-compose up
Open this url in a web browser to verify that the SDNC A1 Controller is up and running. It may take a few minutes until the url is available.
http://localhost:8282/apidoc/explorer/index.html#/controller%20A1-ADAPTER-API
Username/password: admin/Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U
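The browser check above can also be scripted as a readiness probe. A sketch (the helper name `wait_for_http_200` and the retry counts are illustrative) that polls until the controller answers with HTTP 200:

```shell
# Poll a URL until it returns HTTP 200, or fail after a number of attempts.
# 'wait_for_http_200' is an illustrative helper, not part of the official instructions.
wait_for_http_200() {
  url="$1"
  attempts="${2:-60}"   # 60 x 5s = wait up to 5 minutes
  i=0
  while [ "$i" -lt "$attempts" ]; do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
    if [ "$code" = "200" ]; then
      return 0
    fi
    i=$((i + 1))
    sleep 5
  done
  return 1
}
# Example, using the url and username/password from above:
# wait_for_http_200 "http://admin:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U@localhost:8282/apidoc/explorer/index.html"
```

The same helper can be reused for the other "wait until started" steps on this page.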
The Karaf logs of the A1 controller can be followed, e.g., by using the command:
docker exec a1controller sh -c "tail -f /opt/opendaylight/data/log/karaf.log"
Start a simulator for each ric defined in the application_configuration.json created above. Each simulator will use one of the currently available A1 interface versions.
Ric1
docker run --rm -p 8085:8085 -p 8185:8185 -e A1_VERSION=OSC_2.1.0 -e ALLOW_HTTP=true --network=nonrtric-docker-net --name=ric1 nexus3.o-ran-sc.org:10002/o-ran-sc/a1-simulator:2.2.0
Ric2
docker run --rm -p 8086:8085 -p 8186:8185 -e A1_VERSION=STD_1.1.3 -e ALLOW_HTTP=true --network=nonrtric-docker-net --name=ric2 nexus3.o-ran-sc.org:10002/o-ran-sc/a1-simulator:2.2.0
Ric3
docker run --rm -p 8087:8085 -p 8187:8185 -e A1_VERSION=STD_2.0.0 -e ALLOW_HTTP=true --network=nonrtric-docker-net --name=ric3 nexus3.o-ran-sc.org:10002/o-ran-sc/a1-simulator:2.2.0
Wait at least one minute to let the policy management service synchronise the rics. Then run the command below in another terminal. The output should match the configuration in the file. Note that each ric now has the state "AVAILABLE".
curl localhost:8081/a1-policy/v2/rics
Expected output - all rics should indicate state "AVAILABLE":
{"rics":[{"ric_id":"ric1","managed_element_ids":["kista_1","kista_2"],"policytype_ids":[],"state":"AVAILABLE"},{"ric_id":"ric3","managed_element_ids":["kista_5","kista_6"],"policytype_ids":[],"state":"AVAILABLE"},{"ric_id":"ric2","managed_element_ids":["kista_3","kista_4"],"policytype_ids":[""],"state":"AVAILABLE"}]}
The simulators using versions STD_2.0.0 and OSC_2.1.0 support policy types. Run the commands below to add one policy type to each of ric1 and ric3.
Create the file 'osc_pt1.json' containing the policy type for ric1:
{
  "name": "pt1",
  "description": "pt1 policy type",
  "policy_type_id": 1,
  "create_schema": {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "OSC_PT1_0.1.0",
    "description": "QoS policy type",
    "type": "object",
    "properties": {
      "scope": {
        "type": "object",
        "properties": {
          "ueId": { "type": "string" },
          "qosId": { "type": "string" }
        },
        "additionalProperties": false,
        "required": [ "ueId", "qosId" ]
      },
      "statement": {
        "type": "object",
        "properties": {
          "priorityLevel": { "type": "number" }
        },
        "additionalProperties": false,
        "required": [ "priorityLevel" ]
      }
    }
  }
}
Put the policy type to ric1
curl -X PUT -v "http://localhost:8085/a1-p/policytypes/123" -H "accept: application/json" \
  -H "Content-Type: application/json" --data-binary @osc_pt1.json
Create the file 'std_pt1.json' containing the policy type for ric3:
{
  "policySchema": {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "STD_QOS_0_2_0",
    "description": "STD QOS policy type",
    "type": "object",
    "properties": {
      "scope": {
        "type": "object",
        "properties": {
          "ueId": { "type": "string" },
          "qosId": { "type": "string" }
        },
        "additionalProperties": false,
        "required": [ "ueId", "qosId" ]
      },
      "qosObjectives": {
        "type": "object",
        "properties": {
          "priorityLevel": { "type": "number" }
        },
        "additionalProperties": false,
        "required": [ "priorityLevel" ]
      }
    }
  },
  "statusSchema": {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "STD_QOS_0.2.0",
    "description": "STD QOS policy type status",
    "type": "object",
    "properties": {
      "enforceStatus": { "type": "string" },
      "enforceReason": { "type": "string" }
    },
    "additionalProperties": false,
    "required": [ "enforceStatus" ]
  }
}
Put the policy type to ric3
curl -X PUT -v "http://localhost:8087/policytype?id=std_pt1" -H "accept: application/json" -H "Content-Type: application/json" --data-binary @std_pt1.json
Wait one minute to let the policy management service synchronise the types with the simulators.
List the synchronised types.
curl localhost:8081/a1-policy/v2/policy-types
Expected output:
{"policytype_ids":["","123","std_pt1"]}
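This check can also be scripted. A sketch, assuming the response shape shown above, that verifies both added types appear in the list:

```shell
# Verify that the two added policy types were synchronised.
# The sample below is the expected output shown above; in a live check,
# 'types' would instead be set from: types=$(curl -s localhost:8081/a1-policy/v2/policy-types)
types='{"policytype_ids":["","123","std_pt1"]}'
for t in 123 std_pt1; do
  if echo "$types" | grep -q "\"$t\""; then
    echo "type $t synchronised"
  else
    echo "type $t missing"
  fi
done
```

For this sample, both iterations print "synchronised"; a "missing" line indicates the synchronisation has not completed yet.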
Run the following command to start the information coordinator service.
docker run --rm -p 8083:8083 -p 8434:8434 --network=nonrtric-docker-net --name=information-service-container nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-information-coordinator-service:1.2.0
Verify that the Information Coordinator Service is started and responding (the response should be an empty array).
curl localhost:8083/data-producer/v1/info-types
Expected output:
[ ]
For troubleshooting/verification purposes, the full Swagger API can be viewed at: http://localhost:8083/swagger-ui/index.html?configUrl=/v3/api-docs/swagger-config
The Gateway exposes the interfaces of the Policy Management Service and the Information Coordinator Service on a single port. This single port is then used by the control panel to access both services.
Create the config file for the gateway.
server:
  port: 9090
spring:
  cloud:
    gateway:
      httpclient:
        ssl:
          useInsecureTrustManager: true
        wiretap: true
      httpserver:
        wiretap: true
      routes:
      - id: A1-Policy
        uri: https://policy-agent-container:8433
        predicates:
        - Path=/a1-policy/**
      - id: A1-EI
        uri: https://information-service-container:8434
        predicates:
        - Path=/data-producer/**
management:
  endpoint:
    gateway:
      enabled: true
  endpoints:
    web:
      exposure:
        include: "gateway,loggers,logfile,health,info,metrics,threaddump,heapdump"
logging:
  level:
    ROOT: ERROR
    org.springframework: ERROR
    org.springframework.cloud.gateway: INFO
    reactor.netty: INFO
  file:
    name: /var/log/nonrtric-gateway/application.log
Run the following command to start the gateway. Replace "<absolute-path-to-file>" with the path to the created application.yaml.
docker run --rm -v <absolute-path-to-config-file>/application.yaml:/opt/app/nonrtric-gateway/config/application.yaml -p 9090:9090 --network=nonrtric-docker-net --name=nonrtric-gateway nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-gateway:1.1.0
Run the following two commands to check that the services can be reached through the gateway
curl localhost:9090/a1-policy/v2/rics
Expected output
{"rics":[{"ric_id":"ric1","managed_element_ids":["kista_1","kista_2"],"policytype_ids":["123"],"state":"AVAILABLE"},{"ric_id":"ric3","managed_element_ids":["kista_5","kista_6"],"policytype_ids":["std_pt1"],"state":"AVAILABLE"},{"ric_id":"ric2","managed_element_ids":["kista_3","kista_4"],"policytype_ids":[""],"state":"AVAILABLE"}]}
Second command:
curl localhost:9090/data-producer/v1/info-types
Expected output:
[ ]
Create the config file for the control panel.
events{}
http {
    include /etc/nginx/mime.types;
    resolver 127.0.0.11;
    server {
        listen 8080;
        server_name localhost;
        root /usr/share/nginx/html;
        index index.html;
        location /a1-policy/ {
            set $upstream nonrtric-gateway;
            proxy_pass http://$upstream:9090;
        }
        location /data-producer/ {
            set $upstream nonrtric-gateway;
            proxy_pass http://$upstream:9090;
        }
        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}
Run the following command to start the control panel. Replace "<absolute-path-to-file>" with the path to the created nginx.conf.
docker run --rm -v <absolute-path-to-config-file>/nginx.conf:/etc/nginx/nginx.conf -p 8080:8080 --network=nonrtric-docker-net --name=control-panel nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-controlpanel:2.3.0
The web-based UI can be accessed by pointing the web browser to this URL:
http://localhost:8080/
Start the App Catalogue Service with the following command.
docker run --rm -p 8680:8680 -p 8633:8633 --network=nonrtric-docker-net --name=rapp-catalogue-service nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-r-app-catalogue:1.1.0
Verify that the service is up and running
curl localhost:8680/services
Expected output:
[ ]
Note: Access to kubernetes is required, as stated in the requirements at the top of this page.
Change dir to 'helm-manager' in the downloaded nonrtric repo:
$ cd <path-repos>/nonrtric/helm-manager
Start the helm manager in a separate shell with the following command:
docker run \
    --rm \
    -it \
    -p 8112:8083 \
    --name helmmanagerservice \
    --network nonrtric-docker-net \
    -v $(pwd)/mnt/database:/var/helm-manager/database \
    -v ~/.kube:/root/.kube \
    -v ~/.helm:/root/.helm \
    -v ~/.config/helm:/root/.config/helm \
    -v ~/.cache/helm:/root/.cache/helm \
    -v $(pwd)/config/KubernetesParticipantConfig.json:/opt/app/helm-manager/src/main/resources/config/KubernetesParticipantConfig.json \
    -v $(pwd)/config/application.yaml:/opt/app/helm-manager/src/main/resources/config/application.yaml \
    nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-helm-manager:1.1.0
Make sure the app has started by listing the current charts - the response should show an empty list of charts.
$ curl localhost:8112/helm/charts
{"charts":[]}
To test the app further, start a helm chart repo and create a dummy helm chart.
Start a chartmuseum chart repository in a separate shell:
$ docker run --rm -it \
    -p 8222:8080 \
    --name chartmuseum \
    --network nonrtric-docker-net \
    -e DEBUG=1 \
    -e STORAGE=local \
    -e STORAGE_LOCAL_ROOTDIR=/charts \
    -v $(pwd)/charts:/charts \
    ghcr.io/helm/chartmuseum:v0.13.1
Add the chart repo to the helm manager by the following command:
$ docker exec -it helmmanagerservice sh
# helm repo add cm http://chartmuseum:8080
"cm" has been added to your repositories
# exit
Create a dummy helm chart for test and package the chart
$ helm create simple-app
Creating simple-app
$ helm package simple-app
Successfully packaged chart and saved it to: <path-in-current-dir>/helm-manager/tmp
The commands below show examples of operations towards the helm manager using the dummy chart:
// Get charts
$ curl -sw %{http_code} http://localhost:8112/helm/charts
{"charts":[]}200

// Onboard the chart
$ curl -sw %{http_code} http://localhost:8112/helm/charts -X POST -F chart=@simple-app-0.1.0.tgz -F values=@simple-app-values.yaml -F "info=<simple-app.json"
200

// List the chart(s)
$ curl -sw %{http_code} http://localhost:8112/helm/charts
{"charts":[{"releaseName":"simpleapp","chartName":"simple-app","version":"0.1.0","namespace":"ckhm","repository":"cm"}]}200

// Install the chart (app) - note: the namespace ckhm is created in kubernetes
$ curl -sw %{http_code} http://localhost:8112/helm/install -X POST -H Content-Type:application/json -d @simple-app-installation.json
201

// Get apps - simple-app
$ curl -sw %{http_code} http://localhost:8112/helm/charts
{"charts":[{"releaseName":"simpleapp","chartName":"simple-app","version":"0.1.0","namespace":"ckhm","repository":"cm"}]}200

// List the installed chart using helm ls
$ helm ls -A
NAME       NAMESPACE  REVISION  UPDATED                                 STATUS    CHART             APP VERSION
simpleapp  ckhm       1         2021-12-02 20:53:21.52883025 +0000 UTC  deployed  simple-app-0.1.0  1.16.0

// List the service and pods in kubernetes - may take a few seconds until the objects are created
$ kubectl get svc -n ckhm
NAME                   TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)  AGE
simpleapp-simple-app   ClusterIP  10.107.86.92  <none>       80/TCP   30s

$ kubectl get pods -n ckhm
NAME                                   READY  STATUS   RESTARTS  AGE
simpleapp-simple-app-675f44fc99-wpd6g  1/1    Running  0         31s
To run the helm manager in kubernetes, see this page: Build/Run Helm Manager