
All components of the Non-RT RIC run as docker containers and communicate via a private docker network using container ports, which are also exposed on localhost. Details of the architecture can be found on the Release E page.


Project Requirements

  • Docker and docker-compose (latest)
  • kubectl with admin access to kubernetes (minikube, docker-desktop kubernetes etc.) - this is only applicable when running the Helm Manager
  • helm with access to kubernetes - this is only applicable when running the Helm Manager example operations

Images

The images used for running the Non-RT RIC can be selected from the table below, depending on whether the images are built manually (snapshot images) or staging or release images are to be used.

In general, there is no need to build the images manually unless the user has made code changes, so either staging or release images should be used. For instructions on how to build all components, see Release E - Build.


The run commands throughout this page use the release images and tags. Replace the release images/tags in the container run commands in the instructions if staging or snapshot images are desired.
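For example, to pre-fetch the release image of the Policy Management Service with the coordinates listed in the table below (any image in the table can be pulled the same way):

Code Block
languagebash
docker pull nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-a1-policy-management-service:2.3.1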


Component (components marked with * are not released in E)
Each entry lists:
  • Release image and version tag
  • Staging image and version tag
  • Manual snapshot image and version tag (only available if manually built)

Policy Management Service
  • Release: nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-a1-policy-management-service:2.3.1
  • Staging: nexus3.o-ran-sc.org:10004/o-ran-sc/nonrtric-a1-policy-management-service:2.3.1
  • Snapshot: o-ran-sc/nonrtric-a1-policy-management-service:2.3.1-SNAPSHOT

Near-RT RIC A1 Simulator
  • Release: nexus3.o-ran-sc.org:10002/o-ran-sc/a1-simulator:2.2.0
  • Staging: nexus3.o-ran-sc.org:10004/o-ran-sc/a1-simulator:2.2.0
  • Snapshot: o-ran-sc/a1-simulator:latest

Information Coordinator Service
  • Release: nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-information-coordinator-service:1.2.1
  • Staging: nexus3.o-ran-sc.org:10004/o-ran-sc/nonrtric-information-coordinator-service:1.2.1
  • Snapshot: o-ran-sc/nonrtric-information-coordinator-service:1.2.1-SNAPSHOT

Non-RT RIC Control Panel
  • Release: nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-controlpanel:2.3.0
  • Staging: nexus3.o-ran-sc.org:10004/o-ran-sc/nonrtric-controlpanel:2.3.0
  • Snapshot: o-ran-sc/nonrtric-controlpanel:2.3.0-SNAPSHOT

SDNC A1-Controller
  • Release: nexus3.onap.org:10002/onap/sdnc-image:2.2.3
  • Staging: Use release version
  • Snapshot: Use release version

Gateway*
  • Release: nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-gateway:1.1.0
  • Staging: nexus3.o-ran-sc.org:10004/o-ran-sc/nonrtric-gateway:1.1.0
  • Snapshot: o-ran-sc/nonrtric-gateway:1.1.0-SNAPSHOT

App Catalogue Service*
  • Release: nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-r-app-catalogue:1.0.2
  • Staging: nexus3.o-ran-sc.org:10004/o-ran-sc/nonrtric-r-app-catalogue:1.0.2
  • Snapshot: o-ran-sc/nonrtric-r-app-catalogue:1.0.2-SNAPSHOT

Helm Manager
  • Release: nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-helm-manager:1.1.1
  • Staging: nexus3.o-ran-sc.org:10004/o-ran-sc/nonrtric-helm-manager:1.1.1
  • Snapshot: o-ran-sc/nonrtric-helm-manager:1.1.1-SNAPSHOT

Dmaap Mediator Producer
  • Release: nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-dmaap-mediator-producer:1.0.1
  • Staging: nexus3.o-ran-sc.org:10004/o-ran-sc/nonrtric-dmaap-mediator-producer:1.0.1
  • Snapshot: Not applicable (set as parameter for docker build)

Dmaap Adaptor Service
  • Release: nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-dmaap-adaptor:1.0.1
  • Staging: nexus3.o-ran-sc.org:10004/o-ran-sc/nonrtric-dmaap-adaptor:1.0.1
  • Snapshot: o-ran-sc/nonrtric-dmaap-adaptor:1.0.1-SNAPSHOT

(*) Note: For images not released in E (components marked with *), the snapshot images built manually will get an image tag one step above the release image tag.

Note: A version of this table appears in Integration&Testing - E Release - E Release Docker Image List - NONRTRIC (E-Release). This is the authoritative version!

Ports

The following ports will be allocated and exposed to localhost for each component. If other ports are desired, they need to be replaced in the container run commands in the instructions further below.
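For example, to expose the A1 Policy Management Service on host port 9081 instead of 8081, change the host side of the -p mapping in its run command (a hedged illustration - the container-side port stays the same):

Code Block
languagebash
# -p <host-port>:<container-port>
docker run ... -p 9081:8081 ...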

...

Component: Port exposed to localhost (http/https)

A1 Policy Management Service: 8081/8443
Near-RT RIC A1 Simulator: 8085/8185, 8086/8186, 8087/8187
Information Coordinator Service: 8083/8434
Non-RT RIC Control Panel: 8080/8880
SDNC A1-Controller: 8282/8443
Gateway: 9090 (only http)
App Catalogue Service: 8680/8633
Helm Manager: 8112 (only http)
Dmaap Mediator Producer: 9085/9185
Dmaap Adaptor Service: 9087/9187

Note: A version of this table appears in Integration&Testing - E Release - E Release Docker Image List - NONRTRIC (E-Release). This is the authoritative version!

Prerequisites

The containers need to be connected to the docker network in order to communicate with each other.

...

Code Block
languagebash
docker network create nonrtric-docker-net
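To verify that the network was created:

Code Block
languagebash
docker network ls | grep nonrtric-docker-net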

Run the A1 Policy Management Service Docker Container

To support local test with three separate Near-RT RIC A1 simulator instances, each running one of the three available A1 Policy interface versions:

  • Create an application_configuration.json file with the configuration below. This configures the policy management service to use the simulators for the A1 interface.
  • Note: Any defined ric names must match the docker container names given in the near-RT RIC simulator startup, see Run the Near-RT RIC A1 Simulator Docker Containers.
  • The application supports both the REST and DMAAP interfaces. REST is always enabled, but to enable DMAAP (message exchange via message-router) additional config is needed. The examples below use REST over http.

...

Code Block
languageyml
titleapplication_configuration.json
{
   "config": {
      "//description": "Application configuration",
      "ric": [
         {
            "name": "ric1",
            "baseUrl": "http://ric1:8085/",
            "managedElementIds": [
               "kista_1",
               "kista_2"
            ]
         },
         {
            "name": "ric2",
            "baseUrl": "http://ric2:8085/",
            "managedElementIds": [
               "kista_3",
               "kista_4"
            ]
         },
         {
            "name": "ric3",
            "baseUrl": "http://ric3:8085/",
            "managedElementIds": [
               "kista_5",
               "kista_6"
            ]
         }
      ]
   }
}

...

Code Block
languageyml
titleapplication_configuration.json
{
   "config": {
      "//description": "Application configuration",
       "controller": [
	     {
             "name": "a1controller",
             "baseUrl": "https://a1controller:8443",
             "userName": "admin",
             "password": "Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U"
         }
      ],
      "ric": [
         {
            "name": "ric1",
            "baseUrl": "http://ric1:8085/",
            "controller": "a1controller",
            "managedElementIds": [
               "kista_1",
               "kista_2"
            ]
         },
         {
            "name": "ric2",
            "baseUrl": "http://ric2:8085/",
            "controller": "a1controller",
            "managedElementIds": [
               "kista_3",
               "kista_4"
            ]
         },
         {
            "name": "ric3",
            "baseUrl": "http://ric3:8085/",
            "controller": "a1controller",
            "managedElementIds": [
               "kista_5",
               "kista_6"
            ]
         }
      ]
   }
}

...

Code Block
languagebash
docker run --rm -v <absolute-path-to-file>/application_configuration.json:/opt/app/policy-agent/data/application_configuration.json -p 8081:8081 -p 8433:8433 --network=nonrtric-docker-net --name=policy-agent-container nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-a1-policy-management-service:2.3.1

Wait 1 minute to allow the container to start and to read the configuration. Then run the command below in another terminal. The output should match the configuration in the file - all three rics (ric1, ric2 and ric3) should be included in the output. Note that each ric has the state "UNAVAILABLE" until the simulators are started.

Note: If the policy management service is started with config for the SDNC A1 Controller (the second config option), do the steps described in section Run the A1 Controller Docker Container below before proceeding.

Code Block
languagebash
curl localhost:8081/a1-policy/v2/rics

Expected output (note that all simulators - ric1, ric2 and ric3 - will indicate "state":"UNAVAILABLE" until the simulators have been started in Run the Near-RT RIC A1 Simulator Docker Containers):

Code Block
languagebash
{"rics":[{"ric_id":"ric1","managed_element_ids":["kista_1","kista_2"],"policytype_ids":[],"state":"UNAVAILABLE"},{"ric_id":"ric3","managed_element_ids":["kista_5","kista_6"],"policytype_ids":[],"state":"UNAVAILABLE"},{"ric_id":"ric2","managed_element_ids":["kista_3","kista_4"],"policytype_ids":[],"state":"UNAVAILABLE"}]}

For troubleshooting/verification purposes you can view/access the full swagger API from the URL: http://localhost:8081/swagger-ui/index.html?configUrl=/v3/api-docs/swagger-config
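You can also poll the service's status endpoint (a hedged example, assuming the standard v2 API path):

Code Block
languagebash
curl localhost:8081/a1-policy/v2/status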


Run the SDNC A1 Controller Docker Container (ONAP SDNC)

This step is only applicable if the configuration for the Policy Management Service includes the SDNC A1 Controller (the second config option), see Run the A1 Policy Management Service Docker Container above.

Create the docker compose file - be sure to update the image for the a1controller to the one listed for SDNC A1 Controller in the table at the top of this page.

...

Code Block
languagebash
docker exec a1controller sh -c "tail -f /opt/opendaylight/data/log/karaf.log"
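To check whether the controller has finished starting, the same log can be searched for the Karaf startup message (a hedged example - the exact message text may vary between versions):

Code Block
languagebash
docker exec a1controller sh -c "grep 'Karaf started' /opt/opendaylight/data/log/karaf.log"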

Run the Near-RT RIC A1 Simulator Docker Containers

Start a simulator for each ric defined in the application_configuration.json created in Run the A1 Policy Management Service Docker Container. Each simulator will use one of the currently available A1 interface versions.
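As an illustration (a sketch, not the exact commands), one simulator instance running the OSC_2.1.0 version of the A1 interface could be started as below - A1_VERSION and ALLOW_HTTP are environment variables supported by the a1-simulator image. Repeat with the names ric2 and ric3, the other interface versions and the ports listed in the Ports table above:

Code Block
languagebash
docker run --rm -p 8085:8085 -p 8185:8185 -e A1_VERSION=OSC_2.1.0 -e ALLOW_HTTP=true --network=nonrtric-docker-net --name=ric1 nexus3.o-ran-sc.org:10002/o-ran-sc/a1-simulator:2.2.0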

...

Code Block
languagebash
{"policytype_ids":["","123","std_pt1"]}

Run the Information Coordinator Service Docker Container

Run the following command to start the information coordinator service.

Code Block
languagebash
docker run --rm -p 8083:8083 -p 8434:8434 --network=nonrtric-docker-net --name=information-service-container nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-information-coordinator-service:1.2.1


Verify that the Information Coordinator Service is started and responding (the response is an empty array).
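A hedged example using the service's data-producer API (the expected response is an empty array):

Code Block
languagebash
curl localhost:8083/data-producer/v1/info-types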

...

For troubleshooting/verification purposes you can view/access the full swagger API from the URL: http://localhost:8083/swagger-ui/index.html?configUrl=/v3/api-docs/swagger-config

Run the Non-RT RIC Gateway and Control Panel Docker Container

The Gateway exposes the interfaces of the Policy Management Service and the Information Coordinator Service on a single gateway port. This single port is then used by the control panel to access both services.

...

Code Block
languagebash
docker run --rm -v <absolute-path-to-config-file>/application.yaml:/opt/app/nonrtric-gateway/config/application.yaml -p 9090:9090 --network=nonrtric-docker-net --name=nonrtric-gateway  nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-gateway:1.1.0


Run the following two commands to check that the services can be reached through the gateway.
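For example (a hedged illustration, assuming the default gateway routing of the /a1-policy and /data-producer paths):

Code Block
languagebash
curl localhost:9090/a1-policy/v2/rics
curl localhost:9090/data-producer/v1/info-types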

...

The web-based UI can be accessed by pointing the web browser to this URL:

http://localhost:8080/

Run the App Catalogue Service Docker Container

Start the App Catalogue Service with the following command.

Code Block
languagebash
docker run --rm -p 8680:8680 -p 8633:8633 --network=nonrtric-docker-net --name=rapp-catalogue-service nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-r-app-catalogue:1.0.2


Verify that the service is up and running

...
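One way to check this, assuming the /services endpoint of the catalogue API (the expected response is shown below):

Code Block
languagebash
curl localhost:8680/services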

Code Block
languagebash
[ ]


Run the Helm Manager Docker Container

Note: Access to kubernetes is required as stated in the requirements at the top of this page.

...

Code Block
languagebash
docker run \
    --rm  \
    -it \
    -p 8112:8083  \
    --name helmmanagerservice \
    --network nonrtric-docker-net \
    -v $(pwd)/mnt/database:/var/helm-manager/database-service \
    -v ~/.kube:/root/.kube \
    -v ~/.helm:/root/.helm \
    -v ~/.config/helm:/root/.config/helm \
    -v ~/.cache/helm:/root/.cache/helm \
    -v $(pwd)/config/KubernetesParticipantConfig.json:/opt/app/helm-manager/src/main/resources/config/KubernetesParticipantConfig.json \
    -v $(pwd)/config/application.yaml:/etc/app/helm-manager/src/main/resources/config/application.yaml \
    nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-helm-manager:1.1.1

Make sure the app has started by listing the current charts - the response should be an empty json array.
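A hedged example, using the helmadmin credentials that appear in the test output further below:

Code Block
languagebash
curl http://helmadmin:itisasecret@localhost:8112/helm/charts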

...

Code Block
languagebash
$ docker exec -it helmmanagerservice sh
# helm repo add cm http://chartmuseum:8080
"cm" has been added to your repositories
# exit

Create a dummy helm chart for test, package the chart, and save it in chartmuseum:

Code Block
languagebash
$ helm create simple-app
Creating simple-app
$ helm package simple-app
Successfully packaged chart and saved it to: <path>/simple-app-0.1.0.tgz
$ curl --data-binary @simple-app-0.1.0.tgz -X POST http://localhost:8222/api/charts

The commands below show examples of operations towards the helm manager, using the dummy chart.

As an alternative, run the script 'test.sh' to execute a full sequence of commands.

Code Block
languagebash
Start test
================
Get apps - empty
================
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/charts
 Curl OK
  Response: 200
  Body: {"charts":[]}

================
Add repo
================
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/repo -X POST -H Content-Type:application/json -d @cm-repo.json
 Curl OK
  Response: 201
  Body: 

===========
Onboard app
===========
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/onboard/chart -X POST -F chart=@simple-app-0.1.0.tgz -F values=@simple-app-values.yaml -F info=<simple-app.json
 Curl OK
  Response: 200
  Body: 

=====================
Get apps - simple-app
=====================
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/charts
 Curl OK
  Response: 200
  Body: {"charts":[{"releaseName":"simpleapp","chartId":{"name":"simple-app","version":"0.1.0"},"namespace":"ckhm","repository":{"repoName":"cm","protocol":null,"address":null,"port":null,"userName":null,"password":null},"overrideParams":null}]}

===========
Install app
===========
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/install -X POST -H Content-Type:application/json -d @simple-app-installation.json
 Curl OK
  Response: 201
  Body: 

=====================
Get apps - simple-app
=====================
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/charts
 Curl OK
  Response: 200
  Body: {"charts":[{"releaseName":"simpleapp","chartId":{"name":"simple-app","version":"0.1.0"},"namespace":"ckhm","repository":{"repoName":"cm","protocol":null,"address":null,"port":null,"userName":null,"password":null},"overrideParams":null}]}

=================================================================
helm ls to list installed app - simpleapp chart should be visible
=================================================================
NAME     	NAMESPACE	REVISION	UPDATED                               	STATUS  	CHART           	APP VERSION
simpleapp	ckhm     	1       	2021-12-14 10:14:30.917334268 +0000 UTC	deployed	simple-app-0.1.0	1.16.0     

==========================================
sleep 30 - give the app some time to start
==========================================
============================
List svc and  pod of the app
============================
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
simpleapp-simple-app   ClusterIP   10.96.30.250   <none>        80/TCP    30s
NAME                                    READY   STATUS    RESTARTS   AGE
simpleapp-simple-app-675f44fc99-mpvnd   1/1     Running   0          31s

========================
Uninstall app simple-app
========================
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/uninstall/simple-app/0.1.0 -X DELETE
 Curl OK
  Response: 204
  Body: 

===========================================
sleep 30 - give the app some time to remove
===========================================
============================================================
List svc and  pod of the app - should be gone or terminating
============================================================
No resources found in ckhm namespace.
No resources found in ckhm namespace.

=====================
Get apps - simple-app
=====================
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/charts
 Curl OK
  Response: 200
  Body: {"charts":[{"releaseName":"simpleapp","chartId":{"name":"simple-app","version":"0.1.0"},"namespace":"ckhm","repository":{"repoName":"cm","protocol":null,"address":null,"port":null,"userName":null,"password":null},"overrideParams":null}]}

============
Delete chart
===========
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/chart/simple-app/0.1.0 -X DELETE
 Curl OK
  Response: 204
  Body: 

================
Get apps - empty
================
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/charts
 Curl OK
  Response: 200
  Body: {"charts":[]}

Test result  All tests ok 
End of test

To run the helm manager in kubernetes, see this page: Run Helm Manager in kubernetes


Run the Dmaap Adaptor Service Docker Container

The Dmaap Adaptor Service needs two configuration files, one for the application specific parameters and one for the types the application supports.

...

Code Block
languageyml
titleapplication.yaml
collapsetrue
spring:
  profiles:
    active: prod
  main:
    allow-bean-definition-overriding: true
  aop:
    auto: false
management:
  endpoints:
    web:
      exposure:
        # Enabling of springboot actuator features. See springboot documentation.
        include: "loggers,logfile,health,info,metrics,threaddump,heapdump"
springdoc:
  show-actuator: true
logging:
  # Configuration of logging
  level:
    ROOT: ERROR
    org.springframework: ERROR
    org.springframework.data: ERROR
    org.springframework.web.reactive.function.client.ExchangeFunctions: ERROR
    org.oran.dmaapadapter: INFO
  file:
    name: /var/log/dmaap-adaptor-service/application.log
server:
   # Configuration of the HTTP/REST server. The parameters are defined and handled by the springboot framework.
   # See springboot documentation.
   port : 8435
   http-port: 8084
   ssl:
      key-store-type: JKS
      key-store-password: policy_agent
      key-store: /opt/app/dmaap-adaptor-service/etc/cert/keystore.jks
      key-password: policy_agent
      key-alias: policy_agent
app:
  webclient:
    # Configuration of the trust store used for the HTTP client (outgoing requests)
    # The file location and the password for the truststore is only relevant if trust-store-used == true
    # Note that the same keystore as for the server is used.
    trust-store-used: false
    trust-store-password: policy_agent
    trust-store: /opt/app/dmaap-adaptor-service/etc/cert/truststore.jks
    # Configuration of usage of HTTP Proxy for the southbound accesses.
    # The HTTP proxy (if configured) will only be used for accessing NearRT RIC:s
    http.proxy-host: 
    http.proxy-port: 0
  ics-base-url: https://information-service-container:8434
  # Location of the component configuration file. The file will only be used if the Consul database is not used;
  # configuration from the Consul will override the file.
  configuration-filepath: /opt/app/dmaap-adaptor-service/data/application_configuration.json
  dmaap-base-url: https://message-router:3905
  # The url used to address this component. This is used as a callback url sent to other components.
  dmaap-adapter-base-url: https://dmaapadapterservice:8435
  # KAFKA bootstrap server. This is only needed if there are Information Types that use a kafkaInputTopic
  kafka:
    bootstrap-servers: message-router-kafka:9092

...
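The application_configuration.json file mounted in the run command below defines the information types the service supports. A minimal sketch, assuming the type format used by the Dmaap Adaptor Service (the topic names match those referenced elsewhere on this page):

Code Block
languageyml
titleapplication_configuration.json
{
  "types": [
    {
      "id": "ExampleInformationType",
      "dmaapTopicUrl": "/events/unauthenticated.dmaapadp.json/dmaapadapterproducer/msgs?timeout=15000&limit=100",
      "useHttpProxy": false
    },
    {
      "id": "ExampleInformationTypeKafka",
      "kafkaInputTopic": "unauthenticated.dmaapadp_kafka.text",
      "useHttpProxy": false
    }
  ]
}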

Code Block
languagebash
docker run --rm \
-v \
<absolute-path-to-config-file>/application.yaml:/opt/app/dmaap-adaptor-service/config/application.yaml \
-v <absolute-path-to-config-file>/application_configuration.json:/opt/app/dmaap-adaptor-service/data/application_configuration.json \
-p 9086:8084 -p 9087:8435 --network=nonrtric-docker-net --name=dmaapadapterservice  nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-dmaap-adaptor:1.0.1

Setup jobs to produce data according to the types in application_configuration.json

Create a file job1.json with the job definition (replace the paths <url-for-job-data-delivery> and <url-for-job-status-delivery> to fit your environment):

Code Block
languagebash
titlejob1.json
{
  "info_type_id": "ExampleInformationType",
  "job_result_uri": "<url-for-jod-data-delivery>",
  "job_owner": "job1owner",
  "status_notification_uri": "<url-for-jod-status-delivery>",
  "job_definition": {}
}
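The job can then be created in the Information Coordinator Service (a hedged example using the data-consumer API):

Code Block
languagebash
curl -X PUT -H Content-Type:application/json --data-binary @job1.json localhost:8083/data-consumer/v1/info-jobs/job1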

...

If the kafka type is also used, set up a job for that type as well.

Create a file job2.json with the job definition (replace the paths <url-for-job-data-delivery> and <url-for-job-status-delivery> to fit your environment):

Code Block
languagebash
titlejob2.json
{
  "info_type_id": "ExampleInformationTypeKafka",
  "job_result_uri": "<url-for-jod-data-delivery>",
  "job_owner": "job1owner",
  "status_notification_uri": "<url-for-jod-status-delivery>",
  "job_definition": {}
}

...

Data posted on the kafka topic unauthenticated.dmaapadp_kafka.text will be delivered to the path as specified in job2.json.

Run the Dmaap Mediator Producer Docker Container

The Dmaap Mediator Producer needs one configuration file for the types the application supports.

...

Code Block
languagebash
docker run --rm -v \
<absolute-path-to-config-file>/type_config.json:/configs/type_config.json \
-p 8085:8085 -p 8185:8185 --network=nonrtric-docker-net --name=dmaapmediatorservice \
-e "INFO_COORD_ADDR=https://informationservice:8434" \
-e "DMAAP_MR_ADDR=https://message-router:3905" \
-e "LOG_LEVEL=Debug" \
-e "INFO_PRODUCER_HOST=https://dmaapmediatorservice" \
-e "INFO_PRODUCER_PORT=8185" \
nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-dmaap-mediator-producer:1.0.1

Setup jobs to produce data according to the types in type_config.json

...

Code Block
languagebash
titlejob3.json
{
  "info_type_id": "STD_Fault_Messages",
  "job_result_uri": "<url-for-jodjob-data-delivery>",
  "job_owner": "job3owner",
  "status_notification_uri": "<url-for-jodjob-status-delivery>",
  "job_definition": {}
}

Create job3 for type 'STD_Fault_Messages'
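A hedged example, creating the job via the Information Coordinator Service's data-consumer API:

Code Block
languagebash
curl -X PUT -H Content-Type:application/json --data-binary @job3.json localhost:8083/data-consumer/v1/info-jobs/job3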

...

Data posted on the message router topic unauthenticated.dmaapmed.json will be delivered to the path as specified in job3.json

Run usecases

Within Non-RT RIC, a number of use case implementations are provided. Follow the links below to see how to run them.