...

The run commands throughout this page use the staging release images and tags. Replace the staging images/tags in the container run commands if release or snapshot images are desired instead.

...

Component                          Port exposed to localhost (http/https)

Policy Management Service          8081/8433
Near-RT RIC A1 Simulator           8085/8185, 8086/8186, 8087/8187
Information Coordinator Service    8083/8434
Non-RT RIC Control Panel           8080/8880
SDNC A1-Controller                 8282/8443
Gateway                            9090
App Catalogue Service              8680/8633
Helm Manager                       FIXME 8112
Dmaap Mediator Producer            9085/9185
Dmaap Adaptor Service              9087/9187

...

Create a private docker network. If another network name is used, all references to 'nonrtric-docker-net' in the container run commands below need to be replaced.

Code Block
languagebash
themeMidnight
docker network create nonrtric-docker-net
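To verify that the network was created, list it with the standard Docker CLI:

Code Block
languagebash
themeMidnight
docker network ls --filter name=nonrtric-docker-net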

...

Start the container with the following command. Replace "<absolute-path-to-file>" with the path to the created configuration file; the file is mounted into the container. WARN messages will appear in the log until the simulators are started.

Code Block
languagebash
themeMidnight
docker run --rm -v <absolute-path-to-file>/application_configuration.json:/opt/app/policy-agent/data/application_configuration.json -p 8081:8081 -p 8433:8433 --network=nonrtric-docker-net --name=policy-agent-container nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-policy-agent:2.3.0
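For reference, below is a minimal sketch of what the mounted application_configuration.json may look like for the first config option (direct connection to the three simulators). The field names are assumptions based on the NONRTRIC Policy Management Service documentation - adapt it to the configuration file created in the earlier step:

Code Block
languagebash
titleapplication_configuration.json (sketch)
{
  "config": {
    "ric": [
      {
        "name": "ric1",
        "baseUrl": "http://ric1:8085",
        "managedElementIds": ["kista_1", "kista_2"]
      },
      {
        "name": "ric2",
        "baseUrl": "http://ric2:8085",
        "managedElementIds": ["kista_3", "kista_4"]
      },
      {
        "name": "ric3",
        "baseUrl": "http://ric3:8085",
        "managedElementIds": ["kista_5", "kista_6"]
      }
    ]
  }
}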

...

Note: If the policy management service is started with config for the SDNC A1 Controller (the second config option), do the steps described in section Run the A1 Controller Docker Container below before proceeding.

Code Block
languagebash
themeMidnight
curl localhost:8081/a1-policy/v2/rics

Expected output:

Code Block
languagebash
themeMidnight
{"rics":[{"ric_id":"ric1","managed_element_ids":["kista_1","kista_2"],"policytype_ids":[],"state":"UNAVAILABLE"},{"ric_id":"ric3","managed_element_ids":["kista_5","kista_6"],"policytype_ids":[],"state":"UNAVAILABLE"},{"ric_id":"ric2","managed_element_ids":["kista_3","kista_4"],"policytype_ids":[],"state":"UNAVAILABLE"}]}

...

Start the SDNC A1 Controller with the following command, using the created docker-compose file.

Code Block
languagebash
themeMidnight
docker-compose up


Open this url in a web browser to verify that the SDNC A1 Controller is up and running. It may take a few minutes until the url is available.

...

Ric1

Code Block
languagebash
themeMidnight
docker run --rm -p 8085:8085 -p 8185:8185 -e A1_VERSION=OSC_2.1.0 -e ALLOW_HTTP=true --network=nonrtric-docker-net --name=ric1 nexus3.o-ran-sc.org:10002/o-ran-sc/a1-simulator:2.2.3

Ric2

Code Block
languagebash
themeMidnight
docker run --rm -p 8086:8085 -p 8186:8185 -e A1_VERSION=STD_1.1.3 -e ALLOW_HTTP=true --network=nonrtric-docker-net --name=ric2 nexus3.o-ran-sc.org:10002/o-ran-sc/a1-simulator:2.2.0

Ric3

Code Block
languagebash
themeMidnight
docker run --rm -p 8087:8085 -p 8187:8185 -e A1_VERSION=STD_2.0.0 -e ALLOW_HTTP=true --network=nonrtric-docker-net --name=ric3 nexus3.o-ran-sc.org:10002/o-ran-sc/a1-simulator:2.2.0
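Before proceeding, optionally check that all three simulator containers are running (standard Docker CLI):

Code Block
languagebash
themeMidnight
docker ps --filter "name=ric" --format "table {{.Names}}\t{{.Status}}"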

...

Wait at least one minute to let the policy management service synchronise the rics. Then run the command below in another terminal. The output should match the configuration in the file. Note that each ric now has the state "AVAILABLE".

Code Block
languagebash
themeMidnight
curl localhost:8081/a1-policy/v2/rics

Expected output - all states should indicate AVAILABLE:

Code Block
languagebash
themeMidnight
{"rics":[{"ric_id":"ric1","managed_element_ids":["kista_1","kista_2"],"policytype_ids":[],"state":"AVAILABLE"},{"ric_id":"ric3","managed_element_ids":["kista_5","kista_6"],"policytype_ids":[],"state":"AVAILABLE"},{"ric_id":"ric2","managed_element_ids":["kista_3","kista_4"],"policytype_ids":[""],"state":"AVAILABLE"}]}

...

Put the policy type to ric1

Code Block
languagebash
themeMidnight
curl -X PUT -v "http://localhost:8085/a1-p/policytypes/123" -H "accept: application/json" \
 -H "Content-Type: application/json" --data-binary @osc_pt1.json

...

Put the policy type to ric3

Code Block
languagebash
themeMidnight
curl -X PUT -v "http://localhost:8087/policytype?id=std_pt1" -H "accept: application/json"  -H "Content-Type: application/json" --data-binary @std_pt1.json
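Optionally verify the type on ric3 as well - a sketch assuming the standard A1-P v2 path of the STD_2.0.0 interface:

Code Block
languagebash
themeMidnight
curl "http://localhost:8087/A1-P/v2/policytypes"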

...

List the synchronised types.

Code Block
languagebash
themeMidnight
curl localhost:8081/a1-policy/v2/policy-types

Expected output:

Code Block
languagebash
themeMidnight
{"policytype_ids":["","123","std_pt1"]}

...

Run the following command to start the information coordinator service.

Code Block
languagebash
themeMidnight
docker run --rm -p 8083:8083 -p 8434:8434 --network=nonrtric-docker-net --name=information-service-container nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-information-coordinator-service:1.2.0

...

Verify that the Information Coordinator Service is started and responding (response is an empty array).

Code Block
languagebash
themeMidnight
curl localhost:8083/data-producer/v1/info-types

Expected output:

Code Block
languagebash
themeMidnight
[ ]

For troubleshooting/verification purposes the full Swagger API can be viewed at: http://localhost:8083/swagger-ui/index.html?configUrl=/v3/api-docs/swagger-config

...

Run the following command to start the gateway. Replace "<absolute-path-to-config-file>" with the path to the created application.yaml.

Code Block
languagebash
themeMidnight
docker run --rm -v <absolute-path-to-config-file>/application.yaml:/opt/app/nonrtric-gateway/config/application.yaml -p 9090:9090 --network=nonrtric-docker-net --name=nonrtric-gateway  nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-gateway:1.1.0
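For reference, below is a minimal sketch of what the mounted application.yaml may look like. It uses Spring Cloud Gateway route syntax; the route URIs are assumptions based on the container names and https ports used when starting the services above:

Code Block
languagebash
titleapplication.yaml (sketch)
# Sketch only - adapt routes and ports to your environment
server:
  port: 9090
spring:
  cloud:
    gateway:
      routes:
      - id: A1-Policy
        uri: https://policy-agent-container:8433
        predicates:
        - Path=/a1-policy/**
      - id: A1-EI
        uri: https://information-service-container:8434
        predicates:
        - Path=/data-producer/**,/data-consumer/**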

...

Run the following two commands to check that the services can be reached through the gateway.

Code Block
languagebash
themeMidnight
curl localhost:9090/a1-policy/v2/rics

Expected output:

Code Block
languagebash
themeMidnight
{"rics":[{"ric_id":"ric1","managed_element_ids":["kista_1","kista_2"],"policytype_ids":["123"],"state":"AVAILABLE"},{"ric_id":"ric3","managed_element_ids":["kista_5","kista_6"],"policytype_ids":["std_pt1"],"state":"AVAILABLE"},{"ric_id":"ric2","managed_element_ids":["kista_3","kista_4"],"policytype_ids":[""],"state":"AVAILABLE"}]}

Second command:

Code Block
languagebash
themeMidnight
curl localhost:9090/data-producer/v1/info-types

Expected output:

Code Block
languagebash
themeMidnight
[ ]


Create the config file for the control panel.

...

Run the following command to start the control panel. Replace "<absolute-path-to-config-file>" with the path to the created nginx.conf.

Code Block
languagebash
themeMidnight
docker run --rm -v <absolute-path-to-config-file>/nginx.conf:/etc/nginx/nginx.conf -p 8080:8080 --network=nonrtric-docker-net --name=control-panel  nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-controlpanel:2.3.0

...

Start the App Catalogue Service with the following command.

Code Block
languagebash
themeMidnight
docker run --rm -p 8680:8680 -p 8633:8633 --network=nonrtric-docker-net --name=rapp-catalogue-service nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-r-app-catalogue:1.1.0

...

Verify that the service is up and running:

Code Block
languagebash
themeMidnight
curl localhost:8680/services
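Expected output (assuming no services have been registered yet, the response should be an empty array):

Code Block
languagebash
themeMidnight
[ ]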

...

Change dir to 'helm-manager' in the downloaded nonrtric repo:

Code Block
languagebash
themeMidnight
$ cd <path-repos>/nonrtric/helm-manager

Start the helm manager in a separate shell with the following command:

Code Block
languagebash
themeMidnight
docker run \
    --rm  \
    -it \
    -p 8112:8083  \
    --name helmmanagerservice \
    --network nonrtric-docker-net \
    -v $(pwd)/mnt/database:/var/helm-manager/database \
    -v ~/.kube:/root/.kube \
    -v ~/.helm:/root/.helm \
    -v ~/.config/helm:/root/.config/helm \
    -v ~/.cache/helm:/root/.cache/helm \
    -v $(pwd)/config/KubernetesParticipantConfig.json:/opt/app/helm-manager/src/main/resources/config/KubernetesParticipantConfig.json \
    -v $(pwd)/config/application.yaml:/opt/app/helm-manager/src/main/resources/config/application.yaml \
    nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-helm-manager:1.1.0

Make sure the app has started by listing the current charts - the response should be an empty chart list.

Code Block
languagebash
themeMidnight
$ curl localhost:8112/helm/charts
{"charts":[]}

...

Start a chartmuseum chart repository in a separate shell:

Code Block
languagebash
themeMidnight
$ docker run --rm -it \
    -p 8222:8080 \
    --name chartmuseum \
    --network nonrtric-docker-net \
    -e DEBUG=1 \
    -e STORAGE=local \
    -e STORAGE_LOCAL_ROOTDIR=/charts \
    -v $(pwd)/charts:/charts \
    ghcr.io/helm/chartmuseum:v0.13.1
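To confirm that chartmuseum is up and serving a repository index, fetch index.yaml (part of the standard Helm repository API):

Code Block
languagebash
themeMidnight
$ curl http://localhost:8222/index.yaml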

Add the chart repo to the helm manager by the following command:

Code Block
languagebash
themeMidnight
$ docker exec -it helmmanagerservice sh
# helm repo add cm http://chartmuseum:8080
"cm" has been added to your repositories
# exit
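To double-check that the repository was added, list the repositories known to the helm manager container (standard helm CLI) - the repo 'cm' should be listed:

Code Block
languagebash
themeMidnight
$ docker exec -it helmmanagerservice helm repo list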

Create a dummy helm chart for testing and package the chart:

Code Block
languagebash
themeMidnight
$ helm create simple-app
Creating simple-app
$ helm package simple-app
Successfully packaged chart and saved it to: <path-in-current-dir>/helm-manager/tmp

...

As an alternative, run the script 'test.sh' to execute a full sequence of commands.

Code Block
languagebash
themeMidnight
// Get charts
$ curl -sw %{http_code} http://localhost:8112/helm/charts
{"charts":[]}200

// Onboard the chart
$ curl -sw %{http_code} http://localhost:8112/helm/charts -X POST -F chart=@simple-app-0.1.0.tgz -F values=@simple-app-values.yaml -F "info=<simple-app.json"
 Curl OK
  Response: 200
  Body: 

// List the chart(s)
$ curl -sw %{http_code} http://localhost:8112/helm/charts
{"charts":[{"releaseName":"simpleapp","chartName":"simple-app","version":"0.1.0","namespace":"ckhm","repository":"cm"}]}200

// Install the chart (app) - note: the namespace ckhm is created in kubernetes
$ curl -sw %{http_code} http://localhost:8112/helm/install -X POST -H Content-Type:application/json -d @simple-app-installation.json
201 

=====================
Get apps - simple-app
=====================
$ curl -sw %{http_code} http://localhost:8112/helm/charts
 Curl OK
  Response: 200
  Body: {"charts":[{"releaseName":"simpleapp","chartName":"simple-app","version":"0.1.0","namespace":"ckhm","repository":"cm"}]}

// List the installed chart using helm ls
$ helm ls -A
NAME     	NAMESPACE	REVISION	UPDATED                               	STATUS  	CHART           	APP VERSION
simpleapp	ckhm     	1       	2021-12-02 20:53:21.52883025 +0000 UTC	deployed	simple-app-0.1.0	1.16.0     

// List the service and pods in kubernetes - may take a few seconds until the objects are created in kubernetes
$ kubectl get svc -n ckhm
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
simpleapp-simple-app   ClusterIP   10.107.86.92   <none>        80/TCP    30s
$ kubectl get pods -n ckhm
NAME                                    READY   STATUS    RESTARTS   AGE
simpleapp-simple-app-675f44fc99-wpd6g   1/1     Running   0          31s


...

In addition, if the data is available on a kafka topic, an instance of a running kafka server is needed.

...

Data posted on the message router topic unauthenticated.dmaapadp.json will be delivered to the path specified in job1.json.

If the kafka type is also used, set up a job for that type as well.

Create a file job2.json with the job definition (replace the paths <url-for-job-data-delivery> and <url-for-job-status-delivery> to fit your environment):

Code Block
languagebash
titlejob2.json
{
  "info_type_id": "ExampleInformationTypeKafka",
  "job_result_uri": "<url-for-jod-data-delivery>",
  "job_owner": "job1owner",
  "status_notification_uri": "<url-for-jod-status-delivery>",
  "job_definition": {}
}

Create job2 for type 'ExampleInformationTypeKafka'

Code Block
languagebash
curl -X PUT -H Content-Type:application/json http://localhost:8083/data-consumer/v1/info-jobs/job2 --data-binary @job2.json
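To verify that the job was accepted, the job ids can be listed in the Information Coordinator Service (data-consumer API):

Code Block
languagebash
curl localhost:8083/data-consumer/v1/info-jobs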

Data posted on the kafka topic unauthenticated.dmaapadp_kafka.text will be delivered to the path specified in job2.json.

Run the Dmaap Mediator Producer

The Dmaap Mediator Producer needs one configuration file for the types the application supports.

Note that a running Information Coordinator Service is needed for creating jobs and a running message router is needed for receiving data that the job can distribute to the consumer. 

Create the file type_config.json with the content below:

Code Block
languagebash
titletype_config.json
{
  "types": [
    {
      "id": "STD_Fault_Messages",
      "dmaapTopicUrl": "/events/unauthenticated.dmaapmed.json/dmaapmediatorproducer/STD_Fault_Messages?timeout=15000&limit=100"
    }
  ]
}

There are a number of environment variables that need to be set when starting the application; example settings are shown in the command below.

Start the Dmaap Mediator Producer in a separate shell with the following command:

Code Block
languagebash
docker run --rm -v \
<absolute-path-to-config-file>/type_config.json:/configs/type_config.json \
-p 9085:8085 -p 9185:8185 --network=nonrtric-docker-net --name=dmaapmediatorservice \
-e "INFO_COORD_ADDR=https://informationservice:8434" \
-e "DMAAP_MR_ADDR=https://message-router:3905" \
-e "LOG_LEVEL=Debug" \
-e "INFO_PRODUCER_HOST=https://dmaapmediatorservice" \
-e "INFO_PRODUCER_PORT=8185" \
nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-dmaap-mediator-producer:1.0.0
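Once the application is running it registers itself in the Information Coordinator Service. This can be checked by listing the registered producers (data-producer API):

Code Block
languagebash
curl localhost:8083/data-producer/v1/info-producers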

Set up jobs to produce data according to the types in type_config.json

Create a file job3.json with the job definition (replace the paths <url-for-job-data-delivery> and <url-for-job-status-delivery> to fit your environment):

Code Block
languagebash
titlejob3.json
{
  "info_type_id": "STD_Fault_Messages",
  "job_result_uri": "<url-for-jod-data-delivery>",
  "job_owner": "job3owner",
  "status_notification_uri": "<url-for-jod-status-delivery>",
  "job_definition": {}
}

Create job3 for type 'STD_Fault_Messages'

Code Block
languagebash
curl -X PUT -H Content-Type:application/json http://localhost:8083/data-consumer/v1/info-jobs/job3 --data-binary @job3.json
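The job definition can be read back from the Information Coordinator Service to confirm that it was created:

Code Block
languagebash
curl localhost:8083/data-consumer/v1/info-jobs/job3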

Data posted on the message router topic unauthenticated.dmaapmed.json will be delivered to the path specified in job3.json.