

This page describes how to get the release G version of Non-RT RIC up and running locally, with three separate A1 simulator (previously called Near-RT RIC A1 Interface) docker containers providing the STD_1.1.3, STD_2.0.0 and OSC_2.1.0 versions of the A1 interface.

All components of the Non-RT RIC run as docker containers and communicate via a private docker network, with the container ports also exposed on localhost. Details of the architecture can be found on the Release G page.


Table of Contents

Project Requirements

  • Docker and docker-compose (latest)

  • Additional optional requirements if using the "Helm Manager" function (only applicable when running the Helm Manager example operations):
    • kubernetes v1.19+
    • kubectl with admin access to kubernetes (minikube, docker-desktop kubernetes etc.)
    • helm with access to kubernetes

Images

The images used for running the Non-RT RIC can be selected from the table below, depending on whether manually built snapshot images or release images shall be used.

...

The run commands throughout this page use the release images and tags. Replace the release images/tags in the container run commands in the instructions if manually built snapshot images are desired.

G Maintenance Release images

Component | Image | Tag
A1 Policy Management Service | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-plt-a1policymanagementservice | 2.5.0
Information Coordinator Service | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-plt-informationcoordinatorservice | 1.4.0
NONRTRIC Control Panel | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-controlpanel | 2.4.0
Gateway | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-gateway | 1.0.0
A1-Simulator | nexus3.o-ran-sc.org:10002/o-ran-sc/a1-simulator | 2.4.0
R-APP Catalogue | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-plt-rappcatalogue | 1.1.0
R-APP Catalogue Enhanced | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-plt-rappcatalogue-enhanced | 1.0.1
DMaaP Adapter | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-plt-dmaapadapter | 1.2.0
DMaaP Mediator | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-plt-dmaapmediatorproducer | 1.1.0
Helm Manager | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-plt-helmmanager | 1.2.0
Auth Token Fetcher | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-plt-auth-token-fetch | 1.0.0
CAPIF Core | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-plt-capifcore | 1.0.0
O-DU Slice Assurance | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-rapp-ransliceassurance | 1.2.0
O-DU Slice Assurance (ICS version) | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-rapp-ransliceassurance-icsversion | 1.1.0
O-RU FH Recovery | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-rapp-orufhrecovery | 1.1.0
O-RU FH Recovery (consumer) | nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-rapp-orufhrecovery-consumer | 1.1.0

Note: A version of this table appears in Integration&Testing - G Release Docker Image List - NONRTRIC (G-Release). That is the authoritative version!

Ports

The following ports will be allocated and exposed to localhost for each component. If other port(s) are desired, then the ports need to be replaced in the container run commands in the instructions further below.

...

Component | Port exposed to localhost (http/https)
A1 Policy Management Service | 8081/8443
A1 Simulator (previously called Near-RT RIC A1 Interface) | 8085/8185, 8086/8186, 8087/8187
Information Coordinator Service | 8083/8434
Non-RT RIC Control Panel | 8080/8880
SDNC A1-Controller | 8282/8443
Gateway | 9090 (http only)
App Catalogue Service | 8680/8633
Helm Manager | 8112 (http only)
Dmaap Mediator Producer | 9085/9185
Dmaap Adapter Service | 9087/9187
Capifcore | 8090 (http only)


Prerequisites

The containers need to be connected to a docker network in order to communicate with each other.

...

docker network create nonrtric-docker-net

Run the A1 Policy Management Service Docker Container

To support local test with three separate A1 simulator (previously called Near-RT RIC A1 Interface) instances, each running one of the three available A1 Policy interface versions:

The A1 Policy Management Service can be configured with or without an SDNC A1-Controller; choose the appropriate configuration below. The service can be configured to connect over A1 via an SDNC A1-Controller for some or all rics/simulators - it is optional to access a near-RT RIC through an A1-Controller.
This is configured using the optional "controller" parameter for each ric entry. If all configured rics bypass the A1-Controller (have no "controller" value), the "controller" object at the top of the configuration can be omitted, and there is no need to start an A1-Controller at all.

This sample configuration is for running without the SDNC A1-Controller:

Code Block
titleapplication_configuration.json
{
  "config": {
    "ric": [
      {
        "name": "ric1",
        "baseUrl": "http://ric1:8085/",
        "managedElementIds": [
          "kista_1",
          "kista_2"
        ]
      },
      {
        "name": "ric2",
        "baseUrl": "http://ric2:8085/",
        "managedElementIds": [
          "kista_3",
          "kista_4"
        ]
      },
      {
        "name": "ric3",
        "baseUrl": "http://ric3:8085/",
        "managedElementIds": [
          "kista_5",
          "kista_6"
        ]
      }
    ]
  }
}
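Before mounting the file into the container it can be worth validating. A minimal sketch (not part of the official instructions) that writes the configuration shown above in condensed form and checks it with plain python3, assumed to be available on the host:

```shell
# Write the no-controller configuration shown above (condensed) to a file.
cat > application_configuration.json <<'EOF'
{
  "config": {
    "ric": [
      {"name": "ric1", "baseUrl": "http://ric1:8085/", "managedElementIds": ["kista_1", "kista_2"]},
      {"name": "ric2", "baseUrl": "http://ric2:8085/", "managedElementIds": ["kista_3", "kista_4"]},
      {"name": "ric3", "baseUrl": "http://ric3:8085/", "managedElementIds": ["kista_5", "kista_6"]}
    ]
  }
}
EOF
# Parse it and verify each ric entry has the mandatory name and baseUrl fields.
python3 - <<'EOF'
import json
cfg = json.load(open("application_configuration.json"))
rics = cfg["config"]["ric"]
assert all(r.get("name") and r.get("baseUrl") for r in rics)
print("configuration OK:", [r["name"] for r in rics])
EOF
```

A malformed file would make the python step fail loudly here instead of surfacing later as a container startup error.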


This sample configuration is for running with the SDNC A1-Controller:

Code Block
titleapplication_configuration.json
{
    "config": {
        "controller": [
            {
                "name": "a1controller",
                "baseUrl": "https://a1controller:8443",
                "userName": "admin",
                "password": "Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U"
            }
        ],
        "ric": [
            {
                "name": "ric1",
                "baseUrl": "http://ric1:8085/",
                "controller": "a1controller",
                "managedElementIds": [
                    "kista_1",
                    "kista_2"
                ]
            },
            {
                "name": "ric2",
                "baseUrl": "http://ric2:8085/",
                "controller": "a1controller",
                "managedElementIds": [
                    "kista_3",
                    "kista_4"
                ]
            },
            {
                "name": "ric3",
                "baseUrl": "http://ric3:8085/",
                "controller": "a1controller",
                "managedElementIds": [
                    "kista_5",
                    "kista_6"
                ]
            }
        ]
    }
}

...

{"rics":[{"ric_id":"ric1","managed_element_ids":["kista_1","kista_2"],"policytype_ids":[],"state":"UNAVAILABLE"},{"ric_id":"ric3","managed_element_ids":["kista_5","kista_6"],"policytype_ids":[],"state":"UNAVAILABLE"},{"ric_id":"ric2","managed_element_ids":["kista_3","kista_4"],"policytype_ids":[],"state":"UNAVAILABLE"}]}
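For scripted verification the response can be parsed instead of read by eye. A minimal sketch, using the sample response above saved to a scratch file (rics.json is a name chosen here for illustration; with a live Policy Management Service you would redirect the curl output instead):

```shell
# Save the sample response shown above to a scratch file.
cat > rics.json <<'EOF'
{"rics":[{"ric_id":"ric1","managed_element_ids":["kista_1","kista_2"],"policytype_ids":[],"state":"UNAVAILABLE"},{"ric_id":"ric3","managed_element_ids":["kista_5","kista_6"],"policytype_ids":[],"state":"UNAVAILABLE"},{"ric_id":"ric2","managed_element_ids":["kista_3","kista_4"],"policytype_ids":[],"state":"UNAVAILABLE"}]}
EOF
# Print "<ric_id> <state>" per ric, sorted by id (requires python3).
python3 - <<'EOF'
import json
rics = json.load(open("rics.json"))["rics"]
for ric in sorted(rics, key=lambda r: r["ric_id"]):
    print(ric["ric_id"], ric["state"])
EOF
```

At this point the rics are expected to report UNAVAILABLE, since the simulators have not been started yet.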

Run the SDNC A1 Controller Docker Container (ONAP SDNC)

This step is only applicable if the configuration for the Policy Management Service includes the SDNC A1 Controller (the second configuration option above), see Run the Policy Management Service Docker Container.

Create the docker compose file - be sure to update the image for the a1controller to the one listed for the SDNC A1 Controller in the table at the top of this page.

Code Block
titledocker-compose.yaml
version: '3'
 
networks:
  default:
    external: true
    name: nonrtric-docker-net
 
services:
  db:
    image: nexus3.o-ran-sc.org:10001/mariadb:10.5
    container_name: sdncdb
    networks:
      - default
    ports:
      - "3306"
    environment:
      - MYSQL_ROOT_PASSWORD=itsASecret
      - MYSQL_ROOT_HOST=%
      - MYSQL_USER=sdnctl
      - MYSQL_PASSWORD=gamma
      - MYSQL_DATABASE=sdnctl
    logging:
      driver:   "json-file"
      options:
        max-size: "30m"
        max-file: "5"
 
  a1controller:
    image: nexus3.onap.org:10002/onap/sdnc-image:2.4.2
    depends_on:
      - db
    container_name: a1controller
    networks:
      - default
    entrypoint: ["/opt/onap/sdnc/bin/startODL.sh"]
    ports:
      - 8282:8181
      - 8443:8443
    links:
      - db:dbhost
      - db:sdnctldb01
      - db:sdnctldb02
    environment:
      - MYSQL_ROOT_PASSWORD=itsASecret
      - MYSQL_USER=sdnctl
      - MYSQL_PASSWORD=gamma
      - MYSQL_DATABASE=sdnctl
      - SDNC_CONFIG_DIR=/opt/onap/sdnc/data/properties
      - SDNC_BIN=/opt/onap/sdnc/bin
      - ODL_CERT_DIR=/tmp
      - ODL_ADMIN_USERNAME=admin
      - ODL_ADMIN_PASSWORD=Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U
      - ODL_USER=admin
      - ODL_PASSWORD=Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U
      - SDNC_DB_INIT=true
      - A1_TRUSTSTORE_PASSWORD=a1adapter
      - AAI_TRUSTSTORE_PASSWORD=changeit
    logging:
      driver:   "json-file"
      options:
        max-size: "30m"
        max-file: "5"

...

docker exec a1controller sh -c "tail -f /opt/opendaylight/data/log/karaf.log"

Run the A1 Simulator (previously called Near-RT RIC A1 Interface) Docker Containers

Start a simulator for each ric defined in the application_configuration.json created in Run the Policy Management Service Docker Container. Each simulator will use one of the currently available A1 interface versions.

...

{"policytype_ids":["","123","std_pt1"]}
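The aggregated policy type list above can also be checked in a script. A small sketch parsing the sample response (policytypes.json is a scratch file name used for illustration; with a running system you would pipe the curl output here):

```shell
# Save the sample policy type response shown above to a scratch file.
cat > policytypes.json <<'EOF'
{"policytype_ids":["","123","std_pt1"]}
EOF
# Count and print the ids collected from the three simulators (requires python3).
python3 - <<'EOF'
import json
ids = json.load(open("policytypes.json"))["policytype_ids"]
assert len(ids) == 3
print("policy types:", ids)
EOF
```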

Run the Information Coordinator Service Docker Container

Run the following command to start the information coordinator service.

...

For troubleshooting/verification purposes you can view/access the full Swagger API at this URL: http://localhost:8083/swagger-ui/index.html?configUrl=/v3/api-docs/swagger-config

Run the Non-RT RIC Gateway and Control Panel Docker Container

The Gateway exposes the interfaces of the Policy Management Service and the Information Coordinator Service to a single port of the gateway. This single port is then used by the control panel to access both services.

...

The web-based UI can be accessed by pointing the web browser to this URL:

http://localhost:8080/

Run the App Catalogue Service Docker Container

Start the App Catalogue Service by the following command.

...

curl localhost:8680/services

Expected output:

[ ]


Run the App Catalogue Enhanced Service Docker Container

Start the App Catalogue Enhanced Service by the following command.

docker run --rm -p 9096:9096 -p 9196:9196 -e ALLOW_HTTP=true --network=nonrtric-docker-net --name=rapp-catalogue-service-enhanced nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-plt-rappcatalogue-enhanced:1.0.1


Verify that the service is up and running

curl localhost:9096/rappcatalogue

Expected output:

[ ]


Run the Helm Manager Docker Container

Note: Access to kubernetes is required, as stated in the requirements at the top of this page.

...

Start the helm manager in a separate shell with the following command:

$ docker run \
    --rm    \
    -it \
    -p 8112:8083 \
    --name helmmanagerservice \
    --network nonrtric-docker-net \
    -v   $(pwd)/mnt/database:/var/helm-manager-service   \
    -v   ~/.kube:/home/nonrtric/.kube \
    -v   ~/.helm:/home/nonrtric/.helm \
    -v   ~/.config/helm:/home/nonrtric/.config/helm   \
    -v   ~/.cache/helm:/home/nonrtric/.cache/helm   \
    -v   $(pwd)/config/application.yaml:/etc/app/helm-manager/application.yaml \
    nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-plt-helmmanager:1.2.0

...

Start a chartmuseum chart repository in a separate shell

$ docker run --rm   -it \
    -p 8222:8080 \
    --name chartmuseum \
    --network nonrtric-docker-net \
    -e DEBUG=1 \
    -e STORAGE=local   \
    -e STORAGE_LOCAL_ROOTDIR=/charts   \
    -v   $(pwd)/charts:/charts   \
    ghcr.io/helm/chartmuseum:v0.13.1

Add the chart repo to the helm manager by the following command:

$ docker exec -it helmmanagerservice helm repo add cm http://chartmuseum:8080
"cm" has been added to your repositories

...

$ helm create simple-app
Creating simple-app


$ helm package simple-app
Successfully packaged chart and saved it to: <path>/simple-app-0.1.0.tgz
 
$ curl --data-binary @simple-app-0.1.0.tgz -X POST http://localhost:8222/api/charts
{"saved":true}

The commands below show examples of operations towards the helm manager using the dummy chart.

...

$ ./test.sh docker
 
Start test
================
Get apps - empty
================
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/charts
 Curl OK
  Response: 200
  Body: {"charts":[]}
 
================
Add repo
================
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/repo -X POST -H Content-Type:application/json -d @cm-repo.json
 Curl OK
  Response: 201
  Body:
 
============
Onboard app
===========
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/onboard/chart -X POST -F chart=@simple-app-0.1.0.tgz -F values=@simple-app-values.yaml -F info=<simple-app.json
 Curl OK
  Response: 200
  Body:
 
=====================
Get apps - simple-app
=====================
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/charts
 Curl OK
  Response: 200
  Body: {"charts":[{"releaseName":"simpleapp","chartId":{"name":"simple-app","version":"0.1.0"},"namespace":"ckhm","repository":{"repoName":"cm","protocol":null,"address":null,"port":null,"userName":null,"password":null},"overrideParams":null}]}
 
===========
Install app
===========
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/install -X POST -H Content-Type:application/json -d @simple-app-installation.json
 Curl OK
  Response: 201
  Body:
 
=====================
Get apps - simple-app
=====================
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/charts
 Curl OK
  Response: 200
  Body: {"charts":[{"releaseName":"simpleapp","chartId":{"name":"simple-app","version":"0.1.0"},"namespace":"ckhm","repository":{"repoName":"cm","protocol":null,"address":null,"port":null,"userName":null,"password":null},"overrideParams":null}]}
 
=================================================================
helm ls to list installed app - simpleapp chart should be visible
=================================================================
NAME        NAMESPACE   REVISION    UPDATED                                 STATUS      CHART               APP VERSION
simpleapp   ckhm        1           2022-06-27 21:18:27.407666475 +0000 UTC deployed    simple-app-0.1.0    1.16.0    
 
==========================================
sleep 30 - give the app some time to start
==========================================
============================
List svc and  pod of the app
============================
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
simpleapp-simple-app   ClusterIP   10.98.120.189   <none>        80/TCP    30s
NAME                                    READY   STATUS    RESTARTS   AGE
simpleapp-simple-app-675f44fc99-qsxr6   1/1     Running   0          30s
 
========================
Uninstall app simple-app
========================
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/uninstall/simple-app/0.1.0 -X DELETE
 Curl OK
  Response: 204
  Body:
 
===========================================
sleep 30 - give the app some time to remove
===========================================
============================================================
List svc and  pod of the app - should be gone or terminating
============================================================
No resources found in ckhm namespace.
No resources found in ckhm namespace.
 
=====================
Get apps - simple-app
=====================
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/charts
 Curl OK
  Response: 200
  Body: {"charts":[{"releaseName":"simpleapp","chartId":{"name":"simple-app","version":"0.1.0"},"namespace":"ckhm","repository":{"repoName":"cm","protocol":null,"address":null,"port":null,"userName":null,"password":null},"overrideParams":null}]}
 
============
Delete chart
===========
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/chart/simple-app/0.1.0 -X DELETE
 Curl OK
  Response: 204
  Body:
 
================
Get apps - empty
================
curl -sw %{http_code} http://helmadmin:itisasecret@localhost:8112/helm/charts
 Curl OK
  Response: 200
  Body: {"charts":[]}
 
Test result  All tests ok
End of test

...


Run the DMaaP Adapter Service Docker Container

The DMaaP Adapter Service needs two configuration files, one for the application specific parameters and one for the types the application supports.

...

The following parameters need to be configured according to hosts and ports (these settings may need to be adjusted to your environment):

  • ics-base-url: https://information-service-container:8434/
  • dmaap-base-url: https://message-router:3905/  (needed when data is received from the Dmaap message router)
  • bootstrap-servers: message-router-kafka:9092 (needed when data is received on a kafka topic)


Code Block
languageyml
titleapplication.yaml
spring:
  profiles:
    active: prod
  main:
    allow-bean-definition-overriding: true
  aop:
    auto: false
management:
  endpoints:
    web:
      exposure:
        # Enabling of springboot actuator features. See springboot documentation.
        include: "loggers,logfile,health,info,metrics,threaddump,heapdump"
springdoc:
  show-actuator: true
logging:
  # Configuration of logging
  level:
    ROOT: ERROR
    org.springframework: ERROR
    org.springframework.data: ERROR
    org.springframework.web.reactive.function.client.ExchangeFunctions: ERROR
    org.oran.dmaapadapter: INFO
  file:
    name: /var/log/dmaap-adapter-service/application.log
server:
   # Configuration of the HTTP/REST server. The parameters are defined and handeled by the springboot framework.
   # See springboot documentation.
   port : 8435
   http-port: 8084
   ssl:
      key-store-type: JKS
      key-store-password: policy_agent
      key-store: /opt/app/dmaap-adapter-service/etc/cert/keystore.jks
      key-password: policy_agent
      key-alias: policy_agent
app:
  webclient:
    # Configuration of the trust store used for the HTTP client (outgoing requests)
    # The file location and the password for the truststore is only relevant if trust-store-used == true
    # Note that the same keystore as for the server is used.
    trust-store-used: false
    trust-store-password: policy_agent
    trust-store: /opt/app/dmaap-adapter-service/etc/cert/truststore.jks
    # Configuration of usage of HTTP Proxy for the southbound accesses.
    # The HTTP proxy (if configured) will only be used for accessing NearRT RIC:s
    http.proxy-host:
    http.proxy-port: 0
  ics-base-url: https://information-service-container:8434
  # Location of the component configuration file. The file will only be used if the Consul database is not used;
  # configuration from the Consul will override the file.
  configuration-filepath: /opt/app/dmaap-adapter-service/data/application_configuration.json
  dmaap-base-url: https://message-router:3905
  # The url used to address this component. This is used as a callback url sent to other components.
  dmaap-adapter-base-url: https://dmaapadapterservice:8435
  zip-output: false
  # KAFKA boostrap server. This is only needed if there are Information Types that uses a kafkaInputTopic
  kafka:
    bootstrap-servers: message-router-kafka:9092


Create the file application_configuration.json according to one of the alternatives below.


Code Block
titleapplication_configuration.json without kafka type
{
  "types": [
     {
        "id": "ExampleInformationType",
        "dmaapTopicUrl": "/events/unauthenticated.dmaapadp.json/dmaapadapterproducer/msgs?timeout=15000&limit=100",
        "useHttpProxy": false
     }
  ]
}
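Before mounting it into the container, the type configuration can be sanity-checked as well. A sketch (not part of the official instructions) assuming the no-kafka variant above is saved in the current directory; it verifies that each type has an id and that dmaapTopicUrl points under /events/:

```shell
# Write the no-kafka type configuration shown above to a file.
cat > application_configuration.json <<'EOF'
{
  "types": [
     {
        "id": "ExampleInformationType",
        "dmaapTopicUrl": "/events/unauthenticated.dmaapadp.json/dmaapadapterproducer/msgs?timeout=15000&limit=100",
        "useHttpProxy": false
     }
  ]
}
EOF
# Check the structural invariants the adapter relies on (requires python3).
python3 - <<'EOF'
import json
types = json.load(open("application_configuration.json"))["types"]
for t in types:
    assert t["id"], "every type needs a non-empty id"
    assert t["dmaapTopicUrl"].startswith("/events/"), "topic url should be under /events/"
print("types OK:", [t["id"] for t in types])
EOF
```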


Code Block
titleapplication_configuration.json with kafka type
{
  "types": [
     {
        "id": "ExampleInformationType",
        "dmaapTopicUrl": "/events/unauthenticated.dmaapadp.json/dmaapadapterproducer/msgs?timeout=15000&limit=100",
        "useHttpProxy": false
     },
     {
        "id": "ExampleInformationTypeKafka",
        "kafkaInputTopic": "unauthenticated.dmaapadp_kafka.text",
        "useHttpProxy": false
     }
  ]
}


Start the Dmaap Adapter Service in a separate shell with the following command:

docker run --rm \
-v <absolute-path-to-config-file>/application.yaml:/opt/app/dmaap-adapter-service/config/application.yaml \
-v <absolute-path-to-config-file>/application_configuration.json:/opt/app/dmaap-adapter-service/data/application_configuration.json \
-p 9086:8084 -p 9087:8435 --network=nonrtric-docker-net --name=dmaapadapterservice nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-plt-dmaapadapter:1.2.0

Setup jobs to produce data according to the types in application_configuration.json

Create a file job1.json with the job definition (replace the paths <url-for-job-data-delivery> and <url-for-job-status-delivery> to fit your environment):


Code Block
titlejob1.json
{
  "info_type_id": "ExampleInformationType",
  "job_result_uri": "<url-for-job-data-delivery>",
  "job_owner": "job1owner",
  "status_notification_uri": "<url-for-job-status-delivery>",
  "job_definition": {}
}
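Since job1.json contains placeholder URLs, a simple guard before the PUT can catch a forgotten replacement. A sketch (not part of the official instructions; the file written here is the unedited listing above, so the warning fires):

```shell
# Write job1.json exactly as listed above, placeholders included.
cat > job1.json <<'EOF'
{
  "info_type_id": "ExampleInformationType",
  "job_result_uri": "<url-for-job-data-delivery>",
  "job_owner": "job1owner",
  "status_notification_uri": "<url-for-job-status-delivery>",
  "job_definition": {}
}
EOF
# Warn if any "<url-for-...>" placeholder was left unreplaced before the PUT.
if grep -q '<url-for-' job1.json; then
  echo "job1.json still contains placeholder URLs - edit them before creating the job"
fi
```

A job registered with an unedited placeholder would have an unusable callback address, so failing early here saves a debugging round-trip.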


Create job1 for type 'ExampleInformationType'

curl -k -X PUT -H Content-Type:application/json https://localhost:8434/data-consumer/v1/info-jobs/job1 --data-binary @job1.json

Check that the job has been enabled - job accepted by the Dmaap Adapter Service

...

Create a file job2.json with the job definition (replace the paths <url-for-job-data-delivery> and <url-for-job-status-delivery> to fit your environment):


Code Block
titlejob2.json
{
  "info_type_id": "ExampleInformationTypeKafka",
  "job_result_uri": "<url-for-job-data-delivery>",
  "job_owner": "job1owner",
  "status_notification_uri": "<url-for-job-status-delivery>",
  "job_definition": {}
}


Create job2 for type 'ExampleInformationTypeKafka'

curl -k -X PUT -H Content-Type:application/json https://localhost:8434/data-consumer/v1/info-jobs/job2 --data-binary @job2.json

Check that the job has been enabled - job accepted by the Dmaap Adapter Service

...

Data posted on the kafka topic unauthenticated.dmaapadp_kafka.text will be delivered to the path as specified in the job2.json.

Run the Dmaap Mediator Producer Docker Container

The Dmaap Mediator Producer needs one configuration file for the types the application supports.

...

Create the file type_config.json with the content below:


Code Block
titletype_config.json
{
    "types": [
        {
            "id": "STD_Fault_Messages",
            "dmaapTopicUrl": "/events/unauthenticated.dmaapmed.json/dmaapmediatorproducer/STD_Fault_Messages?timeout=15000&limit=100"
        }
    ]
}


There are a number of environment variables that need to be set when starting the application. See these example settings:

...

Start the Dmaap Mediator Producer in a separate shell with the following command:

docker run --rm \
-v <absolute-path-to-config-file>/type_config.json:/configs/type_config.json \
-p 8885:8085 -p 8985:8185 --network=nonrtric-docker-net --name=dmaapmediatorservice \
-e "INFO_COORD_ADDR=https://information-service-container:8434" \
-e "DMAAP_MR_ADDR=https://message-router:3905" \
-e "LOG_LEVEL=Debug" \
-e "INFO_PRODUCER_HOST=https://dmaapmediatorservice" \
-e "INFO_PRODUCER_PORT=8185" \
nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-plt-dmaapmediatorproducer:1.1.0

Setup jobs to produce data according to the types in type_config.json

Create a file job3.json with the job definition (replace the paths <url-for-job-data-delivery> and <url-for-job-status-delivery> to fit your environment):

Code Block
titlejob3.json
{
  "info_type_id": "STD_Fault_Messages",
  "job_result_uri": "<url-for-job-data-delivery>",
  "job_owner": "job3owner",
  "status_notification_uri": "<url-for-job-status-delivery>",
  "job_definition": {}
}
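A useful cross-check before creating the job: the job's info_type_id should match a type id registered in type_config.json, otherwise ICS has no producer registered for the requested type. A sketch (not part of the official instructions) using the two listings above:

```shell
# Write type_config.json as listed in the Dmaap Mediator Producer section above.
cat > type_config.json <<'EOF'
{
    "types": [
        {
            "id": "STD_Fault_Messages",
            "dmaapTopicUrl": "/events/unauthenticated.dmaapmed.json/dmaapmediatorproducer/STD_Fault_Messages?timeout=15000&limit=100"
        }
    ]
}
EOF
# Write job3.json as listed above.
cat > job3.json <<'EOF'
{
  "info_type_id": "STD_Fault_Messages",
  "job_result_uri": "<url-for-job-data-delivery>",
  "job_owner": "job3owner",
  "status_notification_uri": "<url-for-job-status-delivery>",
  "job_definition": {}
}
EOF
# Verify the job's type id is one of the producer's registered types (requires python3).
python3 - <<'EOF'
import json
type_ids = {t["id"] for t in json.load(open("type_config.json"))["types"]}
job_type = json.load(open("job3.json"))["info_type_id"]
assert job_type in type_ids, f"{job_type} not registered in type_config.json"
print("job3 type matches:", job_type)
EOF
```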


Create job3 for type 'STD_Fault_Messages'

curl -k -v -X PUT -H Content-Type:application/json https://localhost:8434/data-consumer/v1/info-jobs/job3 --data-binary @job3.json

Check that the job has been enabled - job accepted by the Dmaap Mediator Producer

curl -k https://localhost:8434/A1-EI/v1/eijobs/job3/status
{"eiJobStatus":"ENABLED"}

Data posted on the message router topic unauthenticated.dmaapmed.json will be delivered to the path as specified in the job3.json.


Run SME CAPIF Core

Start the CAPIF Core in a separate shell with the following command:

docker run --rm -v \
<absolute-path-to-config-file>/type_config.json:/configs/type_config.json \
-p 8090:8090 --network=nonrtric-docker-net --name=capifcore \
nexus3.o-ran-sc.org:10002/o-ran-sc/nonrtric-plt-capifcore:1.0.0

This is a basic start command without helm. See repo README for more options.

Check that the component has started.

curl localhost:8090
Hello, World!

...




Run use cases

Within NON-RT RIC a number of use case implementations are provided. Follow the links below to see how to run them.

...