VM Minimum Requirements for RIC 22

NOTE: sudo access is required for installation

Getting Started PDF

Step 1: Obtaining the Deployment Scripts and Charts

Run ...

$ sudo -i

$ git clone http://gerrit.o-ran-sc.org/r/it/dep -b bronze

$ cd dep
$ git submodule update --init --recursive --remote
Step 2: Generation of cloud-init Script 

Run ...

$ cd tools/k8s/bin
$ ./gen-cloud-init.sh   # generates the stack install script for what the RIC needs

Note: The generated script will be used to prepare the Kubernetes cluster for RIC deployment (k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh)

Step 3: Installation of Kubernetes, Helm, Docker, etc.

Run ...

$ ./k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh

NOTE: Be patient, as this takes some time to complete. Upon completion of this script, the VM will be rebooted. You will then need to log in to the VM and become root once again.

$ sudo -i

$ kubectl get pods --all-namespaces  # There should be 9 pods running in the kube-system namespace.
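Since the pods come up gradually after the reboot, the verification above can be scripted as a polling loop. This is a minimal sketch, not part of the official scripts: the `wait_running` helper name and the 5-second interval are our own, and the listing command to poll is passed as arguments so it can be any `kubectl get pods` invocation.

```shell
# Sketch: poll a pod-listing command until at least N pods report Running.
# First argument is the expected count; the rest is the listing command.
wait_running() {
  want=$1; shift
  until [ "$("$@" 2>/dev/null | grep -c "[[:space:]]Running[[:space:]]")" -ge "$want" ]; do
    sleep 5
  done
}

# Live usage: wait_running 9 kubectl get pods -n kube-system
```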

Step 4:  Deploy RIC using Recipe

Run ...

$ cd dep/bin
$ ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml
$ kubectl get pods -n ricplt   # There should be ~16 pods running in the ricplt namespace.  
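The "~16 pods" check can be done mechanically rather than by eye. A minimal sketch, assuming the default wide-column `kubectl get pods` output format (the `count_running` helper name is ours):

```shell
# Count pods reported as Running in a `kubectl get pods` listing read on stdin.
count_running() {
  grep -c "[[:space:]]Running[[:space:]]"
}

# Live usage: kubectl get pods -n ricplt | count_running   # expect ~16
```

As the comments below note, 15 Running pods is also normal for this release.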


Step 5:  Onboarding a Test xApp (HelloWorld xApp)

NOTE: If using an Ubuntu release older than 18.04, this section will fail!
Run...

$ cd dep

# Create the file that will contain the URL used to start the on-boarding process...
$ echo '{ "config-file.json_url": "https://gerrit.o-ran-sc.org/r/gitweb?p=ric-app/hw.git;a=blob_plain;f=init/config-file.json;hb=HEAD" }' > onboard.hw.url

# Start on-boarding process...

$ curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download"  --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"


# Verify list of all on-boarded xApps...
$ curl --location --request GET "http://$(hostname):32080/onboard/api/v1/charts"
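The descriptor written in this step is a one-key JSON object pointing the onboarder at the xApp's config file. A hedged sketch that rebuilds it and sanity-checks the JSON before POSTing (assumes python3 is available on the host; the heredoc is just an alternative to the echo above):

```shell
# Rebuild the onboarding descriptor and verify it parses as valid JSON
# before POSTing it to the onboarder.
cat > onboard.hw.url <<'EOF'
{ "config-file.json_url": "https://gerrit.o-ran-sc.org/r/gitweb?p=ric-app/hw.git;a=blob_plain;f=init/config-file.json;hb=HEAD" }
EOF
python3 -m json.tool onboard.hw.url > /dev/null && echo "descriptor OK"
```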
Step 6:  Deploy Test xApp (HelloWorld xApp)

Run ...

#  Verify xApp is not running...  This may take a minute, so re-run the command below.

$ kubectl get pods -n ricxapp


# Call xApp Manager to deploy HelloWorld xApp...

$ curl --location --request POST "http://$(hostname):32080/appmgr/ric/v1/xapps"  --header 'Content-Type: application/json'  --data-raw '{"xappName": "hwxapp"}'


#  Verify xApp is running...

$ kubectl get pods -n ricxapp


#  View logs...

$ kubectl logs -n ricxapp <name of POD retrieved from statement above>
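Looking up the pod name by hand can be avoided by scanning the listing's first column. A minimal sketch (the `pod_named` helper is ours; the example pod name is taken from the comments below):

```shell
# Print the first pod whose name starts with the given prefix, reading a
# `kubectl get pods` listing from stdin.
pod_named() {
  awk -v p="$1" '$1 ~ "^"p { print $1; exit }'
}

# Live usage:
# kubectl logs -n ricxapp "$(kubectl get pods -n ricxapp | pod_named ricxapp-hwxapp)"
```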


Helpful Hints

kubectl commands:

kubectl get pods -n <namespace>   # lists the Pods running in the given namespace

kubectl logs -n <namespace> <name_of_running_pod>   # prints the logs of a running Pod







27 Comments

  1. Thanks for the guidelines. It works well.

  2. Hi experts,

    after deployment, I have only 15 pods in the ricplt namespace, is it normal?

    The "Step 4" says: "kubectl get pods -n ricplt   # There should be ~16 pods running in the ricplt namespace.".

    (18:11 dabs@ricpltbronze bin) > sudo kubectl get pods --all-namespaces
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5644d7b6d9-clvtf 1/1 Running 5 32h
    kube-system coredns-5644d7b6d9-jwxnm 1/1 Running 5 32h
    kube-system etcd-ricpltbronze 1/1 Running 11 32h
    kube-system kube-apiserver-ricpltbronze 1/1 Running 28 32h
    kube-system kube-controller-manager-ricpltbronze 1/1 Running 9 32h
    kube-system kube-flannel-ds-amd64-mrwn2 1/1 Running 16 32h
    kube-system kube-proxy-zrtl8 1/1 Running 6 32h
    kube-system kube-scheduler-ricpltbronze 1/1 Running 8 32h
    kube-system tiller-deploy-68bf6dff8f-wbmwl 1/1 Running 4 32h
    ricinfra deployment-tiller-ricxapp-d4f98ff65-6h4n4 1/1 Running 0 3h13m
    ricinfra tiller-secret-generator-tgkzf 0/1 Completed 0 132m
    ricinfra tiller-secret-generator-zcx72 0/1 Error 0 3h13m
    ricplt deployment-ricplt-a1mediator-66fcf76c66-h6rp2 1/1 Running 0 40m
    ricplt deployment-ricplt-alarmadapter-64d559f769-glb5z 1/1 Running 0 30m
    ricplt deployment-ricplt-appmgr-6fd6664755-2mxjb 1/1 Running 0 42m
    ricplt deployment-ricplt-e2mgr-8479fb5ff8-9zqbp 1/1 Running 0 41m
    ricplt deployment-ricplt-e2term-alpha-bcb457df4-4dz62 1/1 Running 0 40m
    ricplt deployment-ricplt-jaegeradapter-84558d855b-tmqqb 1/1 Running 0 39m
    ricplt deployment-ricplt-o1mediator-d8b9fcdf-f4sgm 1/1 Running 0 34m
    ricplt deployment-ricplt-rtmgr-9d4847788-kf6r4 1/1 Running 10 41m
    ricplt deployment-ricplt-submgr-65dc9f4995-gt5kb 1/1 Running 0 40m
    ricplt deployment-ricplt-vespamgr-7458d9b5d-klh9l 1/1 Running 0 39m
    ricplt deployment-ricplt-xapp-onboarder-546b86b5c4-xkcpt 2/2 Running 0 42m
    ricplt r4-infrastructure-kong-6c7f6db759-7xjqm 2/2 Running 21 3h13m
    ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-jfkdg 2/2 Running 2 3h13m
    ricplt r4-infrastructure-prometheus-server-5fd7695-pprg2 1/1 Running 2 3h13m
    ricplt statefulset-ricplt-dbaas-server-0 1/1 Running 0 43m

    1. Anonymous

      After deployment I'm also getting only 15 pods running. Is it normal or do I need to worry about that?

      1. it's normal, and you are good to go. just try policy management.

  3. Hi,

    Even though deployment of RIC plt did not give any errors, most of the RIC plt and infra pods remain in error state. 

    kubectl get pods --all-namespaces
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5644d7b6d9-dz774 1/1 Running 1 10h
    kube-system coredns-5644d7b6d9-fp586 1/1 Running 1 10h
    kube-system etcd-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system kube-apiserver-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system kube-controller-manager-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system kube-flannel-ds-amd64-b4l97 1/1 Running 1 10h
    kube-system kube-proxy-fxfrk 1/1 Running 1 10h
    kube-system kube-scheduler-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system tiller-deploy-68bf6dff8f-jvtk7 1/1 Running 1 10h
    ricinfra deployment-tiller-ricxapp-d4f98ff65-hcnqf 0/1 ContainerCreating 0 9h
    ricinfra tiller-secret-generator-d7kmk 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-a1mediator-66fcf76c66-l2z7m 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-alarmadapter-64d559f769-7d8fq 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-appmgr-6fd6664755-7lp8q 0/1 Init:ImagePullBackOff 0 9h
    ricplt deployment-ricplt-e2mgr-8479fb5ff8-ggnx8 0/1 ErrImagePull 0 9h
    ricplt deployment-ricplt-e2term-alpha-bcb457df4-dkdbc 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-jaegeradapter-84558d855b-bpzcv 1/1 Running 0 9h
    ricplt deployment-ricplt-o1mediator-d8b9fcdf-5ptcs 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-rtmgr-9d4847788-rvnrx 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-submgr-65dc9f4995-cbhvc 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-vespamgr-7458d9b5d-bkzpg 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-xapp-onboarder-546b86b5c4-g2dnt 1/2 ImagePullBackOff 0 9h
    ricplt r4-infrastructure-kong-6c7f6db759-4czbj 2/2 Running 2 9h
    ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-gwxzz 2/2 Running 0 9h
    ricplt r4-infrastructure-prometheus-server-5fd7695-phcqs 1/1 Running 0 9h
    ricplt statefulset-ricplt-dbaas-server-0 0/1 ImagePullBackOff 0 9h


    any clue?

    1. Anonymous

      The ImagePullBackOff and ErrImagePull errors are all for container images built from O-RAN SC code.  It appears that there is a problem with the docker engine in your installation fetching images from the O-RAN SC docker registry.  Oftentimes this is due to a local firewall blocking such connections.

      You may want to try:  docker pull nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1:2.1.9 to see if your docker engine can retrieve this image. 

      1. Thanks for the information.

        I think you are right, even docker pull nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1:2.1.9 is giving an error of connection refused..

        although I am checking for the corporate firewall to disable it, the strange thing is docker pull hello-world and other such pulls are working fine.

        1. "Connection refused" error does suggest network connection problem.

          These O-RAN SC docker registries use ports 10001 ~ 10004, in particular 10002 for all released docker images.  They are not on the default docker registry port of 5000.  It is possible that your local firewall has a rule allowing outgoing connections to port 5000, but not these ports.

          1. Anonymous

            Thanks for the explanation. you are right these ports were blocked on firewall, working fine now.
            currently i am stuck with opening o1 dashboard, trying it on smo machine.

          2. thanks for the explanation. you are right these particular ports were getting blocked by firewall, got it working now.

            currently i am stuck in opening o1 dashboard, trying it on smo machine.

  4. Anonymous

    Hi all,

    after executing the below POST command, I am not getting any response.

    # Start on-boarding process...

    curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download"  --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"


    any clues/suggestions?

    1. it's probably because port 32080 is already occupied by kube-proxy.

      you can either try using the direct IP address, or use port-forwarding as a workaround. Please refer to my post: https://blog.csdn.net/jeffyko/article/details/107426626

      1. Anonymous

        Thanks a lot, it resolved the issue and we are able to proceed further,

        After executing the below command we are seeing the below logs. Is this the correct behavior?? In case this is the error, how to resolve this??

        $ kubectl logs -n ricxapp <name of POD retrieved from statement above>

        Error from server (BadRequest): container "hwxapp" in pod "ricxapp-hwxapp-684d8d675b-xz99d" is waiting to start: ContainerCreating

        1. make sure the hwxapp Pod is running. It needs to pull the 'hwxapp' docker image, which may take some time.

          kubectl get pods -n ricxapp | grep hwxapp

      2. Anonymous

        Hi Zhengwei, I have tried the solution in your blog, but I still receive the same error: AttributeError:  'ConnectionError' object has no attribute 'message'. Do you have any clues? Thanks.

      3. EDIT: Nevermind, silly me.  For anyone wondering, the port-forwarding command needs to keep running for the forwarding to stay active.  So I just run it in a Screen tab to keep it running there, and run the curl commands in a different tab.

        Hi, I've tried this workaround but the port forwarding command just hangs and never completes.  Anyone experiencing the same issue? Kube cluster and pods seem healthy:

        root@ubuntu1804:~# kubectl -v=4 port-forward r4-infrastructure-kong-6c7f6db759-kkjtt 32088:32080 -n ricplt

        Forwarding from 127.0.0.1:32088 -> 32080

        (hangs after the previous message)



  5. Anonymous

    In my case, the script only works with helm 2.x. how about others? 

  6. THANK You! This is great!

  7. Anonymous

    Hi all,

    I use the source code to build the hw_unit_test container and I execute the hw_unit_test, it stops and returns "SHAREDDATALAYER_ABORT, file src/redis/asynchirediscommanddispatcher.cpp, line 206: Required Redis module extension commands not available.
    Aborted (core dumped)".

    Thus, I can't test my hwxapp.

    any clues/suggestions? Thanks!

      

  8. Anonymous

    Hi all,

    There is a pod crash every time. When I run the following command, I got one crashed pod. Did you meet the same case? Thanks so much for giving me some suggestions.

    root@chuanhao-HP-EliteDesk-800-G4-SFF:~/dep/bin# kubectl get pods -n ricplt

    NAME READY STATUS RESTARTS AGE

    deployment-ricplt-a1mediator-66fcf76c66-f6kbh 1/1 Running 1 2m16s

    deployment-ricplt-alarmadapter-64d559f769-twfk7 1/1 Running 0 46s

    deployment-ricplt-appmgr-6fd6664755-7rs4g 1/1 Running 0 3m49s

    deployment-ricplt-e2mgr-8479fb5ff8-j9nzf 1/1 Running 0 3m

    deployment-ricplt-e2term-alpha-bcb457df4-r22nb 1/1 Running 0 2m39s

    deployment-ricplt-jaegeradapter-84558d855b-xfgd5 1/1 Running 0 78s

    deployment-ricplt-o1mediator-d8b9fcdf-tpz7v 1/1 Running 0 64s

    deployment-ricplt-rtmgr-9d4847788-scrxf 1/1 Running 1 3m26s

    deployment-ricplt-submgr-65dc9f4995-knzjd 1/1 Running 0 113s

    deployment-ricplt-vespamgr-7458d9b5d-mdmjx 1/1 Running 0 96s

    deployment-ricplt-xapp-onboarder-546b86b5c4-z2qd6 2/2 Running 0 4m16s

    r4-infrastructure-kong-6c7f6db759-44wdx 1/2 CrashLoopBackOff 6 4m52s

    r4-infrastructure-prometheus-alertmanager-75dff54776-qlp4g 2/2 Running 0 4m52s

    r4-infrastructure-prometheus-server-5fd7695-lr6z7 1/1 Running 0 4m52s

    statefulset-ricplt-dbaas-server-0 1/1 Running 0 4m33s


  9. Hi All,

    I am trying to deploy RIC and at step 4, while running the command "./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml", I am getting the below error:

    root@ubuntu:~/dep/bin# ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml
    Error: unknown command "home" for "helm"
    Run 'helm --help' for usage.
    Successfully packaged chart and saved it to: /tmp/ric-common-3.3.2.tgz
    cp: cannot create regular file '/repository/local/': No such file or directory
    Successfully packaged chart and saved it to: /tmp/aux-common-3.0.0.tgz
    cp: cannot create regular file '/repository/local/': No such file or directory
    Successfully packaged chart and saved it to: /tmp/nonrtric-common-2.0.0.tgz
    cp: cannot create regular file '/repository/local/': No such file or directory
    Error: open /repository/local/index.yaml822716896: no such file or directory
    Error: no repositories configured
    Error: looks like "http://127.0.0.1:8879/charts" is not a valid chart repository or cannot be reached: Get "http://127.0.0.1:8879/charts/index.yaml": dial tcp 127.0.0.1:8879: connect: connection refused.

    I am unable to resolve this issue. Could any one please help to resolve this.

    Thanks,

    1. I had a different error but maybe it helps, I updated in the example_recipe.yaml file the helm version to 2.17.0.

      I had this error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached:

    2. Try this, it might help:
      helm init --client-only --skip-refresh
      helm repo rm stable
      helm repo add stable https://charts.helm.sh/stable
  10. Hey all,

    I am trying to deploy the RIC in its Cherry release version.

    But i am facing some issues:

    a) 

    The r4-alarmadapter isn't getting released

    End of deploy-ric-platform script
    Error: release r4-alarmadapter failed: configmaps "configmap-ricplt-alarmadapter-appconfig" already exists
    root@max-near-rt-ric-cherry:/home/ubuntu/dep/bin# helm ls --all r4-alarmadapter
    + helm ls --all r4-alarmadapter
    NAME   REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
    r4-alarmadapter  1 Mon Jan 11 09:14:08 2021 FAILED alarmadapter-3.0.0 1.0 ricplt

    b)

    The pods aren't getting to their Running state. This might be connected to issue a) but I am not sure.

    kubectl get pods --all-namespaces:

    NAMESPACE     NAME                                                         READY   STATUS              RESTARTS   AGE
    kube-system   coredns-5644d7b6d9-794qq                                     1/1     Running             1          29m
    kube-system   coredns-5644d7b6d9-ph6tt                                     1/1     Running             1          29m
    kube-system   etcd-max-near-rt-ric-cherry                                  1/1     Running             1          28m
    kube-system   kube-apiserver-max-near-rt-ric-cherry                        1/1     Running             1          28m
    kube-system   kube-controller-manager-max-near-rt-ric-cherry               1/1     Running             1          28m
    kube-system   kube-flannel-ds-ljz7w                                        1/1     Running             1          29m
    kube-system   kube-proxy-cdkf4                                             1/1     Running             1          29m
    kube-system   kube-scheduler-max-near-rt-ric-cherry                        1/1     Running             1          28m
    kube-system   tiller-deploy-68bf6dff8f-xfkwd                               1/1     Running             1          27m
    ricinfra      deployment-tiller-ricxapp-d4f98ff65-wwbhx                    0/1     ContainerCreating   0          25m
    ricinfra      tiller-secret-generator-2nsb2                                0/1     ContainerCreating   0          25m
    ricplt        deployment-ricplt-a1mediator-66fcf76c66-njphv                0/1     ContainerCreating   0          23m
    ricplt        deployment-ricplt-appmgr-6fd6664755-r577d                    0/1     Init:0/1            0          24m
    ricplt        deployment-ricplt-e2mgr-6dfb6c4988-tb26k                     0/1     ContainerCreating   0          24m
    ricplt        deployment-ricplt-e2term-alpha-64965c46c6-5d59x              0/1     ContainerCreating   0          23m
    ricplt        deployment-ricplt-jaegeradapter-76ddbf9c9-fw4sh              0/1     ContainerCreating   0          22m
    ricplt        deployment-ricplt-o1mediator-d8b9fcdf-86qgg                  0/1     ContainerCreating   0          22m
    ricplt        deployment-ricplt-rtmgr-6d559897d8-jvbsb                     0/1     ContainerCreating   0          24m
    ricplt        deployment-ricplt-submgr-65bcd95469-nc8pq                    0/1     ContainerCreating   0          23m
    ricplt        deployment-ricplt-vespamgr-7458d9b5d-xlt8m                   0/1     ContainerCreating   0          22m
    ricplt        deployment-ricplt-xapp-onboarder-5958856fc8-kw9jx            0/2     ContainerCreating   0          25m
    ricplt        r4-infrastructure-kong-6c7f6db759-q5psw                      0/2     ContainerCreating   0          25m
    ricplt        r4-infrastructure-prometheus-alertmanager-75dff54776-mb8hn   0/2     ContainerCreating   0          25m
    ricplt        r4-infrastructure-prometheus-server-5fd7695-bvk74            1/1     Running             0          25m
    ricplt        statefulset-ricplt-dbaas-server-0                            0/1     ContainerCreating   0          25m

    Thanks in advance!
  11. Hi All,

    I am trying to install rmr_nng library as a requirement of ric-app-kpimon  xapp.

    https://github.com/o-ran-sc/ric-plt-lib-rmr

    https://github.com/nanomsg/nng

    but getting below error(snippet):

    -- Installing: /root/ric-plt-lib-rmr/.build/lib/cmake/nng/nng-config-version.cmake
    -- Installing: /root/ric-plt-lib-rmr/.build/bin/nngcat
    -- Set runtime path of "/root/ric-plt-lib-rmr/.build/bin/nngcat" to "/root/ric-plt-lib-rmr/.build/lib"
    [ 40%] No test step for 'ext_nng'
    [ 41%] Completed 'ext_nng'
    [ 41%] Built target ext_nng
    Scanning dependencies of target nng_objects
    [ 43%] Building C object src/rmr/nng/CMakeFiles/nng_objects.dir/src/rmr_nng.c.o
    In file included from /root/ric-plt-lib-rmr/src/rmr/nng/src/rmr_nng.c:70:0:
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c: In function ‘roll_tables’:
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c:406:28: error: ‘uta_ctx_t {aka struct uta_ctx}’ has no member named ‘rtgate’; did you mean ‘rtable’?
    pthread_mutex_lock( ctx->rtgate ); // must hold lock to move to active
    ^~~~~~
    rtable
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c:409:30: error: ‘uta_ctx_t {aka struct uta_ctx}’ has no member named ‘rtgate’; did you mean ‘rtable’?
    pthread_mutex_unlock( ctx->rtgate );
    ^~~~~~
    rtable
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c: In function ‘parse_rt_rec’:
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c:858:12: error: ‘uta_ctx_t {aka struct uta_ctx}’ has no member named ‘rtable_ready’; did you mean ‘rmr_ready’?
    ctx->rtable_ready = 1; // route based sends can now happen
    ^~~~~~~~~~~~
    rmr_ready

    ...

    ...

    I am unable to resolve this issue. Could any one please help to resolve this.

    Thanks,


  12. Hi All, 

    I am trying to get this up and running (first time ever).  

    For anyone interested, just wanted to highlight that if Step 4 fails with the following:

    root@ubuntu1804:~/dep/bin# ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml
    Successfully packaged chart and saved it to: /tmp/ric-common-3.3.2.tgz
    Successfully packaged chart and saved it to: /tmp/aux-common-3.0.0.tgz
    Successfully packaged chart and saved it to: /tmp/nonrtric-common-2.0.0.tgz
    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
    ****************************************************************************************************************
                                                         ERROR                                                      
    ****************************************************************************************************************
    Can't locate the ric-common helm package in the local repo. Please make sure that it is properly installed.
    ****************************************************************************************************************

    You just need to initialize Helm repositories with the following (disregard the suggestion in the above error output as it's deprecated from Nov 2020):

    helm init --stable-repo-url=https://charts.helm.sh/stable --client-only 

    and then run Step 4.



  13. Anonymous

    Hi,

    We are trying to compile the pods on Arm and test them on an Arm-based platform. We are failing for xapp-onboarder, ric-plt-e, ric-plt-alarmadapter. Has anyone tried running the O-RAN RIC on Arm? Could someone point us to the recipe to build the images from source?
