VM Minimum Requirements for RIC

NOTE: sudo access is required for installation

Getting Started PDF

Step 1: Obtaining the Deployment Scripts and Charts

Run ...

$ sudo -i

$ git clone http://gerrit.o-ran-sc.org/r/it/dep -b bronze

$ cd dep
$ git submodule update --init --recursive --remote
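
Before moving on, you can optionally confirm that the submodules were fetched. A minimal check, assuming you are still inside the dep directory:

$ git submodule status   # each submodule should report a commit hash; a leading '-' means it was not initialized
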
Step 2: Generation of cloud-init Script 

Run ...

$ cd tools/k8s/bin
$ ./gen-cloud-init.sh   # generates the stack installation script that the RIC needs

Note: The generated script (k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh) will be used to prepare the Kubernetes cluster for the RIC deployment.
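
To confirm the script was actually generated before running it, a quick optional check (the exact file name encodes the Kubernetes, Helm and Docker versions and may differ slightly):

$ ls -l k8s-1node-cloud-init-*.sh   # still inside tools/k8s/bin; lists the generated one-node install script(s)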

Step 3: Installation of Kubernetes, Helm, Docker, etc.

Run ...

$ ./k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh

NOTE: Be patient, as this takes some time to complete. Upon completion of this script, the VM will be rebooted. You will then need to log in to the VM and run sudo once again.

$ sudo -i

$ kubectl get pods --all-namespaces   # There should be 9 pods running in the kube-system namespace.
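
Besides the pod check above, the following optional commands can help confirm that the stack installed by the script is healthy (exact version output will vary with the generated script):

$ kubectl get nodes   # the single node should show STATUS Ready
$ helm version        # with Helm 2.x, both the client and the Tiller server versions should be reported
$ docker version      # confirms the Docker engine is up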

Step 4:  Deploy RIC using Recipe

Run ...

$ cd dep/bin
$ ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml
$ kubectl get pods -n ricplt   # There should be ~16 pods running in the ricplt namespace.  
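
The platform pods can take several minutes to come up, so it may help to keep watching the namespace until everything is Running (or Completed, for job pods). A simple way to do this:

$ watch -n 10 kubectl get pods -n ricplt   # refreshes the pod list every 10 seconds; exit with Ctrl-C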


Step 5: Onboarding a Test xApp (HelloWorld xApp)

NOTE: If using an Ubuntu version older than 18.04, this section will fail!
Run ...

$ cd dep

# Create the file that will contain the URL used to start the on-boarding process...
$ echo '{ "config-file.json_url": "https://gerrit.o-ran-sc.org/r/gitweb?p=ric-app/hw.git;a=blob_plain;f=init/config-file.json;hb=HEAD" }' > onboard.hw.url

# Start on-boarding process...

$ curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download"  --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"


# Verify list of all on-boarded xApps...
$ curl --location --request GET "http://$(hostname):32080/onboard/api/v1/charts"
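
If jq happens to be installed (it is not required by this guide), the chart listing is easier to read when pretty-printed; the hwxapp chart should appear in the output once on-boarding has succeeded:

$ curl --location --request GET "http://$(hostname):32080/onboard/api/v1/charts" | jq .
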
Step 6:  Deploy Test xApp (HelloWorld xApp)

Run ...

# Verify the xApp is not yet running... (it may take a minute for the list to update, so re-run the command below as needed)

$ kubectl get pods -n ricxapp


# Call xApp Manager to deploy HelloWorld xApp...

$ curl --location --request POST "http://$(hostname):32080/appmgr/ric/v1/xapps"  --header 'Content-Type: application/json'  --data-raw '{"xappName": "hwxapp"}'
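
As an additional check, the xApp Manager can be queried for the xApps it knows about. This uses the same base URL as the POST above; hwxapp should show up in the response once the deployment request has been accepted (the exact response format depends on the appmgr version):

$ curl --location --request GET "http://$(hostname):32080/appmgr/ric/v1/xapps"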


# Verify the xApp is running...

$ kubectl get pods -n ricxapp


#  View logs...

$ kubectl logs -n ricxapp <name of POD retrieved from statement above>
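
If you prefer not to copy the pod name by hand, a small one-liner along these lines can look it up automatically (a sketch; it assumes exactly one hwxapp pod is present in the ricxapp namespace):

$ kubectl logs -n ricxapp $(kubectl get pods -n ricxapp -o name | grep hwxapp)   # looks up the hwxapp pod and shows its logs in one step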


Helpful Hints

Kubectl commands:

kubectl get pods -n <namespace>   # lists the Pods running in the given namespace

kubectl logs -n <namespace> <name_of_running_pod>   # shows the logs of a running Pod
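
A few additional general-purpose commands that are often useful when troubleshooting a deployment (none of these are RIC-specific):

kubectl describe pod -n <namespace> <name_of_pod>   # shows pod events, e.g. image pull errors
kubectl get services -n ricplt                      # lists the platform services and their ports
kubectl get events -n ricplt --sort-by=.metadata.creationTimestamp   # recent events in the namespace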







22 Comments

  1. Thanks for the guidelines. It works well.

  2. Hi experts,

    After deployment, I have only 15 pods in the ricplt namespace. Is this normal?

    The "Step 4" says: "kubectl get pods -n ricplt   # There should be ~16 pods running in the ricplt namespace.".

    (18:11 dabs@ricpltbronze bin) > sudo kubectl get pods --all-namespaces
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5644d7b6d9-clvtf 1/1 Running 5 32h
    kube-system coredns-5644d7b6d9-jwxnm 1/1 Running 5 32h
    kube-system etcd-ricpltbronze 1/1 Running 11 32h
    kube-system kube-apiserver-ricpltbronze 1/1 Running 28 32h
    kube-system kube-controller-manager-ricpltbronze 1/1 Running 9 32h
    kube-system kube-flannel-ds-amd64-mrwn2 1/1 Running 16 32h
    kube-system kube-proxy-zrtl8 1/1 Running 6 32h
    kube-system kube-scheduler-ricpltbronze 1/1 Running 8 32h
    kube-system tiller-deploy-68bf6dff8f-wbmwl 1/1 Running 4 32h
    ricinfra deployment-tiller-ricxapp-d4f98ff65-6h4n4 1/1 Running 0 3h13m
    ricinfra tiller-secret-generator-tgkzf 0/1 Completed 0 132m
    ricinfra tiller-secret-generator-zcx72 0/1 Error 0 3h13m
    ricplt deployment-ricplt-a1mediator-66fcf76c66-h6rp2 1/1 Running 0 40m
    ricplt deployment-ricplt-alarmadapter-64d559f769-glb5z 1/1 Running 0 30m
    ricplt deployment-ricplt-appmgr-6fd6664755-2mxjb 1/1 Running 0 42m
    ricplt deployment-ricplt-e2mgr-8479fb5ff8-9zqbp 1/1 Running 0 41m
    ricplt deployment-ricplt-e2term-alpha-bcb457df4-4dz62 1/1 Running 0 40m
    ricplt deployment-ricplt-jaegeradapter-84558d855b-tmqqb 1/1 Running 0 39m
    ricplt deployment-ricplt-o1mediator-d8b9fcdf-f4sgm 1/1 Running 0 34m
    ricplt deployment-ricplt-rtmgr-9d4847788-kf6r4 1/1 Running 10 41m
    ricplt deployment-ricplt-submgr-65dc9f4995-gt5kb 1/1 Running 0 40m
    ricplt deployment-ricplt-vespamgr-7458d9b5d-klh9l 1/1 Running 0 39m
    ricplt deployment-ricplt-xapp-onboarder-546b86b5c4-xkcpt 2/2 Running 0 42m
    ricplt r4-infrastructure-kong-6c7f6db759-7xjqm 2/2 Running 21 3h13m
    ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-jfkdg 2/2 Running 2 3h13m
    ricplt r4-infrastructure-prometheus-server-5fd7695-pprg2 1/1 Running 2 3h13m
    ricplt statefulset-ricplt-dbaas-server-0 1/1 Running 0 43m

    1. Anonymous

      After deployment I'm also getting only 15 pods running. Is it normal or do I need to worry about that?

      1. It's normal, and you are good to go. Just try policy management.

  3. Hi,

    Even though the deployment of the RIC platform did not give any errors, most of the RIC platform and infra pods remain in an error state.

    kubectl get pods --all-namespaces
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5644d7b6d9-dz774 1/1 Running 1 10h
    kube-system coredns-5644d7b6d9-fp586 1/1 Running 1 10h
    kube-system etcd-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system kube-apiserver-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system kube-controller-manager-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system kube-flannel-ds-amd64-b4l97 1/1 Running 1 10h
    kube-system kube-proxy-fxfrk 1/1 Running 1 10h
    kube-system kube-scheduler-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system tiller-deploy-68bf6dff8f-jvtk7 1/1 Running 1 10h
    ricinfra deployment-tiller-ricxapp-d4f98ff65-hcnqf 0/1 ContainerCreating 0 9h
    ricinfra tiller-secret-generator-d7kmk 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-a1mediator-66fcf76c66-l2z7m 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-alarmadapter-64d559f769-7d8fq 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-appmgr-6fd6664755-7lp8q 0/1 Init:ImagePullBackOff 0 9h
    ricplt deployment-ricplt-e2mgr-8479fb5ff8-ggnx8 0/1 ErrImagePull 0 9h
    ricplt deployment-ricplt-e2term-alpha-bcb457df4-dkdbc 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-jaegeradapter-84558d855b-bpzcv 1/1 Running 0 9h
    ricplt deployment-ricplt-o1mediator-d8b9fcdf-5ptcs 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-rtmgr-9d4847788-rvnrx 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-submgr-65dc9f4995-cbhvc 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-vespamgr-7458d9b5d-bkzpg 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-xapp-onboarder-546b86b5c4-g2dnt 1/2 ImagePullBackOff 0 9h
    ricplt r4-infrastructure-kong-6c7f6db759-4czbj 2/2 Running 2 9h
    ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-gwxzz 2/2 Running 0 9h
    ricplt r4-infrastructure-prometheus-server-5fd7695-phcqs 1/1 Running 0 9h
    ricplt statefulset-ricplt-dbaas-server-0 0/1 ImagePullBackOff 0 9h


    any clue?

    1. Anonymous

      The ImagePullBackOff and ErrImagePull errors are all for container images built from O-RAN SC code. It appears that there is a problem with the docker engine in your installation fetching images from the O-RAN SC docker registry. Oftentimes this is due to a local firewall blocking such connections.

      You may want to try:  docker pull nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1:2.1.9 to see if your docker engine can retrieve this image. 

      1. Thanks for the information.

        I think you are right; even docker pull nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1:2.1.9 is giving a "connection refused" error.

        Although I am checking on getting the corporate firewall opened, the strange thing is that docker pull hello-world and other such pulls are working fine.

        1. "Connection refused" error does suggest network connection problem.

          These O-RAN SC docker registries use ports 10001 ~ 10004, in particular 10002 for all released docker images. They do not use the default docker registry port of 5000. It is possible that your local firewall has a rule allowing outgoing connections to port 5000, but not to these ports.
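
          A quick, protocol-agnostic way to check whether the host can reach the registry port at all (a sketch; netcat may need to be installed first):

          nc -vz nexus3.o-ran-sc.org 10002   # "succeeded"/"open" means the port is reachable; a timeout or refusal points to the firewall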

          1. Anonymous

            Thanks for the explanation. You are right, these ports were blocked by the firewall; it is working fine now.
            Currently I am stuck on opening the O1 dashboard, trying it on the SMO machine.

          2. Thanks for the explanation. You are right, these particular ports were being blocked by the firewall; I got it working now.

            Currently I am stuck on opening the O1 dashboard, trying it on the SMO machine.

        2. Hey!

          I am currently trying to deploy the platform and am facing the same issue with the images. Did opening up the firewall on 10002-10004 help you?

  4. Anonymous

    Hi all,

    After executing the below POST command, I am not getting any response.

    # Start on-boarding process...

    curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download"  --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"


    any clues/suggestions?

    1. It's probably because port 32080 has already been occupied by kube-proxy.

      You can either try using the direct IP address, or use port-forwarding as a workaround. Please refer to my post: https://blog.csdn.net/jeffyko/article/details/107426626
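
      For example, the direct-address option can be done by looking up the Kong proxy service and using its cluster IP and port in place of "$(hostname):32080" in the curl commands (a sketch; service names may differ slightly between installs):

      kubectl get svc -n ricplt | grep kong   # note the CLUSTER-IP and PORT of the kong proxy service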

      1. Anonymous

        Thanks a lot, it resolved the issue and we are able to proceed further.

        After executing the below command we see the following log output. Is this the correct behavior? If it is an error, how do we resolve it?

        $ kubectl logs -n ricxapp <name of POD retrieved from statement above>

        Error from server (BadRequest): container "hwxapp" in pod "ricxapp-hwxapp-684d8d675b-xz99d" is waiting to start: ContainerCreating

        1. Make sure the hwxapp Pod is running. It needs to pull the 'hwxapp' docker image, which may take some time.

          kubectl get pods -n ricxapp | grep hwxapp

      2. Anonymous

        Hi Zhengwei, I have tried the solution in your blog, but I still receive the same error: AttributeError:  'ConnectionError' object has no attribute 'message'. Do you have any clues? Thanks.

  5. Anonymous

    In my case, the script only works with helm 2.x. How about others?

  6. THANK You! This is great!

  7. Anonymous

    Hi all,

    I used the source code to build the hw_unit_test container, and when I execute hw_unit_test it stops and returns "SHAREDDATALAYER_ABORT, file src/redis/asynchirediscommanddispatcher.cpp, line 206: Required Redis module extension commands not available.
    Aborted (core dumped)".

    Thus, I can't test my hwxapp.

    any clues/suggestions? Thanks!

      

  8. Anonymous

    Hi all,

    There is a pod crashing every time. When I run the following command, I get one crashed pod. Did anyone run into the same case? Thanks so much for any suggestions.

    root@chuanhao-HP-EliteDesk-800-G4-SFF:~/dep/bin# kubectl get pods -n ricplt

    NAME READY STATUS RESTARTS AGE

    deployment-ricplt-a1mediator-66fcf76c66-f6kbh 1/1 Running 1 2m16s

    deployment-ricplt-alarmadapter-64d559f769-twfk7 1/1 Running 0 46s

    deployment-ricplt-appmgr-6fd6664755-7rs4g 1/1 Running 0 3m49s

    deployment-ricplt-e2mgr-8479fb5ff8-j9nzf 1/1 Running 0 3m

    deployment-ricplt-e2term-alpha-bcb457df4-r22nb 1/1 Running 0 2m39s

    deployment-ricplt-jaegeradapter-84558d855b-xfgd5 1/1 Running 0 78s

    deployment-ricplt-o1mediator-d8b9fcdf-tpz7v 1/1 Running 0 64s

    deployment-ricplt-rtmgr-9d4847788-scrxf 1/1 Running 1 3m26s

    deployment-ricplt-submgr-65dc9f4995-knzjd 1/1 Running 0 113s

    deployment-ricplt-vespamgr-7458d9b5d-mdmjx 1/1 Running 0 96s

    deployment-ricplt-xapp-onboarder-546b86b5c4-z2qd6 2/2 Running 0 4m16s

    r4-infrastructure-kong-6c7f6db759-44wdx 1/2 CrashLoopBackOff 6 4m52s

    r4-infrastructure-prometheus-alertmanager-75dff54776-qlp4g 2/2 Running 0 4m52s

    r4-infrastructure-prometheus-server-5fd7695-lr6z7 1/1 Running 0 4m52s

    statefulset-ricplt-dbaas-server-0 1/1 Running 0 4m33s


  9. Hey all,


    I currently have a couple of issues with my RIC platform deployment:

    1. When I deploy the RIC platform using the example_recipe.yaml file, the process crashes at the end when alarmadapter is being deployed, with

      Error: release r4-alarmadapter failed: configmaps "configmap-ricplt-alarmadapter-appconfig" already exists

    2. The routing manager version 0.6.3 keeps respawning

      ricplt deployment-ricplt-rtmgr-86d8c694f8-lsxlz 0/1 CrashLoopBackOff 4 5m44s

    Any help would be greatly appreciated




  10. Hi All,

    I am trying to deploy the RIC, and at Step 4, while running the command "./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml", I am getting the error below:

    root@ubuntu:~/dep/bin# ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml
    Error: unknown command "home" for "helm"
    Run 'helm --help' for usage.
    Successfully packaged chart and saved it to: /tmp/ric-common-3.3.2.tgz
    cp: cannot create regular file '/repository/local/': No such file or directory
    Successfully packaged chart and saved it to: /tmp/aux-common-3.0.0.tgz
    cp: cannot create regular file '/repository/local/': No such file or directory
    Successfully packaged chart and saved it to: /tmp/nonrtric-common-2.0.0.tgz
    cp: cannot create regular file '/repository/local/': No such file or directory
    Error: open /repository/local/index.yaml822716896: no such file or directory
    Error: no repositories configured
    Error: looks like "http://127.0.0.1:8879/charts" is not a valid chart repository or cannot be reached: Get "http://127.0.0.1:8879/charts/index.yaml": dial tcp 127.0.0.1:8879: connect: connection refused.

    I am unable to resolve this issue. Could anyone please help me resolve it?

    Thanks,
