VM Minimum Requirements for RIC 22

NOTE: sudo access is required for installation

Getting Started PDF

Step 1: Obtaining the Deployment Scripts and Charts

Run ...

$ sudo -i

$ git clone http://gerrit.o-ran-sc.org/r/it/dep -b bronze

$ cd dep
$ git submodule update --init --recursive --remote
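As an optional sanity check (not part of the original steps), confirm that the submodules were fetched before moving on:

$ git submodule status   # every submodule should show a commit hash; a leading '-' means it was not initialized
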
Step 2: Generation of cloud-init Script 

Run ...

$ cd tools/k8s/bin
$ ./gen-cloud-init.sh   # generates the install script for the stack the RIC needs

Note: The generated script (k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh) will be used to prepare the Kubernetes cluster for RIC deployment.
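Optionally, before moving to Step 3, you can verify that the script named in the note above was actually generated (a convenience check, assuming you are still in tools/k8s/bin):

$ ls -l k8s-1node-cloud-init-*.sh   # the generated installer should be listed here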

Step 3: Installation of Kubernetes, Helm, Docker, etc.

Run ...

$ ./k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh

NOTE: Be patient, as this takes some time to complete. Upon completion of this script, the VM will be rebooted. You will then need to log in to the VM and run sudo once again.

$ sudo -i

$ kubectl get pods --all-namespaces  # There should be  9 pods running in kube-system namespace.
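A couple of optional, generic kubectl checks (not specific to this guide) to confirm the single-node cluster is healthy before deploying the RIC:

$ kubectl get nodes                         # the node should report STATUS "Ready"
$ kubectl get pods -n kube-system -o wide   # all kube-system pods should be Running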

Step 4:  Deploy RIC using Recipe

Run ...

$ cd dep/bin
$ ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml
$ kubectl get pods -n ricplt   # There should be ~16 pods running in the ricplt namespace.  
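As an optional cross-check (Bronze uses Helm 2, so plain helm talks to Tiller directly), you can also list the Helm releases created by the deploy script and the ricinfra pods; the r4- release prefix below matches the release names seen elsewhere on this page:

$ helm list                      # expect releases prefixed r4-, e.g. r4-infrastructure, r4-e2term, r4-alarmadapter
$ kubectl get pods -n ricinfra   # the tiller-secret-generator pod shows Completed once it has finished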


Step 5:  Onboarding a Test xApp (HelloWorld xApp)

NOTE: If you are using a version of Ubuntu older than 18.04, this section will fail!
Run...

$ cd dep

# Create the file that will contain the URL used to start the on-boarding process...
$ echo '{ "config-file.json_url": "https://gerrit.o-ran-sc.org/r/gitweb?p=ric-app/hw.git;a=blob_plain;f=init/config-file.json;hb=HEAD" }' > onboard.hw.url

# Start on-boarding process...

$ curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download"  --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"


# Verify list of all on-boarded xApps...
$ curl --location --request GET "http://$(hostname):32080/onboard/api/v1/charts"
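If jq is installed on the VM (an assumption; the scripts above do not install it), the returned chart list is easier to read pretty-printed:

$ curl -s --location --request GET "http://$(hostname):32080/onboard/api/v1/charts" | jq .
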
Step 6:  Deploy Test xApp (HelloWorld xApp)

Run..

#  Verify the xApp is not running...  This may take a minute, so re-run the command below as needed.

$ kubectl get pods -n ricxapp


# Call xApp Manager to deploy HelloWorld xApp...

$ curl --location --request POST "http://$(hostname):32080/appmgr/ric/v1/xapps"  --header 'Content-Type: application/json'  --data-raw '{"xappName": "hwxapp"}'


#  Verify xApp is running...

$ kubectl get pods -n ricxapp


#  View logs...

$ kubectl logs -n ricxapp <name of POD retrieved from statement above>
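A small convenience sketch for the same step (it assumes a single hwxapp pod, and the grep/awk filter is only illustrative): capture the pod name and follow its logs:

$ POD=$(kubectl get pods -n ricxapp | grep hwxapp | awk '{print $1}')
$ kubectl logs -f -n ricxapp "$POD"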


Helpful Hints

kubectl commands:

kubectl get pods -n namespace - lists the pods running in the given namespace

kubectl logs -n namespace name_of_running_pod - shows the logs of a running pod
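A few additional commands that are often useful when a pod is not coming up (standard kubectl, added here as extra hints rather than part of the original guide):

kubectl describe pod -n namespace name_of_pod - shows events such as image pull errors and failed probes

kubectl get events -n namespace --sort-by=.metadata.creationTimestamp - lists recent events in the namespace

kubectl logs -f -n namespace name_of_running_pod - follows the log output live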







87 Comments

  1. Thanks for the guidelines. It works well.

  2. Hi experts,

    after deployment, I have only 15 pods in namespace ricplt. Is that normal?

    The "Step 4" says: "kubectl get pods -n ricplt   # There should be ~16 pods running in the ricplt namespace.".

    (18:11 dabs@ricpltbronze bin) > sudo kubectl get pods --all-namespaces
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5644d7b6d9-clvtf 1/1 Running 5 32h
    kube-system coredns-5644d7b6d9-jwxnm 1/1 Running 5 32h
    kube-system etcd-ricpltbronze 1/1 Running 11 32h
    kube-system kube-apiserver-ricpltbronze 1/1 Running 28 32h
    kube-system kube-controller-manager-ricpltbronze 1/1 Running 9 32h
    kube-system kube-flannel-ds-amd64-mrwn2 1/1 Running 16 32h
    kube-system kube-proxy-zrtl8 1/1 Running 6 32h
    kube-system kube-scheduler-ricpltbronze 1/1 Running 8 32h
    kube-system tiller-deploy-68bf6dff8f-wbmwl 1/1 Running 4 32h
    ricinfra deployment-tiller-ricxapp-d4f98ff65-6h4n4 1/1 Running 0 3h13m
    ricinfra tiller-secret-generator-tgkzf 0/1 Completed 0 132m
    ricinfra tiller-secret-generator-zcx72 0/1 Error 0 3h13m
    ricplt deployment-ricplt-a1mediator-66fcf76c66-h6rp2 1/1 Running 0 40m
    ricplt deployment-ricplt-alarmadapter-64d559f769-glb5z 1/1 Running 0 30m
    ricplt deployment-ricplt-appmgr-6fd6664755-2mxjb 1/1 Running 0 42m
    ricplt deployment-ricplt-e2mgr-8479fb5ff8-9zqbp 1/1 Running 0 41m
    ricplt deployment-ricplt-e2term-alpha-bcb457df4-4dz62 1/1 Running 0 40m
    ricplt deployment-ricplt-jaegeradapter-84558d855b-tmqqb 1/1 Running 0 39m
    ricplt deployment-ricplt-o1mediator-d8b9fcdf-f4sgm 1/1 Running 0 34m
    ricplt deployment-ricplt-rtmgr-9d4847788-kf6r4 1/1 Running 10 41m
    ricplt deployment-ricplt-submgr-65dc9f4995-gt5kb 1/1 Running 0 40m
    ricplt deployment-ricplt-vespamgr-7458d9b5d-klh9l 1/1 Running 0 39m
    ricplt deployment-ricplt-xapp-onboarder-546b86b5c4-xkcpt 2/2 Running 0 42m
    ricplt r4-infrastructure-kong-6c7f6db759-7xjqm 2/2 Running 21 3h13m
    ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-jfkdg 2/2 Running 2 3h13m
    ricplt r4-infrastructure-prometheus-server-5fd7695-pprg2 1/1 Running 2 3h13m
    ricplt statefulset-ricplt-dbaas-server-0 1/1 Running 0 43m

    1. Anonymous

      After deployment I'm also getting only 15 pods running. Is it normal or do I need to worry about that?

      1. it's normal, and you are good to go. just try policy management.

  3. Hi,

    Even though deployment of RIC plt did not give any errors, most of the RIC plt and infra pods remain in error state. 

    kubectl get pods --all-namespaces
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5644d7b6d9-dz774 1/1 Running 1 10h
    kube-system coredns-5644d7b6d9-fp586 1/1 Running 1 10h
    kube-system etcd-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system kube-apiserver-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system kube-controller-manager-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system kube-flannel-ds-amd64-b4l97 1/1 Running 1 10h
    kube-system kube-proxy-fxfrk 1/1 Running 1 10h
    kube-system kube-scheduler-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system tiller-deploy-68bf6dff8f-jvtk7 1/1 Running 1 10h
    ricinfra deployment-tiller-ricxapp-d4f98ff65-hcnqf 0/1 ContainerCreating 0 9h
    ricinfra tiller-secret-generator-d7kmk 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-a1mediator-66fcf76c66-l2z7m 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-alarmadapter-64d559f769-7d8fq 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-appmgr-6fd6664755-7lp8q 0/1 Init:ImagePullBackOff 0 9h
    ricplt deployment-ricplt-e2mgr-8479fb5ff8-ggnx8 0/1 ErrImagePull 0 9h
    ricplt deployment-ricplt-e2term-alpha-bcb457df4-dkdbc 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-jaegeradapter-84558d855b-bpzcv 1/1 Running 0 9h
    ricplt deployment-ricplt-o1mediator-d8b9fcdf-5ptcs 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-rtmgr-9d4847788-rvnrx 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-submgr-65dc9f4995-cbhvc 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-vespamgr-7458d9b5d-bkzpg 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-xapp-onboarder-546b86b5c4-g2dnt 1/2 ImagePullBackOff 0 9h
    ricplt r4-infrastructure-kong-6c7f6db759-4czbj 2/2 Running 2 9h
    ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-gwxzz 2/2 Running 0 9h
    ricplt r4-infrastructure-prometheus-server-5fd7695-phcqs 1/1 Running 0 9h
    ricplt statefulset-ricplt-dbaas-server-0 0/1 ImagePullBackOff 0 9h


    any clue?

    1. Anonymous

      The ImagePullBackOff and ErrImagePull errors are all for container images built from O-RAN SC code.  It appears that there is a problem with the docker engine in your installation fetching images from the O-RAN SC docker registry.  Oftentimes this is due to a local firewall blocking such connections.

      You may want to try:  docker pull nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1:2.1.9 to see if your docker engine can retrieve this image. 

      1. Thanks for the information.

        I think you are right; even docker pull nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1:2.1.9 gives a "connection refused" error.

        Although I am checking whether the corporate firewall can be opened up, the strange thing is that docker pull hello-world and other such pulls work fine.

        1. "Connection refused" error does suggest network connection problem.

          These O-RAN SC docker registries use ports 10001 through 10004, in particular 10002 for all released docker images.  They are not on the default docker registry port of 5000.  It is possible that your local firewall has a rule allowing outgoing connections to port 5000, but not to these ports.
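          A quick way to check from the VM whether these registry ports are reachable at all (the /v2/ path is the standard Docker registry API endpoint; any HTTP response means the port is open, while a timeout or "connection refused" points at the firewall):

          curl -v https://nexus3.o-ran-sc.org:10002/v2/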

          1. Anonymous

            Thanks for the explanation. You are right, these ports were blocked on the firewall; it is working fine now.
            Currently I am stuck with opening the O1 dashboard, trying it on the SMO machine.

          2. Thanks for the explanation. You are right, these particular ports were getting blocked by the firewall; I got it working now.

            Currently I am stuck on opening the O1 dashboard, trying it on the SMO machine.

            1. Hi Team,


              Can you please share the link so that I am able to download the docker image manually and then load it. I am also getting "connection refused" but am not able to solve it. Meanwhile, please share the link, as it will help me.


              Br

              Deepak

  4. Anonymous

    Hi all,

    after executing the below POST command, I am not getting any response.

    # Start on-boarding process...

    curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download"  --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"


    any clues/suggestions?

    1. It's probably because port 32080 is already occupied by kube-proxy.

      You can either try using the direct IP address, or use port-forwarding as a workaround. Please refer to my post: https://blog.csdn.net/jeffyko/article/details/107426626
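      For reference, a minimal sketch of the port-forwarding workaround using the kong proxy service (service name taken from the "kubectl get services -n ricplt" output further down this page; the command must stay running while you issue the curl requests):

      kubectl port-forward -n ricplt service/r4-infrastructure-kong-proxy 32080:32080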

      1. Anonymous

        Thanks a lot, it resolved the issue and we are able to proceed further,

        After executing the below command we are seeing the below logs. Is this the correct behavior?? In case this is the error, how to resolve this??

        $ kubectl logs -n ricxapp <name of POD retrieved from statement above>

        Error from server (BadRequest): container "hwxapp" in pod "ricxapp-hwxapp-684d8d675b-xz99d" is waiting to start: ContainerCreating

        1. Make sure the hwxapp pod is running. It needs to pull the 'hwxapp' docker image, which may take some time.

          kubectl get pods -n ricxapp | grep hwxapp

      2. Anonymous

        Hi Zhengwei, I have tried the solution in your blog, but I still receive the same error: AttributeError:  'ConnectionError' object has no attribute 'message'. Do you have any clues? Thanks.

      3. EDIT: Never mind, silly me.  For anyone wondering, the port-forwarding command needs to keep running for the forwarding to stay active.  So I just run it in a screen tab to keep it running there, and run the curl commands in a different tab.

        Hi, I've tried this workaround but the port forwarding command just hangs and never completes.  Anyone experiencing the same issue? Kube cluster and pods seem healthy:

        root@ubuntu1804:~# kubectl -v=4 port-forward r4-infrastructure-kong-6c7f6db759-kkjtt 32088:32080 -n ricplt

        Forwarding from 127.0.0.1:32088 -> 32080

        (hangs after the previous message)



  5. Anonymous

    In my case, the script only works with helm 2.x. how about others? 

  6. THANK You! This is great!

  7. Anonymous

    Hi all,

    I used the source code to build the hw_unit_test container, and when I execute hw_unit_test it stops and returns "SHAREDDATALAYER_ABORT, file src/redis/asynchirediscommanddispatcher.cpp, line 206: Required Redis module extension commands not available.
    Aborted (core dumped)".

    Thus, I can't test my hwxapp.

    any clues/suggestions? Thanks!

      

  8. Anonymous

    Hi all,

    There is a pod crash every time. When I run the following command, I got one crashed pod. Did you meet the same case? Thanks so much for giving me some suggestions.

    root@chuanhao-HP-EliteDesk-800-G4-SFF:~/dep/bin# kubectl get pods -n ricplt

    NAME READY STATUS RESTARTS AGE

    deployment-ricplt-a1mediator-66fcf76c66-f6kbh 1/1 Running 1 2m16s

    deployment-ricplt-alarmadapter-64d559f769-twfk7 1/1 Running 0 46s

    deployment-ricplt-appmgr-6fd6664755-7rs4g 1/1 Running 0 3m49s

    deployment-ricplt-e2mgr-8479fb5ff8-j9nzf 1/1 Running 0 3m

    deployment-ricplt-e2term-alpha-bcb457df4-r22nb 1/1 Running 0 2m39s

    deployment-ricplt-jaegeradapter-84558d855b-xfgd5 1/1 Running 0 78s

    deployment-ricplt-o1mediator-d8b9fcdf-tpz7v 1/1 Running 0 64s

    deployment-ricplt-rtmgr-9d4847788-scrxf 1/1 Running 1 3m26s

    deployment-ricplt-submgr-65dc9f4995-knzjd 1/1 Running 0 113s

    deployment-ricplt-vespamgr-7458d9b5d-mdmjx 1/1 Running 0 96s

    deployment-ricplt-xapp-onboarder-546b86b5c4-z2qd6 2/2 Running 0 4m16s

    r4-infrastructure-kong-6c7f6db759-44wdx 1/2 CrashLoopBackOff 6 4m52s

    r4-infrastructure-prometheus-alertmanager-75dff54776-qlp4g 2/2 Running 0 4m52s

    r4-infrastructure-prometheus-server-5fd7695-lr6z7 1/1 Running 0 4m52s

    statefulset-ricplt-dbaas-server-0 1/1 Running 0 4m33s


  9. Hi All,

    I am trying to deploy the RIC, and at Step 4, while running the command "./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml", I am getting the error below:

    root@ubuntu:~/dep/bin# ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml
    Error: unknown command "home" for "helm"
    Run 'helm --help' for usage.
    Successfully packaged chart and saved it to: /tmp/ric-common-3.3.2.tgz
    cp: cannot create regular file '/repository/local/': No such file or directory
    Successfully packaged chart and saved it to: /tmp/aux-common-3.0.0.tgz
    cp: cannot create regular file '/repository/local/': No such file or directory
    Successfully packaged chart and saved it to: /tmp/nonrtric-common-2.0.0.tgz
    cp: cannot create regular file '/repository/local/': No such file or directory
    Error: open /repository/local/index.yaml822716896: no such file or directory
    Error: no repositories configured
    Error: looks like "http://127.0.0.1:8879/charts" is not a valid chart repository or cannot be reached: Get "http://127.0.0.1:8879/charts/index.yaml": dial tcp 127.0.0.1:8879: connect: connection refused.

    I am unable to resolve this issue. Could any one please help to resolve this.

    Thanks,

    1. I had a different error, but maybe this helps: I updated the helm version to 2.17.0 in the example_recipe.yaml file.

      I had this error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached:

    2. Try this, it might help:

      helm init --client-only --skip-refresh
      helm repo rm stable
      helm repo add stable https://charts.helm.sh/
      1. Anonymous

        Problem still exists

      2. Anonymous

        It is working, thanks!

      3. Anonymous

        Correct repo add link:

        helm repo add stable https://charts.helm.sh/stable
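        Putting the suggestions in this sub-thread together, a possible sequence (Helm 2 syntax; whether the "rm" is needed depends on what is already configured):

        helm init --client-only --skip-refresh
        helm repo rm stable
        helm repo add stable https://charts.helm.sh/stable
        helm repo update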
  10. Hey all,

    I am trying to deploy the RIC in its Cherry release version.

    But i am facing some issues:

    a) 

    The r4-alarmadapter isn't getting released

    End of deploy-ric-platform script
    Error: release r4-alarmadapter failed: configmaps "configmap-ricplt-alarmadapter-appconfig" already exists
    root@max-near-rt-ric-cherry:/home/ubuntu/dep/bin# helm ls --all r4-alarmadapter
    + helm ls --all r4-alarmadapter
    NAME   REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
    r4-alarmadapter  1 Mon Jan 11 09:14:08 2021 FAILED alarmadapter-3.0.0 1.0 ricplt

    b)

    The pods aren't getting to their Running state. This might be connected to issue a) but I am not sure.

    kubectl get pods --all-namespaces:

    NAMESPACE     NAME                                                         READY   STATUS              RESTARTS   AGE
    kube-system   coredns-5644d7b6d9-794qq                                     1/1     Running             1          29m
    kube-system   coredns-5644d7b6d9-ph6tt                                     1/1     Running             1          29m
    kube-system   etcd-max-near-rt-ric-cherry                                  1/1     Running             1          28m
    kube-system   kube-apiserver-max-near-rt-ric-cherry                        1/1     Running             1          28m
    kube-system   kube-controller-manager-max-near-rt-ric-cherry               1/1     Running             1          28m
    kube-system   kube-flannel-ds-ljz7w                                        1/1     Running             1          29m
    kube-system   kube-proxy-cdkf4                                             1/1     Running             1          29m
    kube-system   kube-scheduler-max-near-rt-ric-cherry                        1/1     Running             1          28m
    kube-system   tiller-deploy-68bf6dff8f-xfkwd                               1/1     Running             1          27m
    ricinfra      deployment-tiller-ricxapp-d4f98ff65-wwbhx                    0/1     ContainerCreating   0          25m
    ricinfra      tiller-secret-generator-2nsb2                                0/1     ContainerCreating   0          25m
    ricplt        deployment-ricplt-a1mediator-66fcf76c66-njphv                0/1     ContainerCreating   0          23m
    ricplt        deployment-ricplt-appmgr-6fd6664755-r577d                    0/1     Init:0/1            0          24m
    ricplt        deployment-ricplt-e2mgr-6dfb6c4988-tb26k                     0/1     ContainerCreating   0          24m
    ricplt        deployment-ricplt-e2term-alpha-64965c46c6-5d59x              0/1     ContainerCreating   0          23m
    ricplt        deployment-ricplt-jaegeradapter-76ddbf9c9-fw4sh              0/1     ContainerCreating   0          22m
    ricplt        deployment-ricplt-o1mediator-d8b9fcdf-86qgg                  0/1     ContainerCreating   0          22m
    ricplt        deployment-ricplt-rtmgr-6d559897d8-jvbsb                     0/1     ContainerCreating   0          24m
    ricplt        deployment-ricplt-submgr-65bcd95469-nc8pq                    0/1     ContainerCreating   0          23m
    ricplt        deployment-ricplt-vespamgr-7458d9b5d-xlt8m                   0/1     ContainerCreating   0          22m
    ricplt        deployment-ricplt-xapp-onboarder-5958856fc8-kw9jx            0/2     ContainerCreating   0          25m
    ricplt        r4-infrastructure-kong-6c7f6db759-q5psw                      0/2     ContainerCreating   0          25m
    ricplt        r4-infrastructure-prometheus-alertmanager-75dff54776-mb8hn   0/2     ContainerCreating   0          25m
    ricplt        r4-infrastructure-prometheus-server-5fd7695-bvk74            1/1     Running             0          25m
    ricplt        statefulset-ricplt-dbaas-server-0                            0/1     ContainerCreating   0          25m

    Thanks in advance!
    1. Hello, Max

      I had the same issue here. I solved it after restarting the VM; it seems I had not uncommented the RIC entry in infra.rc under dep/tools/k8s/etc,

      and the ONAP pods are still running.

      Thanks,

  11. Hi All,

    I am trying to install rmr_nng library as a requirement of ric-app-kpimon  xapp.

    https://github.com/o-ran-sc/ric-plt-lib-rmr

    https://github.com/nanomsg/nng

    but getting below error(snippet):

    -- Installing: /root/ric-plt-lib-rmr/.build/lib/cmake/nng/nng-config-version.cmake
    -- Installing: /root/ric-plt-lib-rmr/.build/bin/nngcat
    -- Set runtime path of "/root/ric-plt-lib-rmr/.build/bin/nngcat" to "/root/ric-plt-lib-rmr/.build/lib"
    [ 40%] No test step for 'ext_nng'
    [ 41%] Completed 'ext_nng'
    [ 41%] Built target ext_nng
    Scanning dependencies of target nng_objects
    [ 43%] Building C object src/rmr/nng/CMakeFiles/nng_objects.dir/src/rmr_nng.c.o
    In file included from /root/ric-plt-lib-rmr/src/rmr/nng/src/rmr_nng.c:70:0:
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c: In function ‘roll_tables’:
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c:406:28: error: ‘uta_ctx_t {aka struct uta_ctx}’ has no member named ‘rtgate’; did you mean ‘rtable’?
    pthread_mutex_lock( ctx->rtgate ); // must hold lock to move to active
    ^~~~~~
    rtable
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c:409:30: error: ‘uta_ctx_t {aka struct uta_ctx}’ has no member named ‘rtgate’; did you mean ‘rtable’?
    pthread_mutex_unlock( ctx->rtgate );
    ^~~~~~
    rtable
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c: In function ‘parse_rt_rec’:
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c:858:12: error: ‘uta_ctx_t {aka struct uta_ctx}’ has no member named ‘rtable_ready’; did you mean ‘rmr_ready’?
    ctx->rtable_ready = 1; // route based sends can now happen
    ^~~~~~~~~~~~
    rmr_ready

    ...

    ...

    I am unable to resolve this issue. Could any one please help to resolve this.

    Thanks,


  12. Hi All, 

    I am trying to get this up and running (first time ever).  

    For anyone interested, just wanted to highlight that if Step 4 fails with the following:

    root@ubuntu1804:~/dep/bin# ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml
    Successfully packaged chart and saved it to: /tmp/ric-common-3.3.2.tgz
    Successfully packaged chart and saved it to: /tmp/aux-common-3.0.0.tgz
    Successfully packaged chart and saved it to: /tmp/nonrtric-common-2.0.0.tgz
    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
    ****************************************************************************************************************
                                                         ERROR                                                      
    ****************************************************************************************************************
    Can't locate the ric-common helm package in the local repo. Please make sure that it is properly installed.
    ****************************************************************************************************************

    You just need to initialize Helm repositories with the following (disregard the suggestion in the above error output as it's deprecated from Nov 2020):

    helm init --stable-repo-url=https://charts.helm.sh/stable --client-only 

    and then run Step 4.
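    To confirm the client-side repo configuration took effect before re-running Step 4 (an optional check), list the configured repos; with Helm 2 this typically shows the "stable" and "local" entries:

    helm repo list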



  13. Anonymous

    Hi,

    We are trying to compile the pods on Arm and test them on an Arm-based platform. We are failing for xapp-onboarder, ric-plt-e, ric-plt-alarmadapter. Has anyone tried running the O-RAN RIC on Arm? Could someone point us to the recipe to build the images from source?

  14. Hi,

    I'm in the process of porting the near realtime RIC to ARM. The process has been straightforward up until now. Does anyone have instructions on how all the individual components are built, or where they can be sourced outside of the prebuilt images? I have been able to build many items needed to complete the build but am still having issues.

    images built:
    ric-plt-rtmgr:0.6.3
    it-dep-init:0.0.1
    ric-plt-rtmgr:0.6.3
    ric-plt-appmgr:0.4.3
    bldr-alpine3-go:2.0.0
    bldr-ubuntu18-c-go:1.9.0
    bldr-ubuntu18-c-go:9-u18.04
    ric-plt-a1:2.1.9
    ric-plt-dbaas:0.2.2
    ric-plt-e2mgr:5.0.8
    ric-plt-submgr:0.4.3
    ric-plt-vespamgr:0.4.0
    ric-plt-o1:0.4.4
    bldr-alpine3:12-a3.11
    bldr-alpine3-mdclog:0.0.4
    bldr-alpine3-rmr:4.0.5
    
    
    images needed:
    xapp-onboarder:1.0.7
    ric-plt-e2:5.0.8
    ric-plt-alarmadapter:0.4.5
    
    

    Please keep in mind I'm not sure if i built these all correctly since I have not found any instructions. The latest outputs of my effort can be seen here:

    NAME                                                         READY   STATUS                       RESTARTS   AGE
    deployment-ricplt-a1mediator-cb47dc85d-7cr9b                 0/1     CreateContainerConfigError   0          76s
    deployment-ricplt-e2term-alpha-8665fc56f6-m94ql              0/1     ImagePullBackOff             0          92s
    deployment-ricplt-jaegeradapter-76ddbf9c9-hg5pr              0/1     Error                        2          26s
    deployment-ricplt-o1mediator-86587dd94f-gbtvj                0/1     CreateContainerConfigError   0          9s
    deployment-ricplt-rtmgr-67c9bdccf6-g4nd6                     0/1     CreateContainerConfigError   0          63m
    deployment-ricplt-submgr-6ffd499fd5-6txqb                    0/1     CreateContainerConfigError   0          59s
    deployment-ricplt-vespamgr-68b68b78db-5m579                  1/1     Running                      0          43s
    deployment-ricplt-xapp-onboarder-579967799d-bp74x            0/2     ImagePullBackOff             17         63m
    r4-infrastructure-kong-6c7f6db759-4vlms                      0/2     CrashLoopBackOff             17         64m
    r4-infrastructure-prometheus-alertmanager-75dff54776-cq7fl   2/2     Running                      0          64m
    r4-infrastructure-prometheus-server-5fd7695-5jrwf            1/1     Running                      0          64m

    Any help would be great. My end goal is to push these changes upstream and to provide CI/CD for ARM images. 

    1. NAMESPACE     NAME                                                         READY   STATUS      RESTARTS   AGE
      kube-system   coredns-5644d7b6d9-6nwr8                                     1/1     Running     14         20d
      kube-system   coredns-5644d7b6d9-8lc5c                                     1/1     Running     14         20d
      kube-system   etcd-ip-10-0-0-199                                           1/1     Running     14         20d
      kube-system   kube-apiserver-ip-10-0-0-199                                 1/1     Running     14         20d
      kube-system   kube-controller-manager-ip-10-0-0-199                        1/1     Running     14         20d
      kube-system   kube-flannel-ds-4w4jh                                        1/1     Running     14         20d
      kube-system   kube-proxy-955gz                                             1/1     Running     14         20d
      kube-system   kube-scheduler-ip-10-0-0-199                                 1/1     Running     14         20d
      kube-system   tiller-deploy-7d5568dd96-ttbg5                               1/1     Running     1          20h
      ricinfra      deployment-tiller-ricxapp-6b6b4c787-w2nnw                    1/1     Running     0          2m40s
      ricinfra      tiller-secret-generator-c7z86                                0/1     Completed   0          2m40s
      ricplt        deployment-ricplt-e2term-alpha-8665fc56f6-q8cgq              0/1     Running     1          72s
      ricplt        deployment-ricplt-jaegeradapter-85cbdfbfbc-xlxt2             1/1     Running     0          12s
      ricplt        deployment-ricplt-rtmgr-67c9bdccf6-c9fck                     1/1     Running     0          101s
      ricplt        deployment-ricplt-submgr-6ffd499fd5-pmx4m                    1/1     Running     0          43s
      ricplt        deployment-ricplt-vespamgr-68b68b78db-4skbw                  1/1     Running     0          28s
      ricplt        deployment-ricplt-xapp-onboarder-57f78cfdf-xrtss             2/2     Running     0          2m10s
      ricplt        r4-infrastructure-kong-84cd44455-6pgxh                       2/2     Running     1          2m40s
      ricplt        r4-infrastructure-prometheus-alertmanager-75dff54776-fl4xh   2/2     Running     0          2m40s
      ricplt        r4-infrastructure-prometheus-server-5fd7695-jv8s4            1/1     Running     0          2m40s

      I have been able to get all services up except for e2term-alpha, I am getting an error:

      environments service name is SERVICE_RICPLT_E2TERM_RMR_ALPHA_SERVICE_HOST
      service ip is 10.X.X.95
      nano=38000
      loglevel=error
      volume=log
      #The key name of the environment holds the local ip address
      #ip address of the E2T in the RMR
      local-ip=10.X.X.95
      #prometheus mode can be pull or push
      prometheusMode=pull
      #timeout can be from 5 seconds to 300 seconds default is 10
      prometheusPushTimeOut=10
      prometheusPushAddr=127.0.0.1:7676
      prometheusPort=8088
      #trace is start, stop
      trace=stop
      external-fqdn=10.X.X.95
      #put pointer to the key that point to pod name
      pod_name=E2TERM_POD_NAME
      sctp-port=36422
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = nano=38000 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = nano value = 38000"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = loglevel=error "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = loglevel value = error"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = volume=log "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = volume value = log"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = local-ip=10.107.0.95 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = local-ip value = 10.107.0.95"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusMode=pull "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusMode value = pull"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPushTimeOut=10 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPushTimeOut value = 10"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPushAddr=127.0.0.1:7676 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPushAddr value = 127.0.0.1:7676"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPort=8088 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPort value = 8088"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = trace=stop "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = trace value = stop"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = external-fqdn=10.107.0.95 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = external-fqdn value = 10.107.0.95"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = pod_name=E2TERM_POD_NAME "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = pod_name value = E2TERM_POD_NAME"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = sctp-port=36422 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = sctp-port value = 36422"}
      1613156066 22/RMR [INFO] ric message routing library on SI95/g mv=3 flg=00 (84423e6 4.4.6 built: Feb 11 2021)
      e2: malloc.c:2401: sysmalloc: Assertion `(old_top == initial_top (av) && old_size == 0) || ((unsigned long) (old_size) >= MINSIZE && prev_inuse (old_top) && ((unsigned long) old_end & (pagesize - 1)) == 0)' failed.


      I'm not sure what to do next.

      1. Anonymous

        Did you get this resolved? I too have the same problem.

        By the way, I think "tiller-secret-generator-c7z86" is completed but not running. There seems to be no error with e2-term-alpha.

        1. I worked with Scott Daniels on the RMR library. Changes he made to a race condition fixed this. I'm using version 4.6.1, no additional fixes were needed.

  15. Hi all,

    I deployed the RIC and the SMO in two different VMs and I cannot manage to establish a connection between them to validate the A1 workflows. I suspect the reason is that the RIC VM is not exposing its services to the outside world. Could anyone help me address the issue? For example I would appreciate that if you have a setup like mine you could share the "example_recipe.yaml" you used to deploy the ric platform.

    I paste here the outcomes of the services running in ricplt namespace, where in all services it says EXTERNAL_IP = <none>. Please let me know if this is the expected behavior

    # kubectl get services -n ricplt
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    aux-entry ClusterIP 10.99.120.87 <none> 80/TCP,443/TCP 3d6h
    r4-infrastructure-kong-proxy NodePort 10.100.225.53 <none> 32080:32080/TCP,32443:32443/TCP 3d6h
    r4-infrastructure-prometheus-alertmanager ClusterIP 10.105.181.117 <none> 80/TCP 3d6h
    r4-infrastructure-prometheus-server ClusterIP 10.108.80.71 <none> 80/TCP 3d6h
    service-ricplt-a1mediator-http ClusterIP 10.111.176.180 <none> 10000/TCP 3d6h
    service-ricplt-a1mediator-rmr ClusterIP 10.105.1.73 <none> 4561/TCP,4562/TCP 3d6h
    service-ricplt-alarmadapter-http ClusterIP 10.104.93.39 <none> 8080/TCP 3d6h
    service-ricplt-alarmadapter-rmr ClusterIP 10.99.249.253 <none> 4560/TCP,4561/TCP 3d6h
    service-ricplt-appmgr-http ClusterIP 10.101.144.152 <none> 8080/TCP 3d6h
    service-ricplt-appmgr-rmr ClusterIP 10.110.71.188 <none> 4561/TCP,4560/TCP 3d6h
    service-ricplt-dbaas-tcp ClusterIP None <none> 6379/TCP 3d6h
    service-ricplt-e2mgr-http ClusterIP 10.103.232.14 <none> 3800/TCP 3d6h
    service-ricplt-e2mgr-rmr ClusterIP 10.105.41.119 <none> 4561/TCP,3801/TCP 3d6h
    service-ricplt-e2term-rmr-alpha ClusterIP 10.110.201.91 <none> 4561/TCP,38000/TCP 3d6h
    service-ricplt-e2term-sctp-alpha NodePort 10.106.243.219 <none> 36422:32222/SCTP 3d6h
    service-ricplt-jaegeradapter-agent ClusterIP 10.102.5.160 <none> 5775/UDP,6831/UDP,6832/UDP 3d6h
    service-ricplt-jaegeradapter-collector ClusterIP 10.99.78.42 <none> 14267/TCP,14268/TCP,9411/TCP 3d6h
    service-ricplt-jaegeradapter-query ClusterIP 10.100.166.185 <none> 16686/TCP 3d6h
    service-ricplt-o1mediator-http ClusterIP 10.101.145.78 <none> 9001/TCP,8080/TCP,3000/TCP 3d6h
    service-ricplt-o1mediator-tcp-netconf NodePort 10.110.230.66 <none> 830:30830/TCP 3d6h
    service-ricplt-rtmgr-http ClusterIP 10.104.100.69 <none> 3800/TCP 3d6h
    service-ricplt-rtmgr-rmr ClusterIP 10.98.40.180 <none> 4561/TCP,4560/TCP 3d6h
    service-ricplt-submgr-http ClusterIP None <none> 3800/TCP 3d6h
    service-ricplt-submgr-rmr ClusterIP None <none> 4560/TCP,4561/TCP 3d6h
    service-ricplt-vespamgr-http ClusterIP 10.102.12.213 <none> 8080/TCP,9095/TCP 3d6h
    service-ricplt-xapp-onboarder-http ClusterIP 10.107.167.33 <none> 8888/TCP,8080/TCP 3d6h


    This is the detail of the kong service in the RIC VM:

    # kubectl describe service r4-infrastructure-kong-proxy -n ricplt
    Name: r4-infrastructure-kong-proxy
    Namespace: ricplt
    Labels: app.kubernetes.io/instance=r4-infrastructure
    app.kubernetes.io/managed-by=Tiller
    app.kubernetes.io/name=kong
    app.kubernetes.io/version=1.4
    helm.sh/chart=kong-0.36.6
    Annotations: <none>
    Selector: app.kubernetes.io/component=app,app.kubernetes.io/instance=r4-infrastructure,app.kubernetes.io/name=kong
    Type: NodePort
    IP: 10.100.225.53
    Port: kong-proxy 32080/TCP
    TargetPort: 32080/TCP
    NodePort: kong-proxy 32080/TCP
    Endpoints: 10.244.0.26:32080
    Port: kong-proxy-tls 32443/TCP
    TargetPort: 32443/TCP
    NodePort: kong-proxy-tls 32443/TCP
    Endpoints: 10.244.0.26:32443
    Session Affinity: None
    External Traffic Policy: Cluster
    Events: <none>


    BR


    Daniel

  16. Deploy RIC using Recipe failed:

    When running './deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml' it failed with the following message:

    Successfully packaged chart and saved it to: /tmp/ric-common-3.3.2.tgz

    Successfully packaged chart and saved it to: /tmp/aux-common-3.0.0.tgz

    Successfully packaged chart and saved it to: /tmp/nonrtric-common-2.0.0.tgz

    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).

    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)

    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).

    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)

    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).

    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)

    ****************************************************************************************************************

                                                         ERROR                                                      

    ****************************************************************************************************************

    Can't locate the ric-common helm package in the local repo. Please make sure that it is properly installed.

    ****************************************************************************************************************

    Running `helm init` as suggested doesn't help. Any idea?

  17. failures on two services: ricplt-e2mgr and ricplt-a1mediator.

    NAMESPACE     NAME                                                         READY   STATUS             RESTARTS   AGE
    kube-system   coredns-5644d7b6d9-6mk6z                                     1/1     Running            5          7d21h
    kube-system   coredns-5644d7b6d9-wxtqf                                     1/1     Running            5          7d21h
    kube-system   etcd-kubernetes-master                                       1/1     Running            5          7d21h
    kube-system   kube-apiserver-kubernetes-master                             1/1     Running            5          7d21h
    kube-system   kube-controller-manager-kubernetes-master                    1/1     Running            5          7d21h
    kube-system   kube-flannel-ds-gb67g                                        1/1     Running            5          7d21h
    kube-system   kube-proxy-hn2sq                                             1/1     Running            5          7d21h
    kube-system   kube-scheduler-kubernetes-master                             1/1     Running            5          7d21h
    kube-system   tiller-deploy-68bf6dff8f-4hlm9                               1/1     Running            9          7d21h
    ricinfra      deployment-tiller-ricxapp-6b6b4c787-ssvts                    1/1     Running            2          2d21h
    ricinfra      tiller-secret-generator-lgpxt                                0/1     Completed          0          2d21h
    ricplt        deployment-ricplt-a1mediator-cb47dc85d-nq4kc                 0/1     CrashLoopBackOff   21         66m
    ricplt        deployment-ricplt-appmgr-5fbcf5c7f7-dsf9q                    1/1     Running            1          2d1h
    ricplt        deployment-ricplt-e2mgr-7dbfbbb796-2nvgz                     0/1     CrashLoopBackOff   6          8m31s
    ricplt        deployment-ricplt-e2term-alpha-8665fc56f6-lcfwr              1/1     Running            4          2d21h
    ricplt        deployment-ricplt-jaegeradapter-85cbdfbfbc-ln7sk             1/1     Running            2          2d21h
    ricplt        deployment-ricplt-o1mediator-86587dd94f-hbs47                1/1     Running            1          2d1h
    ricplt        deployment-ricplt-rtmgr-67c9bdccf6-ckm9s                     1/1     Running            2          2d21h
    ricplt        deployment-ricplt-submgr-6ffd499fd5-z7x8l                    1/1     Running            2          2d21h
    ricplt        deployment-ricplt-vespamgr-68b68b78db-f6mw8                  1/1     Running            2          2d21h
    ricplt        deployment-ricplt-xapp-onboarder-57f78cfdf-4787b             2/2     Running            5          2d21h
    ricplt        r4-infrastructure-kong-84cd44455-rfnq4                       2/2     Running            7          2d21h
    ricplt        r4-infrastructure-prometheus-alertmanager-75dff54776-qzgnq   2/2     Running            4          2d21h
    ricplt        r4-infrastructure-prometheus-server-5fd7695-lsm8r            1/1     Running            2          2d21h

    deployment-ricplt-e2mgr errors:

    {"crit":"INFO","ts":1616189522367,"id":"E2Manager","msg":"#app.main - Configuration {logging.logLevel: info, http.port: 3800, rmr: { port: 3801, maxMsgSize: 65536}, routingManager.baseUrl: http://service-ricplt-rtmgr-http:3800/ric/v1/handles/, notificationResponseBuffer: 100, bigRedButtonTimeoutSec: 5, maxRnibConnectionAttempts: 3, rnibRetryIntervalMs: 10, keepAliveResponseTimeoutMs: 360000, keepAliveDelayMs: 120000, e2tInstanceDeletionTimeoutMs: 0, globalRicId: { ricId: AACCE, mcc: 310, mnc: 411}, rnibWriter: { stateChangeMessageChannel: RAN_CONNECTION_STATUS_CHANGE, ranManipulationChannel: RAN_MANIPULATION}","mdc":{"time":"2021-03-19 21:32:02.367"}}
    dial tcp 127.0.0.1:6379: connect: connection refused
    {"crit":"ERROR","ts":1616189522421,"id":"E2Manager","msg":"#app.main - Failed setting GENERAL key","mdc":{"time":"2021-03-19 21:32:02.421"}}
    

    deployment-ricplt-a1mediator errors:

    {"ts": 1616189735092, "crit": "DEBUG", "id": "a1.run", "mdc": {}, "msg": "A1Mediator starts"}
    {"ts": 1616189735092, "crit": "DEBUG", "id": "a1.run", "mdc": {}, "msg": "Starting RMR thread with RMR_RTG_SVC 4561, RMR_SEED_RT /opt/route/local.rt"}
    {"ts": 1616189735092, "crit": "DEBUG", "id": "a1.run", "mdc": {}, "msg": "RMR initialization must complete before webserver can start"}
    {"ts": 1616189735092, "crit": "DEBUG", "id": "a1.a1rmr", "mdc": {}, "msg": "Waiting for rmr to initialize.."}
    1616189735 1/RMR [INFO] ric message routing library on SI95 p=4562 mv=3 flg=02 (fd4477a 4.5.2 built: Mar 19 2021)
    {"ts": 1616189735593, "crit": "DEBUG", "id": "a1.a1rmr", "mdc": {}, "msg": "Work loop starting"}{"ts": 1616189735593, "crit": "DEBUG", "id": "a1.run", "mdc": {}, "msg": "RMR initialization complete"}{"ts": 1616189735594, "crit": "DEBUG", "id": "a1.run", "mdc": {}, "msg": "Starting gevent webserver on port 10000"}
    1616189736 1/RMR [INFO] sends: ts=1616189736 src=service-ricplt-a1mediator-rmr.ricplt:4562 target=service-ricxapp-admctrl-rmr.ricxapp:4563 open=0 succ=0 fail=0 (hard=0 soft=0)
    [2021-03-19 21:35:42,651] ERROR in app: Exception on /a1-p/healthcheck [GET]
    Traceback (most recent call last):
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/connection.py", line 1185, in get_connection
        connection = self._available_connections.pop()
    IndexError: pop from empty listDuring handling of the above exception, another exception occurred:Traceback (most recent call last):
      File "/home/a1user/.local/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
        response = self.full_dispatch_request()
      File "/home/a1user/.local/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
        rv = self.handle_user_exception(e)
      File "/home/a1user/.local/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
        reraise(exc_type, exc_value, tb)
      File "/home/a1user/.local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
        raise value
      File "/home/a1user/.local/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
        rv = self.dispatch_request()
      File "/home/a1user/.local/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
        return self.view_functions[rule.endpoint](**req.view_args)
      File "/home/a1user/.local/lib/python3.8/site-packages/connexion/decorators/decorator.py", line 48, in wrapper
        response = function(request)
      File "/home/a1user/.local/lib/python3.8/site-packages/connexion/decorators/uri_parsing.py", line 144, in wrapper
        response = function(request)
      File "/home/a1user/.local/lib/python3.8/site-packages/connexion/decorators/parameter.py", line 121, in wrapper
        return function(**kwargs)
      File "/home/a1user/.local/lib/python3.8/site-packages/a1/controller.py", line 80, in get_healthcheck
        if not data.SDL.healthcheck():
      File "/home/a1user/.local/lib/python3.8/site-packages/ricxappframe/xapp_sdl.py", line 655, in healthcheck
        return self._sdl.is_active()
      File "/home/a1user/.local/lib/python3.8/site-packages/ricsdl/syncstorage.py", line 138, in is_active
        return self.__dbbackend.is_connected()
      File "/home/a1user/.local/lib/python3.8/site-packages/ricsdl/backend/redis.py", line 181, in is_connected
        return self.__redis.ping()
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/client.py", line 1378, in ping
        return self.execute_command('PING')
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/client.py", line 898, in execute_command
        conn = self.connection or pool.get_connection(command_name, **options)
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/connection.py", line 1187, in get_connection
        connection = self.make_connection()
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/connection.py", line 1227, in make_connection
        return self.connection_class(**self.connection_kwargs)
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/connection.py", line 509, in __init__
        self.port = int(port)
    TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
    ::ffff:10.244.0.1 - - [2021-03-19 21:35:42] "GET /a1-p/healthcheck HTTP/1.1" 500 387 0.005291
    [2021-03-19 21:35:43,474] ERROR in app: Exception on /a1-p/healthcheck [GET]
    Traceback (most recent call last):
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/connection.py", line 1185, in get_connection
        connection = self._available_connections.pop()
    IndexError: pop from empty list
    


    Looks like redis is missing, which service starts or runs this?

    1. dbaas is failing; these are the logs I see. I built the dbaas docker image as follows within the ric-plt-dbaas git repo:

      docker build --network=host -f docker/Dockerfile.redis -t ric-plt-dbaas:0.2.3 . 

      logs:

      284:C 23 Mar 2021 20:02:33.612 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
      284:C 23 Mar 2021 20:02:33.612 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=284, just started
      284:C 23 Mar 2021 20:02:33.612 # Configuration loaded
      284:M 23 Mar 2021 20:02:33.613 * Running mode=standalone, port=6379.
      284:M 23 Mar 2021 20:02:33.613 # Server initialized
      284:M 23 Mar 2021 20:02:33.613 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
      284:M 23 Mar 2021 20:02:33.613 * Module 'exstrings' loaded from /usr/local/libexec/redismodule/libredismodule.so
      284:M 23 Mar 2021 20:02:33.614 * Ready to accept connections
      284:signal-handler (1616529780) Received SIGTERM scheduling shutdown...
      284:M 23 Mar 2021 20:03:00.374 # User requested shutdown...
      284:M 23 Mar 2021 20:03:00.374 # Redis is now ready to exit, bye bye...


      I followed the ci-management yaml file for build arguments.

      Any ideas?

      1. ric-plt-dbaas:0.2.3 contains busybox with the timeout command. The -t option is not available in the arm version. Removing that option from the probes fixes this issue.
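        If you want to confirm what the probes actually execute before changing them, one way (the pod name is taken from earlier listings on this page; adjust to your deployment) is:

        kubectl describe pod statefulset-ricplt-dbaas-server-0 -n ricplt | grep -E -A2 'Liveness|Readiness'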

  18. Anonymous

    Hello Everyone, 

    I am trying to install the cherry release on my VM. I am able to get all the containers running except for alarm manager.

    This is the error I get after running ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml

    Deleting outdated charts
    Error: render error in "alarmmanager/templates/serviceaccount.yaml": template: alarmmanager/templates/serviceaccount.yaml:20:11: executing "alarmmanager/templates/serviceaccount.yaml" at <include "common.serv...>: error calling include: template: no template "common.serviceaccountname.alarmmanager" associated with template "gotpl"


    When I run kubectl get pods --all-namespaces I get the following output


    NAMESPACE     NAME                                         READY       STATUS     RESTARTS     AGE
    kube-system     coredns-5644d7b6d9-4g4hg       1/1          Running       1               7h49m
    kube-system     coredns-5644d7b6d9-kcp7l         1/1          Running       1               7h49m
    kube-system     etcd-machine2                             1/1           Running       1              7h48m
    kube-system     kube-apiserver-machine2            1/1           Running       1              7h48m
    kube-system     kube-controller-manager-machine2 1/1     Running       1               7h49m
    kube-system     kube-flannel-ds-jtpk4                  1/1           Running       1               7h49m
    kube-system     kube-proxy-lk78t                         1/1           Running       1                7h49m
    kube-system     kube-scheduler-machine2           1/1           Running       1                7h48m
    kube-system     tiller-deploy-68bf6dff8f-n945d    1/1          Running        1                7h48m
    ricinfra               deployment-tiller-ricxapp-d4f98ff65-5bwxw     1/1     Running        0 20m
    ricinfra               tiller-secret-generator-nvpnk      0/1         Completed                       0 20m
    ricplt                 deployment-ricplt-a1mediator-66fcf76c66-wc5l8 1/1 Running          0 18m
    ricplt                 deployment-ricplt-appmgr-6fd6664755-m5tn9    1/1 Running          0 19m
    ricplt                 deployment-ricplt-e2mgr-6dfb6c4988-gwslx        1/1 Running          0 19m
    ricplt                 deployment-ricplt-e2term-alpha-7c8dc7bd94-5jwld 1/1 Running      0 18m
    ricplt                 deployment-ricplt-jaegeradapter-76ddbf9c9-lpg7b 1/1 Running       0 17m
    ricplt                deployment-ricplt-o1mediator-d8b9fcdf-sf5hf          1/1 Running       0 17m
    ricplt                deployment-ricplt-rtmgr-6d559897d8-txr4s               1/1 Running      2 19m
    ricplt               deployment-ricplt-submgr-5c5fb9f65f-x9z5k             1/1  Running      0 18m
    ricplt               deployment-ricplt-vespamgr-7458d9b5d-tp5xb        1/1  Running      0 18m
    ricplt               deployment-ricplt-xapp-onboarder-5958856fc8-46mfb 2/2 Running   0 19m
    ricplt               r4-infrastructure-kong-6c7f6db759-lgjql                    2/2   Running      1 20m
    ricplt               r4-infrastructure-prometheus-alertmanager-75dff54776-z9ztf 2/2  Running        0 20m
    ricplt               r4-infrastructure-prometheus-server-5fd7695-4cg2m 1/1 Running 0 20m

     There are only 14 ricplt pods running


    My guess is that since the tiller-secret-generator-nvpnk is not "Running", the alarmadapter container is not created.


    Any ideas? I am not sure how to proceed next

    1. Anonymous

      Hi, I am stuck at the same point. Were you able to fix this issue?

      1. Anonymous

        I also got same error and stuck at the same point.



        NAMESPACE NAME READY STATUS RESTARTS AGE

        kube-system coredns-5644d7b6d9-hp7pl 1/1 Running 1 88m

        kube-system coredns-5644d7b6d9-vbsf4 1/1 Running 1 88m

        kube-system etcd-rkota-nrtric 1/1 Running 1 87m

        kube-system kube-apiserver-rkota-nrtric 1/1 Running 1 87m

        kube-system kube-controller-manager-rkota-nrtric 1/1 Running 3 88m

        kube-system kube-flannel-ds-kj7fd 1/1 Running 1 88m

        kube-system kube-proxy-dwfmq 1/1 Running 1 88m

        kube-system kube-scheduler-rkota-nrtric 1/1 Running 2 87m

        kube-system tiller-deploy-68bf6dff8f-clzp6 1/1 Running 1 87m

        ricinfra deployment-tiller-ricxapp-d4f98ff65-7dwbf 1/1 Running 0 23m

        ricinfra tiller-secret-generator-dbrts 0/1 Completed 0 23m

        ricplt deployment-ricplt-a1mediator-66fcf76c66-7prjf 1/1 Running 0 20m

        ricplt deployment-ricplt-appmgr-6fd6664755-vfswc 1/1 Running 0 21m

        ricplt deployment-ricplt-e2mgr-6dfb6c4988-5j5mr 1/1 Running 0 20m

        ricplt deployment-ricplt-e2term-alpha-64965c46c6-lhgvp 1/1 Running 0 20m

        ricplt deployment-ricplt-jaegeradapter-76ddbf9c9-vzkbb 1/1 Running 0 19m

        ricplt deployment-ricplt-o1mediator-d8b9fcdf-8gcgp 1/1 Running 0 19m

        ricplt deployment-ricplt-rtmgr-6d559897d8-ts5xh 1/1 Running 6 20m

        ricplt deployment-ricplt-submgr-65bcd95469-v7bc2 1/1 Running 0 20m

        ricplt deployment-ricplt-vespamgr-7458d9b5d-dbqzv 1/1 Running 0 19m

        ricplt deployment-ricplt-xapp-onboarder-5958856fc8-jzp4g 2/2 Running 0 22m

        ricplt r4-infrastructure-kong-6c7f6db759-4hbgh 2/2 Running 1 23m

        ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-vn6d8 2/2 Running 0 23m

        ricplt r4-infrastructure-prometheus-server-5fd7695-gvtm5 1/1 Running 0 23m

        ricplt statefulset-ricplt-dbaas-server-0 1/1 Running 0 22m


    2. Hi, I also got this problem.  I tried using "git clone http://gerrit.o-ran-sc.org/r/it/dep" to install the Near Realtime RIC, and it finished successfully.

      But I don't know how to resolve the problem when using "git clone http://gerrit.o-ran-sc.org/r/it/dep -b cherry" to install.

      Have you resolved the problem?

  19. Anonymous

    Hi all,

    My e2term repeatedly restarts. kubectl describe says the liveness probe has failed and I can verify that by logging into the pod. Below is the kubectl describe and kubectl logs output. I am using the latest source from http://gerrit.o-ran-sc.org/r/it/dep with the included recipe yaml.

    Any ideas where I can look to debug this?

    Thanks.


    root@e2term-alpha:/opt/e2# ./rmr_probe -h 0.0.0.0:38000 -t 10
    1617905013 88/RMR [INFO] ric message routing library on SI95/g mv=3 flg=01 (84423e6 4.4.6 built: Dec 4 2020)
    [FAIL] too few messages recevied during timeout window: wanted 1 got 0


    $ sudo kubectl --namespace=ricplt describe pod/deployment-ricplt-e2term-alpha-57ff7bf4f6-jttpk
    Name: deployment-ricplt-e2term-alpha-57ff7bf4f6-jttpk
    Namespace: ricplt
    Priority: 0
    Node: vm/192.168.122.166
    Start Time: Thu, 08 Apr 2021 14:00:26 -0400
    Labels: app=ricplt-e2term-alpha
    pod-template-hash=57ff7bf4f6
    release=r4-e2term
    Annotations: cni.projectcalico.org/podIP: 10.1.238.220/32
    cni.projectcalico.org/podIPs: 10.1.238.220/32
    Status: Running
    IP: 10.1.238.220
    IPs:
    IP: 10.1.238.220
    Controlled By: ReplicaSet/deployment-ricplt-e2term-alpha-57ff7bf4f6
    Containers:
    container-ricplt-e2term:
    Container ID: containerd://2bb8ecc931f0d972edea35e8a50af818144b937f13f74554917c0bb91ca7499a
    Image: nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2:5.4.8
    Image ID: nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2@sha256:ed8e1ce6214d039b24c3b7426756b6fc947e2c4e99d384c5de1778ae30188251
    Ports: 4561/TCP, 38000/TCP, 36422/SCTP, 8088/TCP
    Host Ports: 0/TCP, 0/TCP, 0/SCTP, 0/TCP
    State: Running
    Started: Thu, 08 Apr 2021 14:05:07 -0400
    Last State: Terminated
    Reason: Error
    Exit Code: 137
    Started: Thu, 08 Apr 2021 14:03:57 -0400
    Finished: Thu, 08 Apr 2021 14:05:07 -0400
    Ready: False
    Restart Count: 4
    Liveness: exec [/bin/sh -c /opt/e2/rmr_probe -h 0.0.0.0:38000] delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness: exec [/bin/sh -c /opt/e2/rmr_probe -h 0.0.0.0:38000] delay=15s timeout=1s period=10s #success=1 #failure=3
    Environment Variables from:
    configmap-ricplt-e2term-env-alpha ConfigMap Optional: false
    Environment: <none>
    Mounts:
    /data/outgoing/ from vol-shared (rw)
    /opt/e2/router.txt from local-router-file (rw,path="router.txt")
    /tmp/rmr_verbose from local-router-file (rw,path="rmr_verbose")
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-24dss (ro)
    Conditions:
    Type Status
    Initialized True
    Ready False
    ContainersReady False
    PodScheduled True
    Volumes:
    local-router-file:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: configmap-ricplt-e2term-router-configmap
    Optional: false
    vol-shared:
    Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: pvc-ricplt-e2term-alpha
    ReadOnly: false
    default-token-24dss:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-24dss
    Optional: false
    QoS Class: BestEffort
    Node-Selectors: <none>
    Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
    node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Normal Scheduled 5m7s default-scheduler Successfully assigned ricplt/deployment-ricplt-e2term-alpha-57ff7bf4f6-jttpk to vm
    Normal Pulled 3m56s (x2 over 5m6s) kubelet Container image "nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2:5.4.8" already present on machine
    Normal Created 3m56s (x2 over 5m6s) kubelet Created container container-ricplt-e2term
    Normal Started 3m56s (x2 over 5m6s) kubelet Started container container-ricplt-e2term
    Normal Killing 3m16s (x2 over 4m26s) kubelet Container container-ricplt-e2term failed liveness probe, will be restarted
    Warning Unhealthy 2m55s (x11 over 4m45s) kubelet Readiness probe failed:
    Warning Unhealthy 6s (x13 over 4m46s) kubelet Liveness probe failed:



    $ sudo kubectl --namespace=ricplt logs pod/deployment-ricplt-e2term-alpha-57ff7bf4f6-jttpk
    environments service name is SERVICE_RICPLT_E2TERM_RMR_ALPHA_SERVICE_HOST
    service ip is 10.152.183.120
    nano=38000
    loglevel=error
    volume=log
    #The key name of the environment holds the local ip address
    #ip address of the E2T in the RMR
    local-ip=10.152.183.120
    #prometheus mode can be pull or push
    prometheusMode=pull
    #timeout can be from 5 seconds to 300 seconds default is 10
    prometheusPushTimeOut=10
    prometheusPushAddr=127.0.0.1:7676
    prometheusPort=8088
    #trace is start, stop
    trace=stop
    external-fqdn=10.152.183.120
    #put pointer to the key that point to pod name
    pod_name=E2TERM_POD_NAME
    sctp-port=36422
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = nano=38000 "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = nano value = 38000"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = loglevel=error "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = loglevel value = error"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = volume=log "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = volume value = log"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = local-ip=10.152.183.120 "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = local-ip value = 10.152.183.120"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusMode=pull "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusMode value = pull"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPushTimeOut=10 "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPushTimeOut value = 10"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPushAddr=127.0.0.1:7676 "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPushAddr value = 127.0.0.1:7676"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPort=8088 "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPort value = 8088"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = trace=stop "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = trace value = stop"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = external-fqdn=10.152.183.120 "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = external-fqdn value = 10.152.183.120"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = pod_name=E2TERM_POD_NAME "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = pod_name value = E2TERM_POD_NAME"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = sctp-port=36422 "}
    {"ts":1617905177451,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = sctp-port value = 36422"}
    1617905177 23/RMR [INFO] ric message routing library on SI95/g mv=3 flg=00 (84423e6 4.4.6 built: Dec 4 2020)
    1617905178 23/RMR [INFO] sends: ts=1617905178 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-admission-ctrl-xapp-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905178 23/RMR [INFO] sends: ts=1617905178 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-rsm-rmr.ricplt:4801 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905178 23/RMR [INFO] sends: ts=1617905178 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2mgr-rmr.ricplt:3801 open=1 succ=1 fail=0 (hard=0 soft=0)
    1617905178 23/RMR [INFO] sends: ts=1617905178 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricxapp-ueec-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905178 23/RMR [INFO] sends: ts=1617905178 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-a1mediator-rmr.ricplt:4562 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-admission-ctrl-xapp-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-rsm-rmr.ricplt:4801 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2term-rmr-alpha.ricplt:43246 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2term-rmr-alpha.ricplt:43606 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2mgr-rmr.ricplt:3801 open=1 succ=1 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricxapp-ueec-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-a1mediator-rmr.ricplt:4562 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2term-rmr-alpha.ricplt:43705 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-admission-ctrl-xapp-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-rsm-rmr.ricplt:4801 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2term-rmr-alpha.ricplt:43246 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2term-rmr-alpha.ricplt:43606 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2mgr-rmr.ricplt:3801 open=1 succ=1 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricxapp-ueec-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-a1mediator-rmr.ricplt:4562 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2term-rmr-alpha.ricplt:43705 open=0 succ=0 fail=0 (hard=0 soft=0)

  20. Hello, I have installed RIC on my Linux machine, and I am about to put the SMO on a VM. I've noticed people mention that they have two VMs (one for RIC and one for SMO), and I was wondering if there is a particular reason for this? Thanks.

    1. I have the exact same question...

    2. Hello again Mooga,

      One Question: Are you running the bronze release or the cherry release?

      Thank You.

      1. Hello, I'm running the Bronze release

        1. Thank you for your response!

          Since you have successfully installed RIC, in step 4 when you run this command:

          kubectl get pods -n ricplt

          Did you get 15 pods like me, or 16 pods as the RIC guidelines say?

          I am asking because I am stuck at Step 5 ... The following command doesn't produce any result, and I don't know if these issues could be related...

          curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download"  --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"

          You followed all the guidelines with no problems?

          Thank you Mooga.

          1. I have 16 pods running, but I'm not sure if these problems are related or not.  Try using the address of the Kong controller rather than $(hostname).  This can be found using:

            kubectl get service -A | grep 32080
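
            For example, the on-boarding call can then be pointed at the Kong service directly (a sketch; the IP below is a placeholder, substitute whatever the command above reports for the Kong proxy service):

            # Hypothetical output of the grep above:
            #   ricplt   r4-infrastructure-kong-proxy   NodePort   10.98.12.34   <none>   32080:32080/TCP ...
            KONG_IP=10.98.12.34    # placeholder, use the IP reported on your system
            curl --location --request POST "http://${KONG_IP}:32080/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"
            curl --location --request GET "http://${KONG_IP}:32080/onboard/api/v1/charts"   # confirm the chart was on-boarded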

  21. Anonymous

    Hello all, 

    I am setting up the Cherry release for the first time, working off the cherry branch. I had the Helm repository error when trying Step 4 and got past that by adding the stable charts repository. I retried the deploy-ric-platform command and am getting this error at the end:

    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    Error: render error in "alarmmanager/templates/serviceaccount.yaml": template: alarmmanager/templates/serviceaccount.yaml:20:11: executing "alarmmanager/templates/serviceaccount.yaml" at <include "common.serv...>: error calling include: template: no template "common.serviceaccountname.alarmmanager" associated with template "gotpl"

    Could anyone help with this please?

    1. I have looked into their repository.

      The problem is that in the cherry branch the alarmmanager goes by a different name, "alarmadapter".

      So the deployment can't find the name "alarmmanager" and can't pull it correctly.

      In Step 1 you should pull the master branch directly; the problem is already solved there,

      but the fix hasn't been merged into the cherry branch.

      Another solution is to change "alarmadapter" to "alarmmanager" manually in ric-common/Common-Template/helm/ric-common/templates (see the sketch below).
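
      A rough sketch of that rename (the layout and the blanket substitution are assumptions, not an official fix; back up the files and review each match before editing):

      cd ric-common/Common-Template/helm/ric-common/templates
      grep -rl "alarmadapter" .                                             # list template files that still use the old name
      sed -i 's/alarmadapter/alarmmanager/g' $(grep -rl "alarmadapter" .)   # in-place rename, destructive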

    2. There was an update in the discuss mailing list that addressed this issue. After cloning the cherry branch and updating the submodules, one can run this command to get the alarmmanager pod up and running:

      git fetch "https://gerrit.o-ran-sc.org/r/it/dep" refs/changes/68/5968/1 && git checkout FETCH_HEAD

      This was a patch so hopefully it still works.

    3. Anonymous

      Same problem,

      Can anyone help?

  22. Hi everyone,

    I know this question has already posted here.

    When i run the following command in Step 4:

    kubectl get pods -n ricplt


    Produces this result:


    root@rafaelmatos-VirtualBox:~/ric/dep# kubectl get pods -n ricplt

    NAME READY STATUS RESTARTS AGE

    deployment-ricplt-a1mediator-66fcf76c66-mlnb5 1/1 Running 2 3d2h

    deployment-ricplt-alarmadapter-64d559f769-5dgxh 1/1 Running 2 3d2h

    deployment-ricplt-appmgr-6fd6664755-zcdmr 1/1 Running 2 3d2h

    deployment-ricplt-e2mgr-8479fb5ff8-drvdz 1/1 Running 2 3d2h

    deployment-ricplt-e2term-alpha-bcb457df4-6tjlc 1/1 Running 3 3d2h

    deployment-ricplt-jaegeradapter-84558d855b-dj97n 1/1 Running 2 3d2h

    deployment-ricplt-o1mediator-d8b9fcdf-g7f2m 1/1 Running 2 3d2h

    deployment-ricplt-rtmgr-9d4847788-xwdcz 1/1 Running 4 3d2h

    deployment-ricplt-submgr-65dc9f4995-5vzwc 1/1 Running 2 3d2h

    deployment-ricplt-vespamgr-7458d9b5d-ppmrs 1/1 Running 2 3d2h

    deployment-ricplt-xapp-onboarder-546b86b5c4-52jjl 2/2 Running 4 3d2h

    r4-infrastructure-kong-6c7f6db759-8snpr 2/2 Running 5 3d2h

    r4-infrastructure-prometheus-alertmanager-75dff54776-pjr7j 2/2 Running 4 3d2h

    r4-infrastructure-prometheus-server-5fd7695-5k4hw 1/1 Running 2 3d2h

    statefulset-ricplt-dbaas-server-0 1/1 Running 2 3d2h

    All the pods are in status Running, but I have only 15 pods and not 16.

    Is this a problem? I am doing the configuration on the Bronze release.

    Rafael Matos



  23. Hi everyone,

    I followed this article step by step, but I can't get Step 4 to work correctly. As you can see from the results of kubectl get pods -n ricplt, one of the pods keeps crashing:

    NAME                                                         READY   STATUS             RESTARTS   AGE
    deployment-ricplt-a1mediator-66fcf76c66-lbzg9                1/1     Running            0          10m
    deployment-ricplt-alarmadapter-64d559f769-s576w              1/1     Running            0          9m42s
    deployment-ricplt-appmgr-6fd6664755-d99vn                    1/1     Running            0          11m
    deployment-ricplt-e2mgr-8479fb5ff8-b6gb9                     1/1     Running            0          11m
    deployment-ricplt-e2term-alpha-bcb457df4-llzd7               1/1     Running            0          10m
    deployment-ricplt-jaegeradapter-84558d855b-rpx4m             1/1     Running            0          10m
    deployment-ricplt-o1mediator-d8b9fcdf-hqkcc                  1/1     Running            0          9m54s
    deployment-ricplt-rtmgr-9d4847788-d756d                      1/1     Running            6          11m
    deployment-ricplt-submgr-65dc9f4995-zwndb                    1/1     Running            0          10m
    deployment-ricplt-vespamgr-7458d9b5d-99bwf                   1/1     Running            0          10m
    deployment-ricplt-xapp-onboarder-546b86b5c4-6qj82            2/2     Running            0          11m
    r4-infrastructure-kong-6c7f6db759-vw7pr                      1/2     CrashLoopBackOff   6          12m
    r4-infrastructure-prometheus-alertmanager-75dff54776-d2hcs   2/2     Running            0          12m
    r4-infrastructure-prometheus-server-5fd7695-46g4f            1/1     Running            0          12m
    statefulset-ricplt-dbaas-server-0                            1/1     Running            0          11m


    Step 5 doesn't seem to work while I have this problem. I have been looking for a solution on the internet but couldn't find any.

    I'll appreciate any help.

    1. Hello xlaxc,

      Usually port 32080 is occupied by some kube service. You can check this with:

      $ kubectl get service -A | grep 32080

      # A workaround is to use port forwarding:

      $ sudo kubectl port-forward r4-infrastructure-kong-6c7f6db759-vw7pr 32088:32080 -n ricplt &

      # Remember to change "$(hostname):32080" to "localhost:32088" in subsequent commands, and make sure all pods are running.
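
      For instance, with the port-forward running, the Step 5 on-boarding calls become (a sketch):

      $ curl --location --request POST "http://localhost:32088/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"
      $ curl --location --request GET "http://localhost:32088/onboard/api/v1/charts"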


      1. Hello Litombe,

        Thanks for your help, I really appreciate it.

        The port forwarding solution doesn't solve the crashing problem for me (I am still having that CrashLoopBackOff status).

        I tried to run the command from step 5 anyway by executing:
        curl --location --request POST "http://localhost:32088/onboard/api/v1/onboard/download--header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"

        But I am getting this result:
        {"message":"no Route matched with those values"}

        1. Hello Xlaxc,

          I thought you were facing a problem with on-boarding the apps; my earlier suggestion was about that.

          I recently faced the 'CrashLoopBackOff' problem in a different k8s cluster and this fixed it:

            rm -rf ~/.helm  
            helm init --stable-repo-url=https://charts.helm.sh/stable --client-only
            helm serve &
            helm repo add local http://127.0.0.1:8879

          # Try this before Step 4; it might fix your problem. A quick sanity-check sketch follows.
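
          A small sanity check of the Helm 2 setup afterwards (a sketch, assuming helm serve is still running in the background):

            helm repo list        # should show "local" pointing at http://127.0.0.1:8879
            helm search local/    # lists the charts served from the local repo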

          1. Hello Litombe,

            Thanks for your reply.

            Unfortunately, that didn't solve my problem. I am starting to think that there is some network configuration in my VM that I am missing.

        2. Anonymous

          Hello xlaxc,

          Did you resolve this? I'm also facing the same issue.

    2. Hi, I think you have installed SMO on the same machine.

      SMO and RIC cannot be installed on the same machine.

      You can see this in the "Figure 1: Demo deployment architecture" picture on the Getting Started page.

      Two separate VMs are used to install SMO and RIC.

      1. Hi,
        Thanks for your reply,

        I didn't install SMO at all. For now, I only tried to install RIC, but I can't get it to work.


  24. Hello everyone,

    I am stuck at Step 3. I ran the command ./k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh and it keeps waiting forever, as shown below.

    I would appreciate some help. Thank you very much

    --------------------

    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    + NUMPODS=0
    + echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
    > waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
    + '[' 0 -lt 8 ']'
    + sleep 5
    ++ eval 'kubectl get pods -n kube-system  | grep "Running" | wc -l'
    +++ grep Running
    +++ wc -l
    +++ kubectl get pods -n kube-system
    W0601 15:30:40.385897   24579 loader.go:221] Config not found: /root/.kube/config
    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    + NUMPODS=0
    + echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
    > waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
    + '[' 0 -lt 8 ']'
    + sleep 5

    1. I had the same problem. Just check if there is any fatal error earlier in the script output. In my case I had allocated only 1 CPU, and that was the problem. A few checks are sketched below.
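
      A few checks that may help narrow this down (a sketch, not part of the official guide):

      nproc && free -h                                  # compare against the VM minimum requirements at the top of this page
      sudo systemctl status kubelet                     # kubelet must be active for kubectl to reach the API server
      sudo journalctl -u kubelet --no-pager | tail -50  # look for a fatal error explaining why the control plane never came up
      ls /root/.kube/config                             # should exist once kubeadm init has succeeded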

  25. Anonymous

    Even I am having the same problem.


    1. I installed Docker first and then followed the instructions, and it worked.

      1. Anonymous

        I get the following error in the same step 3:


        Failed to pull image "quay.io/coreos/flannel:v0.14.0": rpc response from daemon: Get https://quay.io/v2/: Forbidden


        Only 5 of the 8 pods come up, so I get stuck there. Does anyone know how to solve it?


    2. I was also stuck on the same issue and found the error below:

      Setting up docker.io (20.10.2-0ubuntu1~18.04.2) ...
      Job for docker.service failed because the control process exited with error code.
      See "systemctl status docker.service" and "journalctl -xe" for details.
      invoke-rc.d: initscript docker, action "start" failed.
      ● docker.service - Docker Application Container Engine
      Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
      Active: activating (auto-restart) (Result: exit-code) since Sat 2021-06-12 13:42:16 UTC; 9ms ago
      Docs: https://docs.docker.com
      Process: 13474 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
      Main PID: 13474 (code=exited, status=1/FAILURE)

      Then I removed Docker as follows:

      • sudo apt remove docker.io
      • sudo apt purge docker.io
      • in case /var/lib/docker or /var/lib/docker.migating still exist, remove them


      After these steps I tried again and it was successful. A quick verification sketch follows.
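
      A quick way to confirm Docker is healthy again before re-running the install script (a sketch):

      sudo systemctl status docker         # should report active (running)
      sudo docker info | head              # confirms the daemon answers
      sudo docker run --rm hello-world     # optional end-to-end pull/run test (needs internet access)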

  26. Hello everyone,

    I got an error:

    ricxapp-trafficxapp-58d4946955-xrcll 0/1 ErrImagePull


    This ricxapp-trafficxapp pod keeps failing to pull its image. Has anyone faced the same issue?


    Br,

    Junior

  27. Hi everyone,

    I've installed the cherry release platform and have everything running well except onboarding the xApps.  I'm currently stuck on onboarding xApps using dms_cli.  The error that I am getting for the hw app and even the dummy xapp in the test folder is: "xApp Chart Name not found".  The error that I receive with the config file in the guide folder is: "Input validation failed".

    Is anyone else having these difficulties? I've performed the dms_cli onboarding step on two different computers, but I receive the same error on both. I've also used JSON schema validators to make sure the xApp's config file validates against the schema file, and it passes.

    The steps that I've performed are on this page: Installation Guides — it-dep master documentation (o-ran-sc.org)

    Any thoughts?

    Thank you!
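
    For reference, the cherry-release dms_cli on-boarding is invoked roughly as below (a sketch from memory; the file paths are placeholders and the exact arguments may differ between releases):

    dms_cli onboard CONFIG_FILE_PATH SCHEMA_FILE_PATH   # e.g. ./config-file.json ./schema.json for the hw xApp
    dms_cli get_charts_list                             # should list the chart once on-boarding succeeds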

  28. Anonymous

    Hello everyone,
    I'm having an issue when trying to run the a1-ric.sh script from the demo folder:

    output1
    {
        "error_message": "'xapp_name' is a required property",
        "error_source": "config-file.json",
        "status": "Input payload validation failed"
    }

    but the qpdriver JSON template is fully valid:

    qpdriver.json config file
    {
          "xapp_name": "qpdriver",
          "version": "1.1.0",
          "containers": [
              {
                  "name": "qpdriver",
                  "image": {
                      "registry": "nexus3.o-ran-sc.org:10002",
                      "name": "o-ran-sc/ric-app-qp-driver",
                      "tag": "1.0.9"
                  }
              }
          ],
          "messaging": {
              "ports": [
                  {
                    "name": "rmr-data",
                    "container": "qpdriver",
                    "port": 4560,
                    "rxMessages": [
                      "TS_UE_LIST"
                    ],
                    "txMessages": [ "TS_QOE_PRED_REQ", "RIC_ALARM" ],
                    "policies": [20008],
                    "description": "rmr receive data port for qpdriver"
                  },
                  {
                    "name": "rmr-route",
                    "container": "qpdriver",
                    "port": 4561,
                    "description": "rmr route port for qpdriver"
                  }
              ]
          },
          "controls": {
              "example_int": 10000,
              "example_str": "value"
          },
          "rmr": {
              "protPort": "tcp:4560",
              "maxSize": 2072,
              "numWorkers": 1,
              "rxMessages": [
                "TS_UE_LIST"
              ],
              "txMessages": [
                "TS_QOE_PRED_REQ",
                "RIC_ALARM"
              ],
              "policies": [20008]
          }
    }

    What could be the reason for such output? (A local validation sketch follows.)
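
    One way to narrow this down locally (a sketch, assuming python3 with the jsonschema package is available and the xApp's schema file has been saved as schema.json, a placeholder name):

    pip3 install jsonschema    # if not already installed
    python3 -c "import json, jsonschema; jsonschema.validate(json.load(open('qpdriver.json')), json.load(open('schema.json'))); print('config validates against schema')"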

  29. Hello Everyone!


    I followed this article step by step, but I can't get Step 4 to work correctly. As you can see from the results of kubectl get pods -n ricplt, one of the pods stays in ImagePullBackOff and keeps flipping to ErrImagePull.


    deployment-ricplt-a1mediator-74b45794bb-jvnbq 1/1 Running 0 3m53s

    deployment-ricplt-alarmadapter-64d559f769-szmc5 1/1 Running 0 2m41s

    deployment-ricplt-appmgr-6fd6664755-6lbsc 1/1 Running 0 4m51s

    deployment-ricplt-e2mgr-6df485fcc-zhxmg 1/1 Running 0 4m22s

    deployment-ricplt-e2term-alpha-76848bd77c-z7rdb 1/1 Running 0 4m8s

    deployment-ricplt-jaegeradapter-84558d855b-ft528 1/1 Running 0 3m9s

    deployment-ricplt-o1mediator-d8b9fcdf-zjgwf 1/1 Running 0 2m55s

    deployment-ricplt-rtmgr-57999f4bc4-qzkhk 1/1 Running 0 4m37s

    deployment-ricplt-submgr-b85bd46c6-4jql4 1/1 Running 0 3m38s

    deployment-ricplt-vespamgr-77cf6c6d57-vbtz5 1/1 Running 0 3m24s

    deployment-ricplt-xapp-onboarder-79866d9d9c-fnbw2 2/2 Running 0 5m6s

    r4-infrastructure-kong-6c7f6db759-rcfft 1/2 ImagePullBackOff 0 5m35s

    r4-infrastructure-prometheus-alertmanager-75dff54776-4rhqm 2/2 Running 0 5m35s

    r4-infrastructure-prometheus-server-5fd7695-bxkzb 1/1 Running 0 5m35s

    statefulset-ricplt-dbaas-server-0 1/1 Running 0 5m20s

    1. Try this

      "kubectl edit deploy r4-infrastructure-kong -n ricplt"

      and at line 68, change the original line to "image: kong/kubernetes-ingress-controller:0.7.0".

      Tbh, I don't know whether this is the appropriate way or not, but it solved the "ImagePullBackOff" problem for me. A non-interactive alternative is sketched below.
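
      A non-interactive way to make the same change (a sketch; the container name inside the deployment is an assumption, so check it first):

      kubectl -n ricplt get deploy r4-infrastructure-kong -o jsonpath='{.spec.template.spec.containers[*].name}'; echo
      kubectl -n ricplt set image deploy/r4-infrastructure-kong ingress-controller=kong/kubernetes-ingress-controller:0.7.0   # substitute the real container name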

      1. Anonymous

        Hi,

        Thanks for your reply, it totally works.

  30. Anonymous

    Hi experts,

    I tried to deploy Near-RT RIC with D release.

    After onboarding the hw xApp, I ran this command:

    curl --location --request POST "http://127.0.0.1:32088/appmgr/ric/v1/xapps"  --header 'Content-Type: application/json' --data-raw '{"xappName": "hwxapp"}'

    (The port 32080  is occupied by the r4-infrastructure-kong-proxy, so I use port-forwarding.)

    And the output is

    "operation XappDeployXapp has not yet been implemented."

    It cannot create the instance now.

    Does this indicate that the new appmgr is still under construction?




    1. Where did you pull the D release from?
      The master branch?

      1. Anonymous

        I just deployed the near-RT RIC platform with dep/ric-dep/RECIPE/example_recipe_oran_dawn_release.yaml.

        You can use

        git submodule update --init --recursive --remote

        to get the updated ric-dep submodule; a deployment sketch follows.
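
        Putting the two together, the deployment step then looks roughly like this (a sketch, assuming the dep/bin layout from Step 4 and the recipe path mentioned above; your checkout may place the recipe elsewhere):

        cd dep
        git submodule update --init --recursive --remote
        cd bin
        ./deploy-ric-platform -f ../ric-dep/RECIPE/example_recipe_oran_dawn_release.yaml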


  31. Anonymous

    Hello Team!

    When I try to verify the results of Step 4 with the command kubectl get pods -n ricplt, one of the RIC pods has the status ImagePullBackOff. I am installing the cherry version. Does someone know how to fix it? Thank you all...


    NAME                                                         READY   STATUS             RESTARTS   AGE
    deployment-ricplt-a1mediator-55fdf8b969-v545f                1/1     Running            0          14m
    deployment-ricplt-appmgr-65d4c44c85-4kjk9                    1/1     Running            0          16m
    deployment-ricplt-e2mgr-95b7f7b4-7d4h4                       1/1     Running            0          15m
    deployment-ricplt-e2term-alpha-7dc47d54-c8gg4                0/1     CrashLoopBackOff   7          15m
    deployment-ricplt-jaegeradapter-7f574b5d95-v5gm5             1/1     Running            0          14m
    deployment-ricplt-o1mediator-8b6684b7c-vhlr8                 1/1     Running            0          13m
    deployment-ricplt-rtmgr-6bbdf7685b-sgvwc                     1/1     Running            2          15m
    deployment-ricplt-submgr-7754d5f8bc-kf6kk                    1/1     Running            0          14m
    deployment-ricplt-vespamgr-589bbb7f7b-zt4zm                  1/1     Running            0          14m
    deployment-ricplt-xapp-onboarder-7f6bf9bfb-4lrjz             2/2     Running            0          16m
    r4-infrastructure-kong-86bfd9f7c5-zkblz                      1/2     ImagePullBackOff   0          16m
    r4-infrastructure-kong-bdff668dd-fjr7k                       0/2     Running            1          8s
    r4-infrastructure-prometheus-alertmanager-54c67c94fd-nxn42   2/2     Running            0          16m
    r4-infrastructure-prometheus-server-6f5c784c4c-x54bb         1/1     Running            0          16m
    statefulset-ricplt-dbaas-server-0                            1/1     Running            0          16m



    1. Anonymous

      I just want to know how to solve the problem of the pod whose STATUS is CrashLoopBackOff.

    2. Anonymous

      And what about the case where the pod's status is CrashLoopBackOff?

      Does anyone know how to solve it?

  32. Anonymous

    Hello Team,

    I am trying to deploy the RIC in its Cherry release version,

    but I am facing this issue:

    Error: template: influxdb/templates/statefulset.yaml:19:11: executing "influxdb/templates/statefulset.yaml" at <include "common.fullname.influxdb" .>: error calling include: template: no template "common.fullname.influxdb" associated with template "gotpl"


    Can anyone help me out? Thanks in advance for any solution.


  33. Anonymous

    Hello,

    I am trying to run the following command:

    curl --location --request POST "http://localhost:32088/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"

    I'm always getting the following error:

    AttributeError: 'ConnectionError' object has no attribute 'message'

    Any idea about the solution?