VM Minimum Requirements for RIC 22

NOTE: sudo access is required for installation

Getting Started PDF

Step 1: Obtaining the Deployment Scripts and Charts

Run ...

$ sudo -i

$ git clone http://gerrit.o-ran-sc.org/r/it/dep -b bronze

$ cd dep
$ git submodule update --init --recursive --remote
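
(Optional) To confirm that all submodules were pulled correctly, you can list them; each entry should show a commit hash and a submodule path:

$ git submodule status
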
Step 2: Generation of cloud-init Script 

Run ...

$ cd tools/k8s/bin
$ ./gen-cloud-init.sh   # generates the stack install script for what the RIC needs

Note: The generated script (k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh) will be used to prepare the Kubernetes cluster for RIC deployment.
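
For example, you can confirm which script was generated; the exact filename encodes the Kubernetes, Helm, and Docker versions, so it may differ slightly from the one shown here:

$ ls k8s-1node-cloud-init-*.sh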

Step 3: Installation of Kubernetes, Helm, Docker, etc.

Run ...

$ ./k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh

NOTE: Be patient, as this takes some time to complete. Upon completion, the script reboots the VM. You will then need to log back in to the VM and become root again:

$ sudo -i

$ kubectl get pods --all-namespaces  # There should be 9 pods running in the kube-system namespace.
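
As an optional sanity check before proceeding, you can also confirm the tool versions installed by the script (exact versions depend on the generated script):

$ kubectl version --short
$ helm version     # with Helm 2 this reports both the client and the tiller server version
$ docker version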

Step 4:  Deploy RIC using Recipe

Run ...

$ cd dep/bin
$ ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml
$ kubectl get pods -n ricplt   # There should be ~16 pods running in the ricplt namespace.  
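
As an additional check, the platform can be probed through the Kong ingress; a quick sketch, assuming the App Manager health path used by this release:

$ curl -v http://$(hostname):32080/appmgr/ric/v1/health/ready   # expect HTTP 200 once the platform is ready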


Step 5:  Onboarding a Test xAPP (HelloWorld xApp)

NOTE: If using an Ubuntu version older than 18.04, this section will fail!
Run...

$ cd dep

# Create the file that will contain the URL used to start the on-boarding process...
$ echo '{ "config-file.json_url": "https://gerrit.o-ran-sc.org/r/gitweb?p=ric-app/hw.git;a=blob_plain;f=init/config-file.json;hb=HEAD" }' > onboard.hw.url

# Start on-boarding process...

$ curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download"  --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"


# Verify list of all on-boarded xApps...
$ curl --location --request GET "http://$(hostname):32080/onboard/api/v1/charts"
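
# If jq is installed (an optional assumption), the chart list is easier to read pretty-printed...
$ curl --silent --location --request GET "http://$(hostname):32080/onboard/api/v1/charts" | jq .
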
Step 6:  Deploy Test xApp (HelloWorld xApp)

Run ...

#  Verify the xApp is not yet running...  This may take a minute, so re-run the command below to refresh

$ kubectl get pods -n ricxapp


# Call xApp Manager to deploy HelloWorld xApp...

$ curl --location --request POST "http://$(hostname):32080/appmgr/ric/v1/xapps"  --header 'Content-Type: application/json'  --data-raw '{"xappName": "hwxapp"}'


#  Verify xApp is running...

$ kubectl get pods -n ricxapp


#  View logs...

$ kubectl logs -n ricxapp <name of POD retrieved from statement above>
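
# For example, the pod name can be looked up inline (a sketch assuming the HelloWorld pod name contains "hwxapp")...
$ kubectl logs -n ricxapp $(kubectl get pods -n ricxapp -o name | grep hwxapp | head -n 1) --tail=50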


Helpful Hints

Kubectl commands:

kubectl get pods -n namespace - gets a list of Pods running

kubectl logs -n namespace name_of_running_pod - gets the logs of a running Pod
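
A few more standard kubectl commands that often help when a pod is not starting:

kubectl describe pod -n namespace name_of_pod - shows the Pod's events, e.g. image pull or probe failures

kubectl get events -n namespace --sort-by=.metadata.creationTimestamp - lists recent events in the namespace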







179 Comments

  1. Thanks for the guidelines. It works well.

  2. Hi experts,

After deployment, I have only 15 pods in the ricplt namespace; is this normal?

    The "Step 4" says: "kubectl get pods -n ricplt   # There should be ~16 pods running in the ricplt namespace.".

    (18:11 dabs@ricpltbronze bin) > sudo kubectl get pods --all-namespaces
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5644d7b6d9-clvtf 1/1 Running 5 32h
    kube-system coredns-5644d7b6d9-jwxnm 1/1 Running 5 32h
    kube-system etcd-ricpltbronze 1/1 Running 11 32h
    kube-system kube-apiserver-ricpltbronze 1/1 Running 28 32h
    kube-system kube-controller-manager-ricpltbronze 1/1 Running 9 32h
    kube-system kube-flannel-ds-amd64-mrwn2 1/1 Running 16 32h
    kube-system kube-proxy-zrtl8 1/1 Running 6 32h
    kube-system kube-scheduler-ricpltbronze 1/1 Running 8 32h
    kube-system tiller-deploy-68bf6dff8f-wbmwl 1/1 Running 4 32h
    ricinfra deployment-tiller-ricxapp-d4f98ff65-6h4n4 1/1 Running 0 3h13m
    ricinfra tiller-secret-generator-tgkzf 0/1 Completed 0 132m
    ricinfra tiller-secret-generator-zcx72 0/1 Error 0 3h13m
    ricplt deployment-ricplt-a1mediator-66fcf76c66-h6rp2 1/1 Running 0 40m
    ricplt deployment-ricplt-alarmadapter-64d559f769-glb5z 1/1 Running 0 30m
    ricplt deployment-ricplt-appmgr-6fd6664755-2mxjb 1/1 Running 0 42m
    ricplt deployment-ricplt-e2mgr-8479fb5ff8-9zqbp 1/1 Running 0 41m
    ricplt deployment-ricplt-e2term-alpha-bcb457df4-4dz62 1/1 Running 0 40m
    ricplt deployment-ricplt-jaegeradapter-84558d855b-tmqqb 1/1 Running 0 39m
    ricplt deployment-ricplt-o1mediator-d8b9fcdf-f4sgm 1/1 Running 0 34m
    ricplt deployment-ricplt-rtmgr-9d4847788-kf6r4 1/1 Running 10 41m
    ricplt deployment-ricplt-submgr-65dc9f4995-gt5kb 1/1 Running 0 40m
    ricplt deployment-ricplt-vespamgr-7458d9b5d-klh9l 1/1 Running 0 39m
    ricplt deployment-ricplt-xapp-onboarder-546b86b5c4-xkcpt 2/2 Running 0 42m
    ricplt r4-infrastructure-kong-6c7f6db759-7xjqm 2/2 Running 21 3h13m
    ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-jfkdg 2/2 Running 2 3h13m
    ricplt r4-infrastructure-prometheus-server-5fd7695-pprg2 1/1 Running 2 3h13m
    ricplt statefulset-ricplt-dbaas-server-0 1/1 Running 0 43m

    1. Anonymous

      After deployment I'm also getting only 15 pods running. Is it normal or do I need to worry about that?

      1. it's normal, and you are good to go. just try policy management.

  3. Hi,

    Even though deployment of RIC plt did not give any errors, most of the RIC plt and infra pods remain in error state. 

    kubectl get pods --all-namespaces
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5644d7b6d9-dz774 1/1 Running 1 10h
    kube-system coredns-5644d7b6d9-fp586 1/1 Running 1 10h
    kube-system etcd-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system kube-apiserver-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system kube-controller-manager-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system kube-flannel-ds-amd64-b4l97 1/1 Running 1 10h
    kube-system kube-proxy-fxfrk 1/1 Running 1 10h
    kube-system kube-scheduler-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system tiller-deploy-68bf6dff8f-jvtk7 1/1 Running 1 10h
    ricinfra deployment-tiller-ricxapp-d4f98ff65-hcnqf 0/1 ContainerCreating 0 9h
    ricinfra tiller-secret-generator-d7kmk 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-a1mediator-66fcf76c66-l2z7m 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-alarmadapter-64d559f769-7d8fq 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-appmgr-6fd6664755-7lp8q 0/1 Init:ImagePullBackOff 0 9h
    ricplt deployment-ricplt-e2mgr-8479fb5ff8-ggnx8 0/1 ErrImagePull 0 9h
    ricplt deployment-ricplt-e2term-alpha-bcb457df4-dkdbc 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-jaegeradapter-84558d855b-bpzcv 1/1 Running 0 9h
    ricplt deployment-ricplt-o1mediator-d8b9fcdf-5ptcs 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-rtmgr-9d4847788-rvnrx 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-submgr-65dc9f4995-cbhvc 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-vespamgr-7458d9b5d-bkzpg 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-xapp-onboarder-546b86b5c4-g2dnt 1/2 ImagePullBackOff 0 9h
    ricplt r4-infrastructure-kong-6c7f6db759-4czbj 2/2 Running 2 9h
    ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-gwxzz 2/2 Running 0 9h
    ricplt r4-infrastructure-prometheus-server-5fd7695-phcqs 1/1 Running 0 9h
    ricplt statefulset-ricplt-dbaas-server-0 0/1 ImagePullBackOff 0 9h


    any clue?

    1. Anonymous

      The ImagePullBackOff and ErrImagePull errors are all for container images built from O-RAN SC code.  It appears that there is a problem with the docker engine in your installation fetching images from the O-RAN SC docker registry.  Oftentimes this is due to a local firewall blocking such connections.

      You may want to try:  docker pull nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1:2.1.9 to see if your docker engine can retrieve this image. 

      1. Thanks for the information.

        I think you are right; even docker pull nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1:2.1.9 gives a connection refused error.

        Although I am checking whether the corporate firewall can be disabled, the strange thing is that docker pull hello-world and other such pulls work fine.

        1. "Connection refused" error does suggest network connection problem.

          These O-RAN SC docker registries use ports 10001 ~ 10004, in particular 10002 for all released docker images.  They do not use the default docker registry port of 5000.  It is possible that your local firewall has a rule allowing outgoing connections to port 5000, but not to these ports.
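
          (A quick way to check from the RIC VM whether these ports are reachable, assuming curl is available; any HTTP response, even 401, means the port is not blocked:)

          curl -v https://nexus3.o-ran-sc.org:10002/v2/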

          1. Anonymous

            Thanks for the explanation. you are right these ports were blocked on firewall, working fine now.
            currently i am stuck with opening o1 dashboard, trying it on smo machine.

          2. thanks for the explanation. you are right these particular ports were getting blocked by firewall, got it working now.

            currently i am stuck in opening o1 dashboard, trying it on smo machine.

            1. Hi Team,


              Can you please share the link so that I can download the docker image manually and then load it? I am also getting connection refused but am not able to solve this. Meanwhile, please share the link, as it will help me.


              Br

              Deepak

  4. Anonymous

    Hi all,

    After executing the below POST command, I am not getting any response.

    # Start on-boarding process...

    curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download"  --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"


    any clues/suggestions?

    1. It's probably because port 32080 has already been occupied by kube-proxy.

      You can either try using the direct IP address, or use port-forwarding as a workaround. Please refer to my post: https://blog.csdn.net/jeffyko/article/details/107426626
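
      (A minimal sketch of the port-forwarding workaround, using the Kong proxy service name from this deployment; the port-forward must stay running in a separate terminal:)

      kubectl port-forward -n ricplt svc/r4-infrastructure-kong-proxy 32080:32080

      # then, in another terminal:
      curl --location --request POST "http://localhost:32080/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"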

      1. Anonymous

        Thanks a lot, it resolved the issue and we are able to proceed further.

        After executing the below command we are seeing the logs below. Is this the correct behavior? If this is an error, how do we resolve it?

        $ kubectl logs -n ricxapp <name of POD retrieved from statement above>

        Error from server (BadRequest): container "hwxapp" in pod "ricxapp-hwxapp-684d8d675b-xz99d" is waiting to start: ContainerCreating

        1. Make sure the hwxapp Pod is running. It needs to pull the 'hwxapp' docker image, which may take some time.

          kubectl get pods -n ricxapp | grep hwxapp
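
          (If the pod stays in ContainerCreating or ImagePullBackOff for long, its events usually show why; a sketch, substituting your actual pod name:)

          kubectl describe pod -n ricxapp <hwxapp pod name>   # the Events section at the end shows image pull progress or errors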

      2. Anonymous

        Hi Zhengwei, I have tried the solution in your blog, but I still receive the same error: AttributeError:  'ConnectionError' object has no attribute 'message'. Do you have any clues? Thanks.

      3. EDIT: Never mind, silly me.  For anyone wondering, the port-forwarding command needs to keep running for the forwarding to stay active.  So I just run it in a Screen tab to keep it running there, and run the curl commands in a different tab.

        Hi, I've tried this workaround but the port forwarding command just hangs and never completes.  Anyone experiencing the same issue? Kube cluster and pods seem healthy:

        root@ubuntu1804:~# kubectl -v=4 port-forward r4-infrastructure-kong-6c7f6db759-kkjtt 32088:32080 -n ricplt

        Forwarding from 127.0.0.1:32088 -> 32080

        (hangs after the previous message)



        1. Same trouble occurred here.

          Does someone know how to fix it?

      4. hi,

        I'm able to invoke the API calls into the xApp On-boarder; this works, but the subsequent deploy always fails saying the upstream server is timing out

        curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data-binary "@./onboard.ts.url"


        curl --location --request POST "http://$(hostname):32080/appmgr/ric/v1/xapps" --header 'Content-Type: application/json' --data-raw '{"xappName": "trafficxapp"}'
        The upstream server is timing out

        Any suggestions for this ?

        1. I'm setting up the RIC for the first time and having the same problem 'The upstream server is timing out'.

          I have tried using the Kong pod's explicit IP address but get the same behavior. Any ideas? 

          Thanks

  5. Anonymous

    In my case, the script only works with helm 2.x. How about others?

  6. THANK You! This is great!

  7. Anonymous

    Hi all,

    I used the source code to build the hw_unit_test container, and when I execute hw_unit_test, it stops and returns "SHAREDDATALAYER_ABORT, file src/redis/asynchirediscommanddispatcher.cpp, line 206: Required Redis module extension commands not available.
    Aborted (core dumped)".

    Thus, I can't test my hwxapp.

    any clues/suggestions? Thanks!

      

  8. Anonymous

    Hi all,

    There is a pod crash every time. When I run the following command, I get one crashed pod. Did you encounter the same issue? Thanks so much for any suggestions.

    root@chuanhao-HP-EliteDesk-800-G4-SFF:~/dep/bin# kubectl get pods -n ricplt

    NAME READY STATUS RESTARTS AGE

    deployment-ricplt-a1mediator-66fcf76c66-f6kbh 1/1 Running 1 2m16s

    deployment-ricplt-alarmadapter-64d559f769-twfk7 1/1 Running 0 46s

    deployment-ricplt-appmgr-6fd6664755-7rs4g 1/1 Running 0 3m49s

    deployment-ricplt-e2mgr-8479fb5ff8-j9nzf 1/1 Running 0 3m

    deployment-ricplt-e2term-alpha-bcb457df4-r22nb 1/1 Running 0 2m39s

    deployment-ricplt-jaegeradapter-84558d855b-xfgd5 1/1 Running 0 78s

    deployment-ricplt-o1mediator-d8b9fcdf-tpz7v 1/1 Running 0 64s

    deployment-ricplt-rtmgr-9d4847788-scrxf 1/1 Running 1 3m26s

    deployment-ricplt-submgr-65dc9f4995-knzjd 1/1 Running 0 113s

    deployment-ricplt-vespamgr-7458d9b5d-mdmjx 1/1 Running 0 96s

    deployment-ricplt-xapp-onboarder-546b86b5c4-z2qd6 2/2 Running 0 4m16s

    r4-infrastructure-kong-6c7f6db759-44wdx 1/2 CrashLoopBackOff 6 4m52s

    r4-infrastructure-prometheus-alertmanager-75dff54776-qlp4g 2/2 Running 0 4m52s

    r4-infrastructure-prometheus-server-5fd7695-lr6z7 1/1 Running 0 4m52s

    statefulset-ricplt-dbaas-server-0 1/1 Running 0 4m33s


  9. Hi All,

    I am trying to deploy the RIC, and at Step 4, while running the command "./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml", I am getting the error below:

    root@ubuntu:~/dep/bin# ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml
    Error: unknown command "home" for "helm"
    Run 'helm --help' for usage.
    Successfully packaged chart and saved it to: /tmp/ric-common-3.3.2.tgz
    cp: cannot create regular file '/repository/local/': No such file or directory
    Successfully packaged chart and saved it to: /tmp/aux-common-3.0.0.tgz
    cp: cannot create regular file '/repository/local/': No such file or directory
    Successfully packaged chart and saved it to: /tmp/nonrtric-common-2.0.0.tgz
    cp: cannot create regular file '/repository/local/': No such file or directory
    Error: open /repository/local/index.yaml822716896: no such file or directory
    Error: no repositories configured
    Error: looks like "http://127.0.0.1:8879/charts" is not a valid chart repository or cannot be reached: Get "http://127.0.0.1:8879/charts/index.yaml": dial tcp 127.0.0.1:8879: connect: connection refused.

    I am unable to resolve this issue. Could any one please help to resolve this.

    Thanks,

    1. I had a different error, but maybe this helps: I updated the helm version to 2.17.0 in the example_recipe.yaml file.

      I had this error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached:

    2. Try this, it might help:

      helm init --client-only --skip-refresh
      helm repo rm stable
      helm repo add stable https://charts.helm.sh/
      1. Anonymous

        Problem still exists

      2. Anonymous

        It is working, thanks!

      3. Anonymous

        Correct repo add link:

        helm repo add stable https://charts.helm.sh/stable
    3. Anonymous

      See whether you are using helm 3.x.

      For the O-RAN stuff to work you need helm 2.x.

      helm 3.x commands are different; even init does not exist anymore.

  10. Hey all,

    I am trying to deploy the RIC in its Cherry release version.

    But i am facing some issues:

    a) 

    The r4-alarmadapter isn't getting released.

    End of deploy-ric-platform script
    Error: release r4-alarmadapter failed: configmaps "configmap-ricplt-alarmadapter-appconfig" already exists
    root@max-near-rt-ric-cherry:/home/ubuntu/dep/bin# helm ls --all r4-alarmadapter
    + helm ls --all r4-alarmadapter
    NAME   REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
    r4-alarmadapter  1 Mon Jan 11 09:14:08 2021 FAILED alarmadapter-3.0.0 1.0 ricplt

    b)

    The pods aren't getting to their Running state. This might be connected to issue a) but I am not sure.

    kubectl get pods --all-namespaces:

    NAMESPACE     NAME                                                         READY   STATUS              RESTARTS   AGE
    kube-system   coredns-5644d7b6d9-794qq                                     1/1     Running             1          29m
    kube-system   coredns-5644d7b6d9-ph6tt                                     1/1     Running             1          29m
    kube-system   etcd-max-near-rt-ric-cherry                                  1/1     Running             1          28m
    kube-system   kube-apiserver-max-near-rt-ric-cherry                        1/1     Running             1          28m
    kube-system   kube-controller-manager-max-near-rt-ric-cherry               1/1     Running             1          28m
    kube-system   kube-flannel-ds-ljz7w                                        1/1     Running             1          29m
    kube-system   kube-proxy-cdkf4                                             1/1     Running             1          29m
    kube-system   kube-scheduler-max-near-rt-ric-cherry                        1/1     Running             1          28m
    kube-system   tiller-deploy-68bf6dff8f-xfkwd                               1/1     Running             1          27m
    ricinfra      deployment-tiller-ricxapp-d4f98ff65-wwbhx                    0/1     ContainerCreating   0          25m
    ricinfra      tiller-secret-generator-2nsb2                                0/1     ContainerCreating   0          25m
    ricplt        deployment-ricplt-a1mediator-66fcf76c66-njphv                0/1     ContainerCreating   0          23m
    ricplt        deployment-ricplt-appmgr-6fd6664755-r577d                    0/1     Init:0/1            0          24m
    ricplt        deployment-ricplt-e2mgr-6dfb6c4988-tb26k                     0/1     ContainerCreating   0          24m
    ricplt        deployment-ricplt-e2term-alpha-64965c46c6-5d59x              0/1     ContainerCreating   0          23m
    ricplt        deployment-ricplt-jaegeradapter-76ddbf9c9-fw4sh              0/1     ContainerCreating   0          22m
    ricplt        deployment-ricplt-o1mediator-d8b9fcdf-86qgg                  0/1     ContainerCreating   0          22m
    ricplt        deployment-ricplt-rtmgr-6d559897d8-jvbsb                     0/1     ContainerCreating   0          24m
    ricplt        deployment-ricplt-submgr-65bcd95469-nc8pq                    0/1     ContainerCreating   0          23m
    ricplt        deployment-ricplt-vespamgr-7458d9b5d-xlt8m                   0/1     ContainerCreating   0          22m
    ricplt        deployment-ricplt-xapp-onboarder-5958856fc8-kw9jx            0/2     ContainerCreating   0          25m
    ricplt        r4-infrastructure-kong-6c7f6db759-q5psw                      0/2     ContainerCreating   0          25m
    ricplt        r4-infrastructure-prometheus-alertmanager-75dff54776-mb8hn   0/2     ContainerCreating   0          25m
    ricplt        r4-infrastructure-prometheus-server-5fd7695-bvk74            1/1     Running             0          25m
    ricplt        statefulset-ricplt-dbaas-server-0                            0/1     ContainerCreating   0          25m

    Thanks in advance!
    1. Hello, Max

      I had the same issue here. I solved it after restarting the VM; it seems I hadn't uncommented the RIC entry in infra.c under dep/tools/k8s/etc,

      and the ONAP pods are still running.

      Thanks,

  11. Hi All,

    I am trying to install the rmr_nng library as a requirement of the ric-app-kpimon xapp.

    https://github.com/o-ran-sc/ric-plt-lib-rmr

    https://github.com/nanomsg/nng

    but getting below error(snippet):

    -- Installing: /root/ric-plt-lib-rmr/.build/lib/cmake/nng/nng-config-version.cmake
    -- Installing: /root/ric-plt-lib-rmr/.build/bin/nngcat
    -- Set runtime path of "/root/ric-plt-lib-rmr/.build/bin/nngcat" to "/root/ric-plt-lib-rmr/.build/lib"
    [ 40%] No test step for 'ext_nng'
    [ 41%] Completed 'ext_nng'
    [ 41%] Built target ext_nng
    Scanning dependencies of target nng_objects
    [ 43%] Building C object src/rmr/nng/CMakeFiles/nng_objects.dir/src/rmr_nng.c.o
    In file included from /root/ric-plt-lib-rmr/src/rmr/nng/src/rmr_nng.c:70:0:
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c: In function ‘roll_tables’:
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c:406:28: error: ‘uta_ctx_t {aka struct uta_ctx}’ has no member named ‘rtgate’; did you mean ‘rtable’?
    pthread_mutex_lock( ctx->rtgate ); // must hold lock to move to active
    ^~~~~~
    rtable
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c:409:30: error: ‘uta_ctx_t {aka struct uta_ctx}’ has no member named ‘rtgate’; did you mean ‘rtable’?
    pthread_mutex_unlock( ctx->rtgate );
    ^~~~~~
    rtable
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c: In function ‘parse_rt_rec’:
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c:858:12: error: ‘uta_ctx_t {aka struct uta_ctx}’ has no member named ‘rtable_ready’; did you mean ‘rmr_ready’?
    ctx->rtable_ready = 1; // route based sends can now happen
    ^~~~~~~~~~~~
    rmr_ready

    ...

    ...

    I am unable to resolve this issue. Could any one please help to resolve this.

    Thanks,


  12. Hi All, 

    I am trying to get this up and running (first time ever).  

    For anyone interested, just wanted to highlight that if Step 4 fails with the following:

    root@ubuntu1804:~/dep/bin# ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml
    Successfully packaged chart and saved it to: /tmp/ric-common-3.3.2.tgz
    Successfully packaged chart and saved it to: /tmp/aux-common-3.0.0.tgz
    Successfully packaged chart and saved it to: /tmp/nonrtric-common-2.0.0.tgz
    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
    ****************************************************************************************************************
                                                         ERROR                                                      
    ****************************************************************************************************************
    Can't locate the ric-common helm package in the local repo. Please make sure that it is properly installed.
    ****************************************************************************************************************

    You just need to initialize Helm repositories with the following (disregard the suggestion in the above error output as it's deprecated from Nov 2020):

    helm init --stable-repo-url=https://charts.helm.sh/stable --client-only 

    and then run Step 4.



    1. This works for me. Thanks

  13. Anonymous

    Hi,

    We are trying to compile the pods on Arm and test them on an Arm-based platform. We are failing for xapp-onboarder, ric-plt-e, ric-plt-alarmadapter. Has anyone tried running the O-RAN RIC on Arm? Could someone point us to the recipe for building the images from source?

  14. Hi,

    I'm in the process of porting the near-realtime RIC to ARM. The process has been straightforward up until now. Does anyone have instructions on how all the individual components are built, or where they can be sourced outside of the prebuilt images? I have been able to build many of the items needed to complete the build but am still having issues.

    images built:
    ric-plt-rtmgr:0.6.3
    it-dep-init:0.0.1
    ric-plt-rtmgr:0.6.3
    ric-plt-appmgr:0.4.3
    bldr-alpine3-go:2.0.0
    bldr-ubuntu18-c-go:1.9.0
    bldr-ubuntu18-c-go:9-u18.04
    ric-plt-a1:2.1.9
    ric-plt-dbaas:0.2.2
    ric-plt-e2mgr:5.0.8
    ric-plt-submgr:0.4.3
    ric-plt-vespamgr:0.4.0
    ric-plt-o1:0.4.4
    bldr-alpine3:12-a3.11
    bldr-alpine3-mdclog:0.0.4
    bldr-alpine3-rmr:4.0.5
    
    
    images needed:
    xapp-onboarder:1.0.7
    ric-plt-e2:5.0.8
    ric-plt-alarmadapter:0.4.5
    
    

    Please keep in mind I'm not sure if I built these all correctly, since I have not found any instructions. The latest output of my effort can be seen here:

    NAME                                                         READY   STATUS                       RESTARTS   AGE
    deployment-ricplt-a1mediator-cb47dc85d-7cr9b                 0/1     CreateContainerConfigError   0          76s
    deployment-ricplt-e2term-alpha-8665fc56f6-m94ql              0/1     ImagePullBackOff             0          92s
    deployment-ricplt-jaegeradapter-76ddbf9c9-hg5pr              0/1     Error                        2          26s
    deployment-ricplt-o1mediator-86587dd94f-gbtvj                0/1     CreateContainerConfigError   0          9s
    deployment-ricplt-rtmgr-67c9bdccf6-g4nd6                     0/1     CreateContainerConfigError   0          63m
    deployment-ricplt-submgr-6ffd499fd5-6txqb                    0/1     CreateContainerConfigError   0          59s
    deployment-ricplt-vespamgr-68b68b78db-5m579                  1/1     Running                      0          43s
    deployment-ricplt-xapp-onboarder-579967799d-bp74x            0/2     ImagePullBackOff             17         63m
    r4-infrastructure-kong-6c7f6db759-4vlms                      0/2     CrashLoopBackOff             17         64m
    r4-infrastructure-prometheus-alertmanager-75dff54776-cq7fl   2/2     Running                      0          64m
    r4-infrastructure-prometheus-server-5fd7695-5jrwf            1/1     Running                      0          64m

    Any help would be great. My end goal is to push these changes upstream and to provide CI/CD for ARM images. 

    1. NAMESPACE     NAME                                                         READY   STATUS      RESTARTS   AGE
      kube-system   coredns-5644d7b6d9-6nwr8                                     1/1     Running     14         20d
      kube-system   coredns-5644d7b6d9-8lc5c                                     1/1     Running     14         20d
      kube-system   etcd-ip-10-0-0-199                                           1/1     Running     14         20d
      kube-system   kube-apiserver-ip-10-0-0-199                                 1/1     Running     14         20d
      kube-system   kube-controller-manager-ip-10-0-0-199                        1/1     Running     14         20d
      kube-system   kube-flannel-ds-4w4jh                                        1/1     Running     14         20d
      kube-system   kube-proxy-955gz                                             1/1     Running     14         20d
      kube-system   kube-scheduler-ip-10-0-0-199                                 1/1     Running     14         20d
      kube-system   tiller-deploy-7d5568dd96-ttbg5                               1/1     Running     1          20h
      ricinfra      deployment-tiller-ricxapp-6b6b4c787-w2nnw                    1/1     Running     0          2m40s
      ricinfra      tiller-secret-generator-c7z86                                0/1     Completed   0          2m40s
      ricplt        deployment-ricplt-e2term-alpha-8665fc56f6-q8cgq              0/1     Running     1          72s
      ricplt        deployment-ricplt-jaegeradapter-85cbdfbfbc-xlxt2             1/1     Running     0          12s
      ricplt        deployment-ricplt-rtmgr-67c9bdccf6-c9fck                     1/1     Running     0          101s
      ricplt        deployment-ricplt-submgr-6ffd499fd5-pmx4m                    1/1     Running     0          43s
      ricplt        deployment-ricplt-vespamgr-68b68b78db-4skbw                  1/1     Running     0          28s
      ricplt        deployment-ricplt-xapp-onboarder-57f78cfdf-xrtss             2/2     Running     0          2m10s
      ricplt        r4-infrastructure-kong-84cd44455-6pgxh                       2/2     Running     1          2m40s
      ricplt        r4-infrastructure-prometheus-alertmanager-75dff54776-fl4xh   2/2     Running     0          2m40s
      ricplt        r4-infrastructure-prometheus-server-5fd7695-jv8s4            1/1     Running     0          2m40s

      I have been able to get all services up except for e2term-alpha, I am getting an error:

      environments service name is SERVICE_RICPLT_E2TERM_RMR_ALPHA_SERVICE_HOST
      service ip is 10.X.X.95
      nano=38000
      loglevel=error
      volume=log
      #The key name of the environment holds the local ip address
      #ip address of the E2T in the RMR
      local-ip=10.X.X.95
      #prometheus mode can be pull or push
      prometheusMode=pull
      #timeout can be from 5 seconds to 300 seconds default is 10
      prometheusPushTimeOut=10
      prometheusPushAddr=127.0.0.1:7676
      prometheusPort=8088
      #trace is start, stop
      trace=stop
      external-fqdn=10.X.X.95
      #put pointer to the key that point to pod name
      pod_name=E2TERM_POD_NAME
      sctp-port=36422
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = nano=38000 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = nano value = 38000"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = loglevel=error "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = loglevel value = error"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = volume=log "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = volume value = log"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = local-ip=10.107.0.95 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = local-ip value = 10.107.0.95"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusMode=pull "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusMode value = pull"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPushTimeOut=10 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPushTimeOut value = 10"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPushAddr=127.0.0.1:7676 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPushAddr value = 127.0.0.1:7676"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPort=8088 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPort value = 8088"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = trace=stop "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = trace value = stop"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = external-fqdn=10.107.0.95 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = external-fqdn value = 10.107.0.95"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = pod_name=E2TERM_POD_NAME "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = pod_name value = E2TERM_POD_NAME"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = sctp-port=36422 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = sctp-port value = 36422"}
      1613156066 22/RMR [INFO] ric message routing library on SI95/g mv=3 flg=00 (84423e6 4.4.6 built: Feb 11 2021)
      e2: malloc.c:2401: sysmalloc: Assertion `(old_top == initial_top (av) && old_size == 0) || ((unsigned long) (old_size) >= MINSIZE && prev_inuse (old_top) && ((unsigned long) old_end & (pagesize - 1)) == 0)' failed.


      I'm not sure what to do next.

      1. Anonymous

        Did you get this resolved? I too have the same problem.

        By the way, I think "tiller-secret-generator-c7z86" is completed but not running. There seems to be no error with e2-term-alpha.

        1. I worked with Scott Daniels on the RMR library. Changes he made to a race condition fixed this. I'm using version 4.6.1, no additional fixes were needed.

  15. Hi all,

    I deployed the RIC and the SMO in two different VMs and I cannot manage to establish a connection between them to validate the A1 workflows. I suspect the reason is that the RIC VM is not exposing its services to the outside world. Could anyone help me address the issue? For example, I would appreciate it if anyone with a setup like mine could share the "example_recipe.yaml" they used to deploy the RIC platform.

    I paste here the output of the services running in the ricplt namespace, where all services show EXTERNAL-IP = <none>. Please let me know if this is the expected behavior.

    # kubectl get services -n ricplt
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    aux-entry ClusterIP 10.99.120.87 <none> 80/TCP,443/TCP 3d6h
    r4-infrastructure-kong-proxy NodePort 10.100.225.53 <none> 32080:32080/TCP,32443:32443/TCP 3d6h
    r4-infrastructure-prometheus-alertmanager ClusterIP 10.105.181.117 <none> 80/TCP 3d6h
    r4-infrastructure-prometheus-server ClusterIP 10.108.80.71 <none> 80/TCP 3d6h
    service-ricplt-a1mediator-http ClusterIP 10.111.176.180 <none> 10000/TCP 3d6h
    service-ricplt-a1mediator-rmr ClusterIP 10.105.1.73 <none> 4561/TCP,4562/TCP 3d6h
    service-ricplt-alarmadapter-http ClusterIP 10.104.93.39 <none> 8080/TCP 3d6h
    service-ricplt-alarmadapter-rmr ClusterIP 10.99.249.253 <none> 4560/TCP,4561/TCP 3d6h
    service-ricplt-appmgr-http ClusterIP 10.101.144.152 <none> 8080/TCP 3d6h
    service-ricplt-appmgr-rmr ClusterIP 10.110.71.188 <none> 4561/TCP,4560/TCP 3d6h
    service-ricplt-dbaas-tcp ClusterIP None <none> 6379/TCP 3d6h
    service-ricplt-e2mgr-http ClusterIP 10.103.232.14 <none> 3800/TCP 3d6h
    service-ricplt-e2mgr-rmr ClusterIP 10.105.41.119 <none> 4561/TCP,3801/TCP 3d6h
    service-ricplt-e2term-rmr-alpha ClusterIP 10.110.201.91 <none> 4561/TCP,38000/TCP 3d6h
    service-ricplt-e2term-sctp-alpha NodePort 10.106.243.219 <none> 36422:32222/SCTP 3d6h
    service-ricplt-jaegeradapter-agent ClusterIP 10.102.5.160 <none> 5775/UDP,6831/UDP,6832/UDP 3d6h
    service-ricplt-jaegeradapter-collector ClusterIP 10.99.78.42 <none> 14267/TCP,14268/TCP,9411/TCP 3d6h
    service-ricplt-jaegeradapter-query ClusterIP 10.100.166.185 <none> 16686/TCP 3d6h
    service-ricplt-o1mediator-http ClusterIP 10.101.145.78 <none> 9001/TCP,8080/TCP,3000/TCP 3d6h
    service-ricplt-o1mediator-tcp-netconf NodePort 10.110.230.66 <none> 830:30830/TCP 3d6h
    service-ricplt-rtmgr-http ClusterIP 10.104.100.69 <none> 3800/TCP 3d6h
    service-ricplt-rtmgr-rmr ClusterIP 10.98.40.180 <none> 4561/TCP,4560/TCP 3d6h
    service-ricplt-submgr-http ClusterIP None <none> 3800/TCP 3d6h
    service-ricplt-submgr-rmr ClusterIP None <none> 4560/TCP,4561/TCP 3d6h
    service-ricplt-vespamgr-http ClusterIP 10.102.12.213 <none> 8080/TCP,9095/TCP 3d6h
    service-ricplt-xapp-onboarder-http ClusterIP 10.107.167.33 <none> 8888/TCP,8080/TCP 3d6h


    This is the detail of the kong service in the RIC VM:

    # kubectl describe service r4-infrastructure-kong-proxy -n ricplt
    Name: r4-infrastructure-kong-proxy
    Namespace: ricplt
    Labels: app.kubernetes.io/instance=r4-infrastructure
    app.kubernetes.io/managed-by=Tiller
    app.kubernetes.io/name=kong
    app.kubernetes.io/version=1.4
    helm.sh/chart=kong-0.36.6
    Annotations: <none>
    Selector: app.kubernetes.io/component=app,app.kubernetes.io/instance=r4-infrastructure,app.kubernetes.io/name=kong
    Type: NodePort
    IP: 10.100.225.53
    Port: kong-proxy 32080/TCP
    TargetPort: 32080/TCP
    NodePort: kong-proxy 32080/TCP
    Endpoints: 10.244.0.26:32080
    Port: kong-proxy-tls 32443/TCP
    TargetPort: 32443/TCP
    NodePort: kong-proxy-tls 32443/TCP
    Endpoints: 10.244.0.26:32443
    Session Affinity: None
    External Traffic Policy: Cluster
    Events: <none>


    BR


    Daniel

  16. Deploy RIC using Recipe failed:

    When running './deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml' it failed with the following message:

    Successfully packaged chart and saved it to: /tmp/ric-common-3.3.2.tgz

    Successfully packaged chart and saved it to: /tmp/aux-common-3.0.0.tgz

    Successfully packaged chart and saved it to: /tmp/nonrtric-common-2.0.0.tgz

    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).

    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)

    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).

    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)

    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).

    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)

    ****************************************************************************************************************

                                                         ERROR                                                      

    ****************************************************************************************************************

    Can't locate the ric-common helm package in the local repo. Please make sure that it is properly installed.

    ****************************************************************************************************************

    Running `helm init` as suggested doesn't help. Any idea?

  17. failures on two services: ricplt-e2mgr and ricplt-a1mediator.

    NAMESPACE     NAME                                                         READY   STATUS             RESTARTS   AGE
    kube-system   coredns-5644d7b6d9-6mk6z                                     1/1     Running            5          7d21h
    kube-system   coredns-5644d7b6d9-wxtqf                                     1/1     Running            5          7d21h
    kube-system   etcd-kubernetes-master                                       1/1     Running            5          7d21h
    kube-system   kube-apiserver-kubernetes-master                             1/1     Running            5          7d21h
    kube-system   kube-controller-manager-kubernetes-master                    1/1     Running            5          7d21h
    kube-system   kube-flannel-ds-gb67g                                        1/1     Running            5          7d21h
    kube-system   kube-proxy-hn2sq                                             1/1     Running            5          7d21h
    kube-system   kube-scheduler-kubernetes-master                             1/1     Running            5          7d21h
    kube-system   tiller-deploy-68bf6dff8f-4hlm9                               1/1     Running            9          7d21h
    ricinfra      deployment-tiller-ricxapp-6b6b4c787-ssvts                    1/1     Running            2          2d21h
    ricinfra      tiller-secret-generator-lgpxt                                0/1     Completed          0          2d21h
    ricplt        deployment-ricplt-a1mediator-cb47dc85d-nq4kc                 0/1     CrashLoopBackOff   21         66m
    ricplt        deployment-ricplt-appmgr-5fbcf5c7f7-dsf9q                    1/1     Running            1          2d1h
    ricplt        deployment-ricplt-e2mgr-7dbfbbb796-2nvgz                     0/1     CrashLoopBackOff   6          8m31s
    ricplt        deployment-ricplt-e2term-alpha-8665fc56f6-lcfwr              1/1     Running            4          2d21h
    ricplt        deployment-ricplt-jaegeradapter-85cbdfbfbc-ln7sk             1/1     Running            2          2d21h
    ricplt        deployment-ricplt-o1mediator-86587dd94f-hbs47                1/1     Running            1          2d1h
    ricplt        deployment-ricplt-rtmgr-67c9bdccf6-ckm9s                     1/1     Running            2          2d21h
    ricplt        deployment-ricplt-submgr-6ffd499fd5-z7x8l                    1/1     Running            2          2d21h
    ricplt        deployment-ricplt-vespamgr-68b68b78db-f6mw8                  1/1     Running            2          2d21h
    ricplt        deployment-ricplt-xapp-onboarder-57f78cfdf-4787b             2/2     Running            5          2d21h
    ricplt        r4-infrastructure-kong-84cd44455-rfnq4                       2/2     Running            7          2d21h
    ricplt        r4-infrastructure-prometheus-alertmanager-75dff54776-qzgnq   2/2     Running            4          2d21h
    ricplt        r4-infrastructure-prometheus-server-5fd7695-lsm8r            1/1     Running            2          2d21h

    deployment-ricplt-e2mgr errors:

    {"crit":"INFO","ts":1616189522367,"id":"E2Manager","msg":"#app.main - Configuration {logging.logLevel: info, http.port: 3800, rmr: { port: 3801, maxMsgSize: 65536}, routingManager.baseUrl: http://service-ricplt-rtmgr-http:3800/ric/v1/handles/, notificationResponseBuffer: 100, bigRedButtonTimeoutSec: 5, maxRnibConnectionAttempts: 3, rnibRetryIntervalMs: 10, keepAliveResponseTimeoutMs: 360000, keepAliveDelayMs: 120000, e2tInstanceDeletionTimeoutMs: 0, globalRicId: { ricId: AACCE, mcc: 310, mnc: 411}, rnibWriter: { stateChangeMessageChannel: RAN_CONNECTION_STATUS_CHANGE, ranManipulationChannel: RAN_MANIPULATION}","mdc":{"time":"2021-03-19 21:32:02.367"}}
    dial tcp 127.0.0.1:6379: connect: connection refused
    {"crit":"ERROR","ts":1616189522421,"id":"E2Manager","msg":"#app.main - Failed setting GENERAL key","mdc":{"time":"2021-03-19 21:32:02.421"}}
    

    deployment-ricplt-a1mediator errors:

    {"ts": 1616189735092, "crit": "DEBUG", "id": "a1.run", "mdc": {}, "msg": "A1Mediator starts"}
    {"ts": 1616189735092, "crit": "DEBUG", "id": "a1.run", "mdc": {}, "msg": "Starting RMR thread with RMR_RTG_SVC 4561, RMR_SEED_RT /opt/route/local.rt"}
    {"ts": 1616189735092, "crit": "DEBUG", "id": "a1.run", "mdc": {}, "msg": "RMR initialization must complete before webserver can start"}
    {"ts": 1616189735092, "crit": "DEBUG", "id": "a1.a1rmr", "mdc": {}, "msg": "Waiting for rmr to initialize.."}
    1616189735 1/RMR [INFO] ric message routing library on SI95 p=4562 mv=3 flg=02 (fd4477a 4.5.2 built: Mar 19 2021)
    {"ts": 1616189735593, "crit": "DEBUG", "id": "a1.a1rmr", "mdc": {}, "msg": "Work loop starting"}{"ts": 1616189735593, "crit": "DEBUG", "id": "a1.run", "mdc": {}, "msg": "RMR initialization complete"}{"ts": 1616189735594, "crit": "DEBUG", "id": "a1.run", "mdc": {}, "msg": "Starting gevent webserver on port 10000"}
    1616189736 1/RMR [INFO] sends: ts=1616189736 src=service-ricplt-a1mediator-rmr.ricplt:4562 target=service-ricxapp-admctrl-rmr.ricxapp:4563 open=0 succ=0 fail=0 (hard=0 soft=0)
    [2021-03-19 21:35:42,651] ERROR in app: Exception on /a1-p/healthcheck [GET]
    Traceback (most recent call last):
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/connection.py", line 1185, in get_connection
        connection = self._available_connections.pop()
    IndexError: pop from empty listDuring handling of the above exception, another exception occurred:Traceback (most recent call last):
      File "/home/a1user/.local/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
        response = self.full_dispatch_request()
      File "/home/a1user/.local/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
        rv = self.handle_user_exception(e)
      File "/home/a1user/.local/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
        reraise(exc_type, exc_value, tb)
      File "/home/a1user/.local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
        raise value
      File "/home/a1user/.local/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
        rv = self.dispatch_request()
      File "/home/a1user/.local/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
        return self.view_functions[rule.endpoint](**req.view_args)
      File "/home/a1user/.local/lib/python3.8/site-packages/connexion/decorators/decorator.py", line 48, in wrapper
        response = function(request)
      File "/home/a1user/.local/lib/python3.8/site-packages/connexion/decorators/uri_parsing.py", line 144, in wrapper
        response = function(request)
      File "/home/a1user/.local/lib/python3.8/site-packages/connexion/decorators/parameter.py", line 121, in wrapper
        return function(**kwargs)
      File "/home/a1user/.local/lib/python3.8/site-packages/a1/controller.py", line 80, in get_healthcheck
        if not data.SDL.healthcheck():
      File "/home/a1user/.local/lib/python3.8/site-packages/ricxappframe/xapp_sdl.py", line 655, in healthcheck
        return self._sdl.is_active()
      File "/home/a1user/.local/lib/python3.8/site-packages/ricsdl/syncstorage.py", line 138, in is_active
        return self.__dbbackend.is_connected()
      File "/home/a1user/.local/lib/python3.8/site-packages/ricsdl/backend/redis.py", line 181, in is_connected
        return self.__redis.ping()
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/client.py", line 1378, in ping
        return self.execute_command('PING')
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/client.py", line 898, in execute_command
        conn = self.connection or pool.get_connection(command_name, **options)
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/connection.py", line 1187, in get_connection
        connection = self.make_connection()
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/connection.py", line 1227, in make_connection
        return self.connection_class(**self.connection_kwargs)
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/connection.py", line 509, in __init__
        self.port = int(port)
    TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
    ::ffff:10.244.0.1 - - [2021-03-19 21:35:42] "GET /a1-p/healthcheck HTTP/1.1" 500 387 0.005291
    [2021-03-19 21:35:43,474] ERROR in app: Exception on /a1-p/healthcheck [GET]
    Traceback (most recent call last):
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/connection.py", line 1185, in get_connection
        connection = self._available_connections.pop()
    IndexError: pop from empty list
    


    Looks like redis is missing, which service starts or runs this?

    1. dbaas is failing; these are the logs I see. I built the dbaas docker image as follows within the ric-plt-dbaas git repo:

      docker build --network=host -f docker/Dockerfile.redis -t ric-plt-dbaas:0.2.3 . 

      logs:

      284:C 23 Mar 2021 20:02:33.612 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
      284:C 23 Mar 2021 20:02:33.612 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=284, just started
      284:C 23 Mar 2021 20:02:33.612 # Configuration loaded
      284:M 23 Mar 2021 20:02:33.613 * Running mode=standalone, port=6379.
      284:M 23 Mar 2021 20:02:33.613 # Server initialized
      284:M 23 Mar 2021 20:02:33.613 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
      284:M 23 Mar 2021 20:02:33.613 * Module 'exstrings' loaded from /usr/local/libexec/redismodule/libredismodule.so
      284:M 23 Mar 2021 20:02:33.614 * Ready to accept connections
      284:signal-handler (1616529780) Received SIGTERM scheduling shutdown...
      284:M 23 Mar 2021 20:03:00.374 # User requested shutdown...
      284:M 23 Mar 2021 20:03:00.374 # Redis is now ready to exit, bye bye...


      I followed the ci-management yaml file for build arguments.

      Any ideas?

      1. ric-plt-dbaas:0.2.3 contains busybox with the timeout command. The -t option is not available in the arm version. Removing that option in the probes fixes this issue.

  18. Anonymous

    Hello Everyone, 

    I am trying to install the cherry release on my VM. I am able to get all the containers running except for alarm manager.

    This is the error I get after running ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml

    Deleting outdated charts
    Error: render error in "alarmmanager/templates/serviceaccount.yaml": template: alarmmanager/templates/serviceaccount.yaml:20:11: executing "alarmmanager/templates/serviceaccount.yaml" at <include "common.serv...>: error calling include: template: no template "common.serviceaccountname.alarmmanager" associated with template "gotpl"


    When I run kubectl get pods --all-namespaces I get the following output


    NAMESPACE     NAME                                         READY       STATUS     RESTARTS     AGE
    kube-system     coredns-5644d7b6d9-4g4hg       1/1          Running       1               7h49m
    kube-system     coredns-5644d7b6d9-kcp7l         1/1          Running       1               7h49m
    kube-system     etcd-machine2                             1/1           Running       1              7h48m
    kube-system     kube-apiserver-machine2            1/1           Running       1              7h48m
    kube-system     kube-controller-manager-machine2 1/1     Running       1               7h49m
    kube-system     kube-flannel-ds-jtpk4                  1/1           Running       1               7h49m
    kube-system     kube-proxy-lk78t                         1/1           Running       1                7h49m
    kube-system     kube-scheduler-machine2           1/1           Running       1                7h48m
    kube-system     tiller-deploy-68bf6dff8f-n945d    1/1          Running        1                7h48m
    ricinfra               deployment-tiller-ricxapp-d4f98ff65-5bwxw     1/1     Running        0 20m
    ricinfra               tiller-secret-generator-nvpnk      0/1         Completed                       0 20m
    ricplt                 deployment-ricplt-a1mediator-66fcf76c66-wc5l8 1/1 Running          0 18m
    ricplt                 deployment-ricplt-appmgr-6fd6664755-m5tn9    1/1 Running          0 19m
    ricplt                 deployment-ricplt-e2mgr-6dfb6c4988-gwslx        1/1 Running          0 19m
    ricplt                 deployment-ricplt-e2term-alpha-7c8dc7bd94-5jwld 1/1 Running      0 18m
    ricplt                 deployment-ricplt-jaegeradapter-76ddbf9c9-lpg7b 1/1 Running       0 17m
    ricplt                deployment-ricplt-o1mediator-d8b9fcdf-sf5hf          1/1 Running       0 17m
    ricplt                deployment-ricplt-rtmgr-6d559897d8-txr4s               1/1 Running      2 19m
    ricplt               deployment-ricplt-submgr-5c5fb9f65f-x9z5k             1/1  Running      0 18m
    ricplt               deployment-ricplt-vespamgr-7458d9b5d-tp5xb        1/1  Running      0 18m
    ricplt               deployment-ricplt-xapp-onboarder-5958856fc8-46mfb 2/2 Running   0 19m
    ricplt               r4-infrastructure-kong-6c7f6db759-lgjql                    2/2   Running      1 20m
    ricplt               r4-infrastructure-prometheus-alertmanager-75dff54776-z9ztf 2/2  Running        0 20m
    ricplt               r4-infrastructure-prometheus-server-5fd7695-4cg2m 1/1 Running 0 20m

     There are only 14 ricplt pods running


    My guess is that since the tiller-secret-generator-nvpnk is not "Running", the alarmadapter container is not created.


    Any ideas? I am not sure how to proceed next

    1. Anonymous

      Hi, I am stuck at the same point. Were you able to fix this issue?

      1. Anonymous

        I also got the same error and am stuck at the same point.



        NAMESPACE NAME READY STATUS RESTARTS AGE

        kube-system coredns-5644d7b6d9-hp7pl 1/1 Running 1 88m

        kube-system coredns-5644d7b6d9-vbsf4 1/1 Running 1 88m

        kube-system etcd-rkota-nrtric 1/1 Running 1 87m

        kube-system kube-apiserver-rkota-nrtric 1/1 Running 1 87m

        kube-system kube-controller-manager-rkota-nrtric 1/1 Running 3 88m

        kube-system kube-flannel-ds-kj7fd 1/1 Running 1 88m

        kube-system kube-proxy-dwfmq 1/1 Running 1 88m

        kube-system kube-scheduler-rkota-nrtric 1/1 Running 2 87m

        kube-system tiller-deploy-68bf6dff8f-clzp6 1/1 Running 1 87m

        ricinfra deployment-tiller-ricxapp-d4f98ff65-7dwbf 1/1 Running 0 23m

        ricinfra tiller-secret-generator-dbrts 0/1 Completed 0 23m

        ricplt deployment-ricplt-a1mediator-66fcf76c66-7prjf 1/1 Running 0 20m

        ricplt deployment-ricplt-appmgr-6fd6664755-vfswc 1/1 Running 0 21m

        ricplt deployment-ricplt-e2mgr-6dfb6c4988-5j5mr 1/1 Running 0 20m

        ricplt deployment-ricplt-e2term-alpha-64965c46c6-lhgvp 1/1 Running 0 20m

        ricplt deployment-ricplt-jaegeradapter-76ddbf9c9-vzkbb 1/1 Running 0 19m

        ricplt deployment-ricplt-o1mediator-d8b9fcdf-8gcgp 1/1 Running 0 19m

        ricplt deployment-ricplt-rtmgr-6d559897d8-ts5xh 1/1 Running 6 20m

        ricplt deployment-ricplt-submgr-65bcd95469-v7bc2 1/1 Running 0 20m

        ricplt deployment-ricplt-vespamgr-7458d9b5d-dbqzv 1/1 Running 0 19m

        ricplt deployment-ricplt-xapp-onboarder-5958856fc8-jzp4g 2/2 Running 0 22m

        ricplt r4-infrastructure-kong-6c7f6db759-4hbgh 2/2 Running 1 23m

        ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-vn6d8 2/2 Running 0 23m

        ricplt r4-infrastructure-prometheus-server-5fd7695-gvtm5 1/1 Running 0 23m

        ricplt statefulset-ricplt-dbaas-server-0 1/1 Running 0 22m


    2. Hi, I also got this problem. I tried using "git clone http://gerrit.o-ran-sc.org/r/it/dep" to install the Near Realtime RIC.

      That finished successfully, but I don't know how to resolve the problem when using "git clone http://gerrit.o-ran-sc.org/r/it/dep -b cherry" to install.

      Have you resolved the problem?

  19. Anonymous

    Hi all,

    My e2term repeatedly restarts. kubectl describe says the liveness probe has failed and I can verify that by logging into the pod. Below is the kubectl describe and kubectl logs output. I am using the latest source from http://gerrit.o-ran-sc.org/r/it/dep with the included recipe yaml.

    Any ideas where I can look to debug this?

    Thanks.


    root@e2term-alpha:/opt/e2# ./rmr_probe -h 0.0.0.0:38000 -t 10
    1617905013 88/RMR [INFO] ric message routing library on SI95/g mv=3 flg=01 (84423e6 4.4.6 built: Dec 4 2020)
    [FAIL] too few messages recevied during timeout window: wanted 1 got 0


    $ sudo kubectl --namespace=ricplt describe pod/deployment-ricplt-e2term-alpha-57ff7bf4f6-jttpk
    Name: deployment-ricplt-e2term-alpha-57ff7bf4f6-jttpk
    Namespace: ricplt
    Priority: 0
    Node: vm/192.168.122.166
    Start Time: Thu, 08 Apr 2021 14:00:26 -0400
    Labels: app=ricplt-e2term-alpha
    pod-template-hash=57ff7bf4f6
    release=r4-e2term
    Annotations: cni.projectcalico.org/podIP: 10.1.238.220/32
    cni.projectcalico.org/podIPs: 10.1.238.220/32
    Status: Running
    IP: 10.1.238.220
    IPs:
    IP: 10.1.238.220
    Controlled By: ReplicaSet/deployment-ricplt-e2term-alpha-57ff7bf4f6
    Containers:
    container-ricplt-e2term:
    Container ID: containerd://2bb8ecc931f0d972edea35e8a50af818144b937f13f74554917c0bb91ca7499a
    Image: nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2:5.4.8
    Image ID: nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2@sha256:ed8e1ce6214d039b24c3b7426756b6fc947e2c4e99d384c5de1778ae30188251
    Ports: 4561/TCP, 38000/TCP, 36422/SCTP, 8088/TCP
    Host Ports: 0/TCP, 0/TCP, 0/SCTP, 0/TCP
    State: Running
    Started: Thu, 08 Apr 2021 14:05:07 -0400
    Last State: Terminated
    Reason: Error
    Exit Code: 137
    Started: Thu, 08 Apr 2021 14:03:57 -0400
    Finished: Thu, 08 Apr 2021 14:05:07 -0400
    Ready: False
    Restart Count: 4
    Liveness: exec [/bin/sh -c /opt/e2/rmr_probe -h 0.0.0.0:38000] delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness: exec [/bin/sh -c /opt/e2/rmr_probe -h 0.0.0.0:38000] delay=15s timeout=1s period=10s #success=1 #failure=3
    Environment Variables from:
    configmap-ricplt-e2term-env-alpha ConfigMap Optional: false
    Environment: <none>
    Mounts:
    /data/outgoing/ from vol-shared (rw)
    /opt/e2/router.txt from local-router-file (rw,path="router.txt")
    /tmp/rmr_verbose from local-router-file (rw,path="rmr_verbose")
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-24dss (ro)
    Conditions:
    Type Status
    Initialized True
    Ready False
    ContainersReady False
    PodScheduled True
    Volumes:
    local-router-file:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: configmap-ricplt-e2term-router-configmap
    Optional: false
    vol-shared:
    Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: pvc-ricplt-e2term-alpha
    ReadOnly: false
    default-token-24dss:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-24dss
    Optional: false
    QoS Class: BestEffort
    Node-Selectors: <none>
    Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
    node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Normal Scheduled 5m7s default-scheduler Successfully assigned ricplt/deployment-ricplt-e2term-alpha-57ff7bf4f6-jttpk to vm
    Normal Pulled 3m56s (x2 over 5m6s) kubelet Container image "nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2:5.4.8" already present on machine
    Normal Created 3m56s (x2 over 5m6s) kubelet Created container container-ricplt-e2term
    Normal Started 3m56s (x2 over 5m6s) kubelet Started container container-ricplt-e2term
    Normal Killing 3m16s (x2 over 4m26s) kubelet Container container-ricplt-e2term failed liveness probe, will be restarted
    Warning Unhealthy 2m55s (x11 over 4m45s) kubelet Readiness probe failed:
    Warning Unhealthy 6s (x13 over 4m46s) kubelet Liveness probe failed:



    $ sudo kubectl --namespace=ricplt logs pod/deployment-ricplt-e2term-alpha-57ff7bf4f6-jttpk
    environments service name is SERVICE_RICPLT_E2TERM_RMR_ALPHA_SERVICE_HOST
    service ip is 10.152.183.120
    nano=38000
    loglevel=error
    volume=log
    #The key name of the environment holds the local ip address
    #ip address of the E2T in the RMR
    local-ip=10.152.183.120
    #prometheus mode can be pull or push
    prometheusMode=pull
    #timeout can be from 5 seconds to 300 seconds default is 10
    prometheusPushTimeOut=10
    prometheusPushAddr=127.0.0.1:7676
    prometheusPort=8088
    #trace is start, stop
    trace=stop
    external-fqdn=10.152.183.120
    #put pointer to the key that point to pod name
    pod_name=E2TERM_POD_NAME
    sctp-port=36422
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = nano=38000 "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = nano value = 38000"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = loglevel=error "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = loglevel value = error"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = volume=log "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = volume value = log"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = local-ip=10.152.183.120 "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = local-ip value = 10.152.183.120"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusMode=pull "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusMode value = pull"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPushTimeOut=10 "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPushTimeOut value = 10"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPushAddr=127.0.0.1:7676 "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPushAddr value = 127.0.0.1:7676"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPort=8088 "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPort value = 8088"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = trace=stop "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = trace value = stop"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = external-fqdn=10.152.183.120 "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = external-fqdn value = 10.152.183.120"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = pod_name=E2TERM_POD_NAME "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = pod_name value = E2TERM_POD_NAME"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = sctp-port=36422 "}
    {"ts":1617905177451,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = sctp-port value = 36422"}
    1617905177 23/RMR [INFO] ric message routing library on SI95/g mv=3 flg=00 (84423e6 4.4.6 built: Dec 4 2020)
    1617905178 23/RMR [INFO] sends: ts=1617905178 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-admission-ctrl-xapp-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905178 23/RMR [INFO] sends: ts=1617905178 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-rsm-rmr.ricplt:4801 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905178 23/RMR [INFO] sends: ts=1617905178 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2mgr-rmr.ricplt:3801 open=1 succ=1 fail=0 (hard=0 soft=0)
    1617905178 23/RMR [INFO] sends: ts=1617905178 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricxapp-ueec-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905178 23/RMR [INFO] sends: ts=1617905178 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-a1mediator-rmr.ricplt:4562 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-admission-ctrl-xapp-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-rsm-rmr.ricplt:4801 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2term-rmr-alpha.ricplt:43246 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2term-rmr-alpha.ricplt:43606 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2mgr-rmr.ricplt:3801 open=1 succ=1 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricxapp-ueec-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-a1mediator-rmr.ricplt:4562 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2term-rmr-alpha.ricplt:43705 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-admission-ctrl-xapp-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-rsm-rmr.ricplt:4801 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2term-rmr-alpha.ricplt:43246 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2term-rmr-alpha.ricplt:43606 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2mgr-rmr.ricplt:3801 open=1 succ=1 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricxapp-ueec-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-a1mediator-rmr.ricplt:4562 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2term-rmr-alpha.ricplt:43705 open=0 succ=0 fail=0 (hard=0 soft=0)

      1. I had the same issue and I was able to mitigate it by changing the readiness/liveness probe timeouts and also the RMR probe timeout

        The readiness/liveness probe runs the rmr_probe command, as you can see. First of all, the ricxapp-ueec component is part of Nokia (somebody correct me if I am wrong) and is not available in the Dawn release. One thing I did was remove those routes from the mapping so the probe is not checking for them:

        kubectl get cm configmap-ricplt-e2term-router-configmap -n ricplt -o yaml


        In that config map you also have another parameter, rmr_verbose, with which you can activate more debug on the RMR interface.

        To troubleshoot e2term, you need to edit the deployment:

        kubectl edit deployment deployment-ricplt-e2term-alpha -n ricplt
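
        If you prefer a non-interactive edit, a JSON patch along these lines also drops both probes (just a sketch, assuming container-ricplt-e2term is the first container, index 0, in the pod spec):

        kubectl -n ricplt patch deployment deployment-ricplt-e2term-alpha --type=json \
          -p='[{"op":"remove","path":"/spec/template/spec/containers/0/livenessProbe"},
               {"op":"remove","path":"/spec/template/spec/containers/0/readinessProbe"}]'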


        Remove the readiness and liveness probes; then, once the pod is up and running, exec into it and run the probe manually:

        kubectl exec -ti deployment-ricplt-e2term-alpha-9c87d678-5s9bt -n ricplt /bin/bash
        kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
        root@e2term-alpha:/opt/e2# ls
        config dockerRouter.txt e2 log rmr_probe router.txt router.txt.stash router.txt.stash.inc startup.sh
        root@e2term-alpha:/opt/e2# ./rmr_probe -h 0.0.0.0:38000 -v
        [INFO] listen port: 43426; sending 1 messages
        1631885462176 5507/RMR [INFO] ric message routing library on SI95 p=43426 mv=3 flg=01 id=a (11838bc 4.7.4 built: Apr 27 2021)
        [INFO] RMR initialised
        [INFO] starting session with 0.0.0.0:38000, starting to send
        [INFO] connected to 0.0.0.0:38000, sending 1 pings
        [INFO] sending message: health check request prev=0 <eom>
        [FAIL] too few messages recevied during timeout window: wanted 1 got 0


        The command above failed, but then I ran it again with a longer timeout:


        root@e2term-alpha:/opt/e2# ./rmr_probe -h 0.0.0.0:38000 -v
        [INFO] listen port: 43716; sending 1 messages
        1631885502853 5550/RMR [INFO] ric message routing library on SI95 p=43716 mv=3 flg=01 id=a (11838bc 4.7.4 built: Apr 27 2021)
        [INFO] RMR initialised
        [INFO] starting session with 0.0.0.0:38000, starting to send
        [INFO] connected to 0.0.0.0:38000, sending 1 pings
        [INFO] sending message: health check request prev=0 <eom>
        [FAIL] too few messages recevied during timeout window: wanted 1 got 0
        root@e2term-alpha:/opt/e2# ./rmr_probe -h 0.0.0.0:38000 -v -t 30
        [INFO] listen port: 43937; sending 1 messages
        1631885508786 5560/RMR [INFO] ric message routing library on SI95 p=43937 mv=3 flg=01 id=a (11838bc 4.7.4 built: Apr 27 2021)
        [INFO] RMR initialised
        [INFO] starting session with 0.0.0.0:38000, starting to send
        [INFO] connected to 0.0.0.0:38000, sending 1 pings
        [INFO] sending message: health check request prev=0 <eom>
        [INFO] got: (OK) state=0
        [INFO] good response received; elapsed time = 299968 mu-sec


        As you can see it's successful. 

        In the end I modified both liveness and readiness probe as such:


        livenessProbe:
          exec:
            command:
              - /bin/sh
              - -c
              - /opt/e2/rmr_probe -h 0.0.0.0:38000 -t 30
          failureThreshold: 5
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30


        readinessProbe:
          exec:
            command:
              - /bin/sh
              - -c
              - /opt/e2/rmr_probe -h 0.0.0.0:38000 -t 30
          failureThreshold: 3
          initialDelaySeconds: 20
          periodSeconds: 60
          successThreshold: 1
          timeoutSeconds: 30


        Adjust it based on your needs; for some reason I saw failures even with "-t 30". Hope it helps.

        1. Hi Federico,

          thank you very much.


          it worked for me in the Dawn release, but as you said, sometimes the readiness and liveness probes still fail anyway.


          Maybe I will try to increase the failureThreshold for both probes, but for the moment I commented out the ricxapp-ueec routes in the config map as you suggested and edited the deployment as follows:


          livenessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - /opt/e2/rmr_probe -h 0.0.0.0:38000 -t 30
            failureThreshold: 3
            initialDelaySeconds: 120
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 60

          ................................

          readinessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - /opt/e2/rmr_probe -h 0.0.0.0:38000 -t 30
            failureThreshold: 3
            initialDelaySeconds: 120
            periodSeconds: 60
            successThreshold: 1
            timeoutSeconds: 60

          _______________________________________________________


          I still have problems while deploying xApps with the Dawn release; at least, the curl command tells me this:
          curl --location --request POST "http://$(hostname):32080/appmgr/ric/v1/xapps" --header 'Content-Type: application/json' --data-raw '{"xappName": "hwxapp"}' -v


          Note: Unnecessary use of -X or --request, POST is already inferred.
          * Trying 127.0.1.1...
          * TCP_NODELAY set
          * Connected to HOSTNAME (127.0.1.1) port 32080 (#0)
          > POST /appmgr/ric/v1/xapps HTTP/1.1
          > Host: HOSTNAME:32080
          > User-Agent: curl/7.58.0
          > Accept: */*
          > Content-Type: application/json
          > Content-Length: 22
          >
          * upload completely sent off: 22 out of 22 bytes
          < HTTP/1.1 501 Not Implemented
          < Content-Type: application/json
          < Content-Length: 56
          < Connection: keep-alive
          < Date: Fri, 17 Sep 2021 16:22:50 GMT
          < X-Kong-Upstream-Latency: 2
          < X-Kong-Proxy-Latency: 3
          < Via: kong/1.4.3
          <
          "operation XappDeployXapp has not yet been implemented"
          * Connection #0 to host HOSTNAME left intact


          Did you manage to install xApps, or is this feature still unavailable in the Dawn release?


          Thank you again

          1. Glad it worked for you. It still doesn't explain why the probe fails or takes so much time that it hits the timeouts; I've been trying to troubleshoot but haven't got to the bottom of it yet.

            The xApp deployment procedure changed in the Dawn release. Follow the instructions here: https://docs.o-ran-sc.org/projects/o-ran-sc-it-dep/en/latest/installation-guides.html#ric-applications . Use the dms_cli and it will work; I was able to deploy xApps (a rough sketch of the flow is below).
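
            For reference, the basic dms_cli flow from that guide looks roughly like this (a sketch; the exact arguments and flag names may differ per release, so double-check the linked docs):

            dms_cli onboard CONFIG_FILE_PATH SCHEMA_FILE_PATH     # onboard the xApp descriptor (config-file.json plus its schema)
            dms_cli get_charts_list                               # confirm the chart was onboarded
            dms_cli install --xapp_chart_name=hwxapp --version=1.0.0 --namespace=ricxapp   # flag names assumed, see the guide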

            1. Unfortunately I am also stuck troubleshooting the probe failures; I don't have much experience with k8s.

              And thank you again. I was probably missing a step; now I've managed to deploy xApps through dms_cli.

              Have a good time!


  20. Hello, I have installed RIC onto my Linux machine, and I am about to put the SMO on a VM.  I've noticed people mention that they have two VMs (one for RIC and one for SMO), and I was wondering if there is a particular reason for this?  Thanks.

    1. I have the exact same question...

    2. Hello again Mooga,

      One Question: Are you running the bronze release or the cherry release?

      Thank You.

      1. Hello, I'm running the Bronze release.

        1. Thank you for your response!

          Since you have successfully installed RIC, in step 4 when you run this command:

          kubectl get pods -n ricplt

          Did you get 15 pods like me, or 16 pods as in the RIC guidelines?

          Because I am stuck in Step 5... The following command doesn't produce any result, and I don't know if these issues could be related...

          curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download"  --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"

          Did you follow all the guidelines with no problems?

          Thank you Mooga.

          1. I have 16 pods running, but I'm not sure if these problems are related or not.  Try using the address of the Kong controller rather than $(hostname).  This can be found using:

            kubectl get service -A | grep 32080
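
            For example, something along these lines (a sketch; the service name and port are assumptions, take the actual values from the grep output above):

            KONG_IP=$(kubectl get svc -n ricplt r4-infrastructure-kong-proxy -o jsonpath='{.spec.clusterIP}')
            curl --location --request POST "http://${KONG_IP}:32080/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"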

  21. Anonymous

    Hello all, 

    I am setting up the Cherry release for the first time, working off the Cherry branch.  I had the Helm repository error when trying Step 4 and got past that by adding the stable charts repository. I retried the deploy-ric-platform command and am getting this error at the end:

    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    Error: render error in "alarmmanager/templates/serviceaccount.yaml": template: alarmmanager/templates/serviceaccount.yaml:20:11: executing "alarmmanager/templates/serviceaccount.yaml" at <include "common.serv...>: error calling include: template: no template "common.serviceaccountname.alarmmanager" associated with template "gotpl"

    Could anyone help with this please?

    1. I have looked into their repository.

      The problem is that in the cherry branch the alarmmanager goes by a different name, "alarmadapter".

      So it can't find the name "alarmmanager" and can't pull it correctly.

      In Step 1, you should pull the master branch directly; they already solved the problem in the master branch.

      But they didn't merge it into the cherry branch.

      Or, another solution is to change "alarmadapter" to "alarmmanager" in ric-common/Common-Template/helm/ric-common/templates manually (see the grep sketch below for locating the files).
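
      For example, to locate the template files that still use the old name (a sketch, run from the cloned dep directory; back the files up before editing):

      grep -rl "alarmadapter" ric-common/Common-Template/helm/ric-common/templates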

    2. There was an update in the discuss mailing list that addressed this issue. After cloning the cherry branch and updating the submodules, one can run this command to get the alarmmanager pod up and running:

      git fetch "https://gerrit.o-ran-sc.org/r/it/dep" refs/changes/68/5968/1 && git checkout FETCH_HEAD

      This was a patch so hopefully it still works.

    3. Anonymous

      Same problem,

      Can anyone help?

  22. Hi everyone,

    I know this question has already been posted here.

    When i run the following command in Step 4:

    kubectl get pods -n ricplt


    Produces this result:


    root@rafaelmatos-VirtualBox:~/ric/dep# kubectl get pods -n ricplt

    NAME READY STATUS RESTARTS AGE

    deployment-ricplt-a1mediator-66fcf76c66-mlnb5 1/1 Running 2 3d2h

    deployment-ricplt-alarmadapter-64d559f769-5dgxh 1/1 Running 2 3d2h

    deployment-ricplt-appmgr-6fd6664755-zcdmr 1/1 Running 2 3d2h

    deployment-ricplt-e2mgr-8479fb5ff8-drvdz 1/1 Running 2 3d2h

    deployment-ricplt-e2term-alpha-bcb457df4-6tjlc 1/1 Running 3 3d2h

    deployment-ricplt-jaegeradapter-84558d855b-dj97n 1/1 Running 2 3d2h

    deployment-ricplt-o1mediator-d8b9fcdf-g7f2m 1/1 Running 2 3d2h

    deployment-ricplt-rtmgr-9d4847788-xwdcz 1/1 Running 4 3d2h

    deployment-ricplt-submgr-65dc9f4995-5vzwc 1/1 Running 2 3d2h

    deployment-ricplt-vespamgr-7458d9b5d-ppmrs 1/1 Running 2 3d2h

    deployment-ricplt-xapp-onboarder-546b86b5c4-52jjl 2/2 Running 4 3d2h

    r4-infrastructure-kong-6c7f6db759-8snpr 2/2 Running 5 3d2h

    r4-infrastructure-prometheus-alertmanager-75dff54776-pjr7j 2/2 Running 4 3d2h

    r4-infrastructure-prometheus-server-5fd7695-5k4hw 1/1 Running 2 3d2h

    statefulset-ricplt-dbaas-server-0 1/1 Running 2 3d2h

    All the pods are in Running status, but I have only 15 pods and not 16.

    Is this a problem? I am doing the configuration on the Bronze release.

    Rafael Matos



  23. Hi everyone,

    I followed this article step by step, but I can't get Step 4 to work correctly. As you can see from the results of kubectl get pods -n ricplt, one of the pods keeps crashing:

    NAME                                                         READY   STATUS             RESTARTS   AGE
    deployment-ricplt-a1mediator-66fcf76c66-lbzg9                1/1     Running            0          10m
    deployment-ricplt-alarmadapter-64d559f769-s576w              1/1     Running            0          9m42s
    deployment-ricplt-appmgr-6fd6664755-d99vn                    1/1     Running            0          11m
    deployment-ricplt-e2mgr-8479fb5ff8-b6gb9                     1/1     Running            0          11m
    deployment-ricplt-e2term-alpha-bcb457df4-llzd7               1/1     Running            0          10m
    deployment-ricplt-jaegeradapter-84558d855b-rpx4m             1/1     Running            0          10m
    deployment-ricplt-o1mediator-d8b9fcdf-hqkcc                  1/1     Running            0          9m54s
    deployment-ricplt-rtmgr-9d4847788-d756d                      1/1     Running            6          11m
    deployment-ricplt-submgr-65dc9f4995-zwndb                    1/1     Running            0          10m
    deployment-ricplt-vespamgr-7458d9b5d-99bwf                   1/1     Running            0          10m
    deployment-ricplt-xapp-onboarder-546b86b5c4-6qj82            2/2     Running            0          11m
    r4-infrastructure-kong-6c7f6db759-vw7pr                      1/2     CrashLoopBackOff   6          12m
    r4-infrastructure-prometheus-alertmanager-75dff54776-d2hcs   2/2     Running            0          12m
    r4-infrastructure-prometheus-server-5fd7695-46g4f            1/1     Running            0          12m
    statefulset-ricplt-dbaas-server-0                            1/1     Running            0          11m


    Step 5 doesn't seem to work while I have this problem. I have been looking for a solution on the internet but couldn't find any.

    I'll appreciate any help.

    1. Hello xlaxc,

      usually port 32080 is occupied by some kube service. You can check this by:

      $ kubectl get service -A | grep 32080

      # A workaround is to use port forwarding:

      $ sudo kubectl port-forward r4-infrastructure-kong-6c7f6db759-vw7pr 32088:32080 -n ricplt &

      # Remember to change "$(hostname):32080" to "localhost:32088" in subsequent commands. Also make sure all pods are running.


      1. Hello Litombe,

        Thanks for your help, I really appreciate it.

        The port forwarding solution doesn't solve the crashing problem for me (I am still having that CrashLoopBackOff status).

        I tried to run the command from step 5 anyway by executing:
        curl --location --request POST "http://localhost:32088/onboard/api/v1/onboard/download--header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"

        But I am getting this result:
        {"message":"no Route matched with those values"}

        1. Hello Xlaxc,

          I thought you were facing a problem with onboarding of the apps. My earlier solution was with regards to that.

          I recently faced the 'CrashLoopBackOff' problem in a different k8s cluster and this fixed it.

            rm -rf ~/.helm  
            helm init --stable-repo-url=https://charts.helm.sh/stable --client-only
            helm serve &
            helm repo add local http://127.0.0.1:8879

          #try this before step 4. it might fix your problems.
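
          A quick sanity check afterwards (Helm 2 syntax):

          helm repo list          # "local" should point at http://127.0.0.1:8879
          helm search local/      # charts packaged into the local repo (e.g. ric-common) will be listed here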

          1. Hello Litombe,

            Thanks for your reply.

            Unfortunately, that didn't solve my problem. I am starting to think that there is some network configuration in my VM that I am missing.

        2. Anonymous

          Hello xlaxc,

          Did you resolve this? I'm also facing the same issue.

    2. Hi, I think you have installed SMO on the same machine.

      SMO and RIC cannot be installed on the same machine.

      You can see this in the Getting Started page's "Figure 1: Demo deployment architecture" picture.

      There are two VMs: one for SMO and one for RIC.

      1. Hi,
        Thanks for your reply,

        I didn't install SMO at all. For now, I only tried to install RIC, but I can't get it to work.


  24. mhz

    Hello everyone,

    I am stuck in Step 3. I ran the following command, ./k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh, and it is pending forever, as shown below.

    I would appreciate some help. Thank you very much

    --------------------

    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    + NUMPODS=0
    + echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
    > waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
    + '[' 0 -lt 8 ']'
    + sleep 5
    ++ eval 'kubectl get pods -n kube-system  | grep "Running" | wc -l'
    +++ grep Running
    +++ wc -l
    +++ kubectl get pods -n kube-system
    W0601 15:30:40.385897   24579 loader.go:221] Config not found: /root/.kube/config
    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    + NUMPODS=0
    + echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
    > waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
    + '[' 0 -lt 8 ']'
    + sleep 5

    1. I had the same problem. Just check if there is any fatal error. In my case I had allocated only 1 CPU, and that was the problem.
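
      If you suspect resources, a quick check before re-running the script (the one-node RIC setup needs several vCPUs plus a reasonable amount of RAM and disk; see the VM minimum requirements at the top of this page):

      nproc        # number of CPUs the VM sees
      free -h      # available memory
      df -h /      # free disk space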

  25. Anonymous

    Even I am having the same problem.


    1. mhz

      I tried to install Docker first and then follow the instructions and it worked.

      1. Anonymous

        I get the following error in the same step 3:


        Failed to pull image "quay.io/coreos/flannel:v0.14.0": rpc response from daemon: Get https://quay.io/v2/: Forbidden


        It only runs 5 of the 8 pods, so I get stuck there. Does anyone know how to solve it?


    2. I was also stuck on the same issue and found the error below:

      Setting up docker.io (20.10.2-0ubuntu1~18.04.2) ...
      Job for docker.service failed because the control process exited with error code.
      See "systemctl status docker.service" and "journalctl -xe" for details.
      invoke-rc.d: initscript docker, action "start" failed.
      ● docker.service - Docker Application Container Engine
      Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
      Active: activating (auto-restart) (Result: exit-code) since Sat 2021-06-12 13:42:16 UTC; 9ms ago
      Docs: https://docs.docker.com
      Process: 13474 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
      Main PID: 13474 (code=exited, status=1/FAILURE)

      Then I removed Docker as follows:

      • sudo apt remove docker.io
      • sudo apt purge docker.io
      • in case /var/lib/docker or /var/lib/docker.migating still exist, remove them


      After these steps I tried again and it was successful.

  26. Hello everyone,

    I got an error:

    ricxapp-trafficxapp-58d4946955-xrcll 0/1 ErrImagePull


    This ricxapp-trafficxapp keeps failing the image pull. Did anyone face the same issue?


    Br,

    Junior

  27. Hi everyone,

    I've installed the cherry release platform and have everything running well except onboarding the xApps.  I'm currently stuck on onboarding xApps using dms_cli.  The error that I am getting for the hw app and even the dummy xapp in the test folder is: "xApp Chart Name not found".  The error that I receive with the config file in the guide folder is: "Input validation failed".

    Is anyone else having these difficulties?  I've performed the dms_cli onboarding step on two different computers, but I receive the same error on both.  I've also used JSON schema validators to make sure the xApp's config file is validated by the schema file, and it passes.

    The steps that I've performed are on this page: Installation Guides — it-dep master documentation (o-ran-sc.org)

    Any thoughts?

    Thank you!

  28. Anonymous

    Hello everyone,
    I'm having an issue when trying to run the a1-ric.sh script from the demo folder:

    output1
    {
        "error_message": "'xapp_name' is a required property",
        "error_source": "config-file.json",
        "status": "Input payload validation failed"
    }

    but qpdriver json template is fully valid:

    qpdriver.json config file
    {
          "xapp_name": "qpdriver",
          "version": "1.1.0",
          "containers": [
              {
                  "name": "qpdriver",
                  "image": {
                      "registry": "nexus3.o-ran-sc.org:10002",
                      "name": "o-ran-sc/ric-app-qp-driver",
                      "tag": "1.0.9"
                  }
              }
          ],
          "messaging": {
              "ports": [
                  {
                    "name": "rmr-data",
                    "container": "qpdriver",
                    "port": 4560,
                    "rxMessages": [
                      "TS_UE_LIST"
                    ],
                    "txMessages": [ "TS_QOE_PRED_REQ", "RIC_ALARM" ],
                    "policies": [20008],
                    "description": "rmr receive data port for qpdriver"
                  },
                  {
                    "name": "rmr-route",
                    "container": "qpdriver",
                    "port": 4561,
                    "description": "rmr route port for qpdriver"
                  }
              ]
          },
          "controls": {
              "example_int": 10000,
              "example_str": "value"
          },
          "rmr": {
              "protPort": "tcp:4560",
              "maxSize": 2072,
              "numWorkers": 1,
              "rxMessages": [
                "TS_UE_LIST"
              ],
              "txMessages": [
                "TS_QOE_PRED_REQ",
                "RIC_ALARM"
              ],
              "policies": [20008]
          }
    }

    What could be the reason for such output?

  29. Hello Everyone!


    I followed this article step by step, but I can't get Step 4 to work correctly. As you can see from the results of kubectl get pods -n ricplt, one of the pods stays in ImagePullBackOff and soon goes to ErrImagePull.


    deployment-ricplt-a1mediator-74b45794bb-jvnbq 1/1 Running 0 3m53s

    deployment-ricplt-alarmadapter-64d559f769-szmc5 1/1 Running 0 2m41s

    deployment-ricplt-appmgr-6fd6664755-6lbsc 1/1 Running 0 4m51s

    deployment-ricplt-e2mgr-6df485fcc-zhxmg 1/1 Running 0 4m22s

    deployment-ricplt-e2term-alpha-76848bd77c-z7rdb 1/1 Running 0 4m8s

    deployment-ricplt-jaegeradapter-84558d855b-ft528 1/1 Running 0 3m9s

    deployment-ricplt-o1mediator-d8b9fcdf-zjgwf 1/1 Running 0 2m55s

    deployment-ricplt-rtmgr-57999f4bc4-qzkhk 1/1 Running 0 4m37s

    deployment-ricplt-submgr-b85bd46c6-4jql4 1/1 Running 0 3m38s

    deployment-ricplt-vespamgr-77cf6c6d57-vbtz5 1/1 Running 0 3m24s

    deployment-ricplt-xapp-onboarder-79866d9d9c-fnbw2 2/2 Running 0 5m6s

    r4-infrastructure-kong-6c7f6db759-rcfft 1/2 ImagePullBackOff 0 5m35s

    r4-infrastructure-prometheus-alertmanager-75dff54776-4rhqm 2/2 Running 0 5m35s

    r4-infrastructure-prometheus-server-5fd7695-bxkzb 1/1 Running 0 5m35s

    statefulset-ricplt-dbaas-server-0 1/1 Running 0 5m20s

    1. Try this

      "kubectl edit deploy r4-infrastructure-kong -n ricplt"

      and in line 68, change the original line into "image: kong/kubernetes-ingress-controller:0.7.0"

      Tbh, I don't know whether this is the appropriate way or not, but it can solve the "ImagePullBackOff" problem.
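
      An equivalent non-interactive edit (a sketch; the controller container name below is an assumption, so check it first):

      kubectl get deploy r4-infrastructure-kong -n ricplt -o jsonpath='{.spec.template.spec.containers[*].name}'
      kubectl set image deploy/r4-infrastructure-kong -n ricplt ingress-controller=kong/kubernetes-ingress-controller:0.7.0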

      1. Anonymous

        Hi,

        Thanks for your reply, it totally works.

      2. Before deploying, you can also make the change below in the file "dep/ric-dep/helm/infrastructure/subcharts/kong/values.yaml" (a sed one-liner that applies the same change is sketched after the diff):

        diff --git a/helm/infrastructure/subcharts/kong/values.yaml b/helm/infrastructure/subcharts/kong/values.yaml
        index 3d64d8c..37cff3f 100644
        --- a/helm/infrastructure/subcharts/kong/values.yaml
        +++ b/helm/infrastructure/subcharts/kong/values.yaml
        @@ -181,7 +181,7 @@ dblessConfig:
         ingressController:
           enabled: true
           image:
        -    repository: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller
        +    repository: kong/kubernetes-ingress-controller
             tag: 0.7.0

        # Specify Kong Ingress Controller configuration via environment variables
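
        A one-liner that applies the same change (a sketch, run from the directory that contains dep/):

        sed -i 's|kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller|kong/kubernetes-ingress-controller|' dep/ric-dep/helm/infrastructure/subcharts/kong/values.yaml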

  30. Anonymous

    Hi experts,

    I tried to deploy Near-RT RIC with D release.

    After  I onboarding the hw xapp, then I ran this command:

    curl --location --request POST "http://127.0.0.1:32088/appmgr/ric/v1/xapps"  --header 'Content-Type: application/json' --data-raw '{"xappName": "hwxapp"}'

    (The port 32080  is occupied by the r4-infrastructure-kong-proxy, so I use port-forwarding.)

    And the output is

    "operation XappDeployXapp has not yet been implemented."

    It cannot create an instance now.

    So it indicates that the new appmgr may still be under construction?




    1. Where did you pull the D release from?
      The master branch?

      1. Anonymous

        I just deployed the near-RT RIC platform with dep/ric-dep/RECIPE/example_recipe_oran_dawn_release.yaml.

        You can use

        git submodule update --init --recursive --remote   to get the updated ric-dep submodule.


  31. Anonymous

    Hello Team!

    When I try to verify the results of Step 4 with the command kubectl get pods -n ricplt, one of the RIC pods has the status ImagePullBackOff. I am installing the Cherry version. Does someone know how to fix it? Thank you all...


    NAME                                                         READY   STATUS             RESTARTS   AGE
    deployment-ricplt-a1mediator-55fdf8b969-v545f                1/1     Running            0          14m
    deployment-ricplt-appmgr-65d4c44c85-4kjk9                    1/1     Running            0          16m
    deployment-ricplt-e2mgr-95b7f7b4-7d4h4                       1/1     Running            0          15m
    deployment-ricplt-e2term-alpha-7dc47d54-c8gg4                0/1     CrashLoopBackOff   7          15m
    deployment-ricplt-jaegeradapter-7f574b5d95-v5gm5             1/1     Running            0          14m
    deployment-ricplt-o1mediator-8b6684b7c-vhlr8                 1/1     Running            0          13m
    deployment-ricplt-rtmgr-6bbdf7685b-sgvwc                     1/1     Running            2          15m
    deployment-ricplt-submgr-7754d5f8bc-kf6kk                    1/1     Running            0          14m
    deployment-ricplt-vespamgr-589bbb7f7b-zt4zm                  1/1     Running            0          14m
    deployment-ricplt-xapp-onboarder-7f6bf9bfb-4lrjz             2/2     Running            0          16m
    r4-infrastructure-kong-86bfd9f7c5-zkblz                      1/2     ImagePullBackOff   0          16m
    r4-infrastructure-kong-bdff668dd-fjr7k                       0/2     Running            1          8s
    r4-infrastructure-prometheus-alertmanager-54c67c94fd-nxn42   2/2     Running            0          16m
    r4-infrastructure-prometheus-server-6f5c784c4c-x54bb         1/1     Running            0          16m
    statefulset-ricplt-dbaas-server-0                            1/1     Running            0          16m



    1. Anonymous

      I just want to know how to solve the problem of the pod whose STATUS is CrashLoopBackOff.

    2. Anonymous

      And what about the case where the pod's status is CrashLoopBackOff?

      Does anyone know how to solve it?

    3. For Kong, you will find the answer above in the comments.

      Additionally, you can use this command:
      KUBE_EDITOR="nano" kubectl edit deploy r4-infrastructure-kong -n ricplt

      "azsx9015223@gmail.com

      Try this

      "kubectl edit deploy r4-infrastructure-kong -n ricplt"

      and in line 68, change the original line into "image: kong/kubernetes-ingress-controller:0.7.0"

      Tbh, I don't know this is the appropriate way or not, but it can solve "ImagePullBackOff" problem."

    4. Hi, I'm having the same issue with e2term-alpha. Did you manage to solve it?

      1. Try to change the image for e2term to the Dawn release version; maybe it will help you.
        D Release Docker Image List

        You can also update other modules with these images.

        1. I already tried to install it with the Dawn recipe examples, which contain these images, but I got the same error.

  32. Anonymous

    Hello Team,

    I am trying to deploy the RIC in its Cherry release version.

    But i am facing some issues:

    Error: template: influxdb/templates/statefulset.yaml:19:11: executing "influxdb/templates/statefulset.yaml" at <include "common.fullname.influxdb" .>: error calling include: template: no template "common.fullname.influxdb" associated with template "gotpl"


    Can anyone help me out? Thanks for the solution


    1. Anonymous

      Hi, I am also facing the same error, when deploying the RIC in Cherry Release. 

      Any solution will help

      1. I was also facing the same problem, so instead of the cherry or bronze release we cloned master:

        git clone http://gerrit.o-ran-sc.org/r/it/dep -b bronze 
        git clone http://gerrit.o-ran-sc.org/r/it/dep -b master


        With the master release, use the recipe file of the cherry release. This error will not appear.

        1. Any update on whether this is fixed in cherry? I am getting the same issue in the master branch now. Any tip on the root cause will help me find a resolution. The problem I am facing is that the persistent volume creation for "ricplt-influxdb-data-ricplt-influxdb-meta-0" never completes and is stuck with status "Pending". I tried resizing the volume, setting storageClass: "", and other similar changes, but never got it up.

          Thanks,

          1. Hello, I also tried to deploy the near-RT RIC with the Cherry version and faced the same issue.

            Finally, I found the instructions in the document, which provided a solution to this issue:

            https://github.com/o-ran-sc/it-dep/blob/master/docs/ric/installation-k8s1node.rst

            Please find the corresponding steps in the section "Onetime setup for Influxdb".


  33. Anonymous

    Hello,

    I am trying to run the following command:

    curl --location --request POST "http://localhost:32088/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"

    I'm always getting the following error:

    AttributeError: 'ConnectionError' object has no attribute 'message'

    Any idea about the solution?


    1. Same error occurred.

      Do someone know how to fix it?

      1. Which release do you use?
        And does every pod operate normally?

        1. Cherry release, and every pod operates normally.

          I'm using a proxy server, so I typed 「curl --noproxy <address>」.

    2. Apparently the repository at https://gerrit.o-ran-sc.org/r/gitweb?p=ric-app/hw.git;a=blob_plain;f=init/config-file.json;hb=HEAD (pointed to in the file ./onboard.hw.url) is taking some time to answer, and that is enough to time out the onboarding manager.
      I manually downloaded the content of the file and hosted it on my own host (like https://dev-mind.blog/tmp/hwxapp.json); then you can assemble the request yourself as:

      curl -v --request POST "http://10.0.2.15:32080/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data '{ "config-file.json_url": "https://www.dev-mind.blog/tmp/hwxapp.json" }'

      PS: I'll leave the file at this address in case someone wants to try.

      1. Sometimes it seems to time out on my server also... maybe a local copy would work best.
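
        One way to keep a local copy (a sketch; the IP, port and file names are assumptions, adjust them to your setup):

        wget -O hwxapp.json "https://gerrit.o-ran-sc.org/r/gitweb?p=ric-app/hw.git;a=blob_plain;f=init/config-file.json;hb=HEAD"
        python3 -m http.server 8000 &
        echo "{ \"config-file.json_url\": \"http://$(hostname -I | awk '{print $1}'):8000/hwxapp.json\" }" > onboard.hw.url
        curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"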

  34. Hi! Can you run any version of the RIC on an Ubuntu 20.04 machine? Thanks!!

  35. Hi all,

    I'm seeing only 15 pods, and one of the pods is in an error state, after Step 4: Deploy RIC using Recipe.

    Could you please let me know what needs to be done to resolve this ?

    kubectl get pods -n ricplt
    NAME READY STATUS RESTARTS AGE
    deployment-ricplt-a1mediator-66fcf76c66-sgm8v 1/1 Running 0 20m
    deployment-ricplt-alarmadapter-64d559f769-vtr2r 1/1 Running 0 18m
    deployment-ricplt-appmgr-6fd6664755-tcqbj 1/1 Running 0 21m
    deployment-ricplt-e2mgr-8479fb5ff8-7d8vt 1/1 Running 0 20m
    deployment-ricplt-e2term-alpha-bcb457df4-j5j68 1/1 Running 0 20m
    deployment-ricplt-jaegeradapter-84558d855b-c47nk 1/1 Running 0 19m
    deployment-ricplt-o1mediator-d8b9fcdf-272v2 1/1 Running 0 18m
    deployment-ricplt-rtmgr-9d4847788-pt8hh 1/1 Running 2 21m
    deployment-ricplt-submgr-65dc9f4995-jdzfb 1/1 Running 0 19m
    deployment-ricplt-vespamgr-7458d9b5d-ww82v 1/1 Running 0 19m
    deployment-ricplt-xapp-onboarder-546b86b5c4-j74sg 2/2 Running 0 21m
    r4-infrastructure-kong-6c7f6db759-78glg 1/2 ImagePullBackOff 0 22m
    r4-infrastructure-prometheus-alertmanager-75dff54776-w8s2s 2/2 Running 0 22m
    r4-infrastructure-prometheus-server-5fd7695-jcxzn 1/1 Running 0 22m
    statefulset-ricplt-dbaas-server-0 1/1 Running 0 21m

    1. For Kong, please find the info above in the comments.

    2. Go here: https://gerrit.o-ran-sc.org/r/c/ric-plt/ric-dep/+/6502/1/helm/infrastructure/subcharts/kong/values.yaml


      You need to change the repository in the values.yaml file that is in /ric-dep/helm/infrastructure/subcharts/kong/

  36. Hi all,
    maybe someone has met this problem or knows the solution; could you help?

    ~/dep/testappc# kubectl get pods --all-namespaces
    ricplt-influxdb-meta-0 0/1 Pending

    kubectl -n ricplt describe pod ricplt-influxdb-meta-0
    Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims

    Onetime setup for Influxdb was done during installation.

    1. Yes.

      # kubectl -n ricplt describe pod ricplt-influxdb-meta-0
      Name: ricplt-influxdb-meta-0
      Namespace: ricplt
      Priority: 0
      Node: <none>
      Labels: app.kubernetes.io/instance=r4-influxdb
      app.kubernetes.io/name=influxdb
      controller-revision-hash=ricplt-influxdb-meta-6fcf7b7df8
      statefulset.kubernetes.io/pod-name=ricplt-influxdb-meta-0
      Annotations: <none>
      Status: Pending
      IP:
      Controlled By: StatefulSet/ricplt-influxdb-meta
      Containers:
      ricplt-influxdb:
      Image: influxdb:2.0.8-alpine
      Ports: 8086/TCP, 8088/TCP
      Host Ports: 0/TCP, 0/TCP
      Liveness: http-get http://:api/ping delay=30s timeout=5s period=10s #success=1 #failure=3
      Readiness: http-get http://:api/ping delay=5s timeout=1s period=10s #success=1 #failure=3
      Environment: <none>
      Mounts:
      /etc/influxdb from config (rw)
      /var/lib/influxdb from ricplt-influxdb-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from service-ricplt-influxdb-http-token-bmgcl (ro)
      Conditions:
      Type Status
      PodScheduled False
      Volumes:
      ricplt-influxdb-data:
      Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
      ClaimName: ricplt-influxdb-data-ricplt-influxdb-meta-0
      ReadOnly: false
      config:
      Type: ConfigMap (a volume populated by a ConfigMap)
      Name: ricplt-influxdb
      Optional: false
      service-ricplt-influxdb-http-token-bmgcl:
      Type: Secret (a volume populated by a Secret)
      SecretName: service-ricplt-influxdb-http-token-bmgcl
      Optional: false
      QoS Class: BestEffort
      Node-Selectors: <none>
      Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
      node.kubernetes.io/unreachable:NoExecute for 300s
      Events:
      Type Reason Age From Message
      ---- ------ ---- ---- -------
      Warning FailedScheduling 89s (x34 over 50m) default-scheduler pod has unbound immediate PersistentVolumeClaims

      #####################################################



      1. Anonymous

        Hi, I am also facing the same problem!

        $ kubectl get pods -n ricplt

        ricplt-influxdb-meta-0 0/1 Pending

        $ kubectl describe pod/ricplt-influxdb-meta-0 -n ricplt

        Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims


        Could anyone help with this please?

        Thanks.

          1. I saw in instruction videos about RIC that this "Pending" status is normal for this pod.

    2. Hello, I also tried to deploy the near-RT RIC with the Cherry version and faced the same issue.

      Finally, I found the instructions in the document, which provided a solution to this issue:

      https://github.com/o-ran-sc/it-dep/blob/master/docs/ric/installation-k8s1node.rst

      Please find the corresponding steps in the section "Onetime setup for Influxdb".

  37. I had some issues with the installation and had to redo it... now I'm seeing an issue with Step 3: Installation of Kubernetes, Helm, Docker, etc.

    It seems kubernetes-helm/helm-v2.12.3-linux-amd64.tar.gz is not available.

    Can we use a different version of Helm?


    + wget https://storage.googleapis.com/kubernetes-helm/helm-v2.12.3-linux-amd64.tar.gz
    --2021-08-10 04:41:15--  https://storage.googleapis.com/kubernetes-helm/helm-v2.12.3-linux-amd64.tar.gz
    Resolving storage.googleapis.com (storage.googleapis.com)... 108.177.111.128, 108.177.121.128, 142.250.103.128, ...
    Connecting to storage.googleapis.com (storage.googleapis.com)|108.177.111.128|:443... connected.
    HTTP request sent, awaiting response... 404 Not Found
    2021-08-10 04:41:16 ERROR 404: Not Found.

    1. It looks like helm v2.12.3 is no longer hosted.
      For anyone with this error, I suggest using v2.17.0, which should solve the problem.
      You can change the version number in "/dep/tools/k8s/bin/k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh", for example as sketched below.
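
      For example (a sketch; the version may be referenced in more than one place, so check with grep first):

      grep -n "2\.12\.3" dep/tools/k8s/bin/k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh
      sed -i 's/2\.12\.3/2.17.0/g' dep/tools/k8s/bin/k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh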

  38. Hi All.

    I ran into some trouble.

    After running Step 4:

    ↓ kubectl get pod --all-namespaces

    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5d4dd4b4db-6dqls 1/1 Running 0 6d23h
    kube-system coredns-5d4dd4b4db-vz5qq 1/1 Running 0 6d23h
    kube-system etcd-20020420-0 1/1 Running 0 6d23h
    kube-system kube-apiserver-20020420-0 1/1 Running 0 6d23h
    kube-system kube-controller-manager-20020420-0 1/1 Running 0 6d23h
    kube-system kube-flannel-ds-k9kt2 1/1 Running 0 6d23h
    kube-system kube-proxy-6zng4 1/1 Running 0 6d23h
    kube-system kube-scheduler-20020420-0 1/1 Running 0 6d23h
    kube-system tiller-deploy-7c54c6988b-56brh 1/1 Running 0 6d23h
    ricinfra deployment-tiller-ricxapp-5d9b4c5dcf-7ksc9 0/1 ImagePullBackOff 0 6d23h
    ricinfra tiller-secret-generator-bfdrk 0/1 Completed 0 6d23h
    ricplt deployment-ricplt-a1mediator-55fdf8b969-bk8nr 1/1 Running 0 6d23h
    ricplt deployment-ricplt-appmgr-65d4c44c85-h64bf 1/1 Running 0 6d23h
    ricplt deployment-ricplt-e2mgr-95b7f7b4-5j9hm 1/1 Running 0 6d23h
    ricplt deployment-ricplt-e2term-alpha-7dc47d54-q9m74 1/1 Running 0 6d23h
    ricplt deployment-ricplt-jaegeradapter-7f574b5d95-7qqlw 1/1 Running 0 6d23h
    ricplt deployment-ricplt-o1mediator-8b6684b7c-gzcll 1/1 Running 0 6d23h
    ricplt deployment-ricplt-rtmgr-6bbdf7685b-wcd4f 0/1 CrashLoopBackOff 1259 6d23h
    ricplt deployment-ricplt-submgr-7754d5f8bc-8gsx6 1/1 Running 0 6d23h
    ricplt deployment-ricplt-vespamgr-589bbb7f7b-dnq98 1/1 Running 0 6d23h
    ricplt deployment-ricplt-xapp-onboarder-7f6bf9bfb-slwdx 2/2 Running 0 6d23h
    ricplt r4-infrastructure-kong-bdff668dd-xp8zb 2/2 Running 2 6d23h
    ricplt r4-infrastructure-prometheus-alertmanager-54c67c94fd-tkq67 2/2 Running 0 6d23h
    ricplt r4-infrastructure-prometheus-server-6f5c784c4c-29jpp 1/1 Running 0 6d23h
    ricplt statefulset-ricplt-dbaas-server-0 1/1 Running 0 6d23h

    and there is NO 「deployment-ricplt-alarmadapter」 or 「ricplt-influxdb-data-ricplt-influxdb-meta-0」.


    Does someone know how to fix it?

    Thank you all.


    1. If you are trying with the Bronze release, shift to the Dawn release and try it:


      git clone http://gerrit.o-ran-sc.org/r/it/dep -b dawn

      1. Hi Kota.

        I tried with Cherry release.

  39. I'm also facing the same issue; in ricinfra, deployment-tiller-ricxapp-5d9b4c5dcf-7ksc9 is 0/1 ImagePullBackOff (age 6d23h).

    I was able to onboard an xApp, but the xApp installation is failing.

    (screenshot attached)

    However, I'm facing an issue while installing the xApp. Any suggestions on what needs to be done?

    (screenshot attached)


    Thanks

    1. If it is on Dawn: for the tiller image, GCR does not host the required version anymore. So I copied the image from an older VM and then it came up properly.

    2. Hi Deena,

      This is caused by the missing image; you should modify the Kong deployment as follows.

      step 1.

      "kubectl edit deploy r4-infrastructure-kong -n ricplt"

      Step 2.

      edit line 68, change it to "image: kong/kubernetes-ingress-controller:0.7.0"

      This may solve your issue.

    3. Hi Deena.

      Finally I fixed it !!

      Step1:
      kubectl describe pod deployment-tiller-ricxapp -n ricinfra
      #Check your image version.

      Image: gcr.io/kubernetes-helm/tiller:v2.12.3
      #Check this line.

      Step2:
      docker images | grep tiller
      ghcr.io/helm/tiller v2.17.0 3f39089e9083 10 months ago 88.1MB
      #If not same version, follow Step3.

      Step3:
      kubectl edit deploy deployment-tiller-ricxapp -n ricinfra
      #Type your podname,namespace

      Step4:
      image: gcr.io/kubernetes-helm/tiller:v2.12.3
      #Edit this line, correct version(my case v2.17.0)

      Step5:
      #After 5minutes,
      kubectl get pods --all-namespaces
      #STATUS has changed ImagePullBackOff to Running

      :)
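
      The same edit can also be done non-interactively (a sketch; the container name "tiller" is an assumption, check it against the describe output from Step 1):

      kubectl set image deploy/deployment-tiller-ricxapp -n ricinfra tiller=ghcr.io/helm/tiller:v2.17.0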


      1. Thanks for your solution !
        So is tiller v2.12.3 out of service?

        1. v2.12.3 wasn't working before it was fixed.

      2. Thanks!!

        But it didn't work for me.

        I followed the steps you suggested... still:

        kubectl get pods -n ricinfra
        NAME READY STATUS RESTARTS AGE
        deployment-tiller-ricxapp-d4f98ff65-c6qzq 0/1 ImagePullBackOff 0 14m
        deployment-tiller-ricxapp-d54c6ddc5-w7cj2 0/1 ImagePullBackOff 0 8m46s
        tiller-secret-generator-rnng9 0/1 Completed 0 14m

        Any further suggestions..

        1. What's your helm version?
          Also check whether the tiller pod in the "kube-system" namespace is running or not.

          1. Hi,

            I am also trying to install the Cherry version and have the same problem, and the changes did not help.

            The helm version is 2.17.0.

            This is the current position of the pods:

            kube-system coredns-5644d7b6d9-76789 1/1 Running 1 135m

            kube-system coredns-5644d7b6d9-rw42w 1/1 Running 1 135m
            kube-system etcd-ip-172-31-35-51 1/1 Running 1 135m
            kube-system kube-apiserver-ip-172-31-35-51 1/1 Running 1 134m
            kube-system kube-controller-manager-ip-172-31-35-51 1/1 Running 1 135m
            kube-system kube-flannel-ds-tvmqj 1/1 Running 1 135m
            kube-system kube-proxy-bpq4l 1/1 Running 1 135m
            kube-system kube-scheduler-ip-172-31-35-51 1/1 Running 1 134m
            kube-system tiller-deploy-7d7bc87bb-rqj98 1/1 Running 1 134m
            ricinfra deployment-tiller-ricxapp-d4f98ff65-htckm 0/1 ImagePullBackOff 0 110m
            ricinfra deployment-tiller-ricxapp-d54c6ddc5-qc9qc 0/1 ImagePullBackOff 0 4m3s
            ricinfra tiller-secret-generator-v2tsq 0/1 Completed 0 110m
            ricplt deployment-ricplt-a1mediator-77959694b7-9z69l 1/1 Running 0 108m
            ricplt deployment-ricplt-appmgr-6bbcfd6bcb-qfvl7 1/1 Running 0 109m
            ricplt deployment-ricplt-e2mgr-7dc9dbb475-pppcr 1/1 Running 0 108m
            ricplt deployment-ricplt-e2term-alpha-c69456686-2whkx 1/1 Running 0 108m
            ricplt deployment-ricplt-jaegeradapter-76ddbf9c9-jxdbp 1/1 Running 0 107m
            ricplt deployment-ricplt-o1mediator-6f97d649cb-qqhb8 1/1 Running 0 107m
            ricplt deployment-ricplt-rtmgr-8546d559c6-7p6xn 0/1 CrashLoopBackOff 16 109m
            ricplt deployment-ricplt-submgr-5c5fb9f65f-sgdx7 1/1 Running 0 108m
            ricplt deployment-ricplt-vespamgr-dd97696fc-lkqd9 1/1 Running 0 107m
            ricplt deployment-ricplt-xapp-onboarder-5958856fc8-gsbk5 2/2 Running 0 109m
            ricplt r4-infrastructure-kong-646b68bd88-bjjnv 2/2 Running 1 8m45s
            ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-4kqnx 2/2 Running 0 110m
            ricplt r4-infrastructure-prometheus-server-5fd7695-rb5lt 1/1 Running 0 110m
            ricplt statefulset-ricplt-dbaas-server-0 1/1 Running 0 109m

            thanks for helping



      3. I just found that it is not simply a matter of changing the version.
        Change the image (in line 53) to "ghcr.io/helm/tiller:v2.17.0".
        It works for me.

  40. Anonymous

    Do you guys have a Slack Channel ?

  41. Anonymous

    I am trying to deploy the RIC. Since I use Helm 3, the above solutions for similar issues are not applicable.

    Note: I have tried with Bronze, Cherry and Amber. The error is the same.

    $ ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml

    Error: no repo named "local" found

    Error: looks like "http://127.0.0.1:8879/charts" is not a valid chart repository or cannot be reached: Get "http://127.0.0.1:8879/charts/index.yaml": dial tcp 127.0.0.1:8879: connect: connection refused

    ****************************************************************************************************************

    ERROR

    ****************************************************************************************************************

    Can't locate the ric-common helm package in the local repo. Please make sure that it is properly installed.

    ****************************************************************************************************************

    Can somebody help me?

    1. What's your output of this command?

      $ helm repo list

  42. hi,

    I am also trying to install the Cherry version and have the same problem, and the changes did not help.

    the helm version is 2.17.0

    This is the current position of the pods:

    kube-system coredns-5644d7b6d9-76789 1/1 Running 1 135m

    kube-system coredns-5644d7b6d9-rw42w 1/1 Running 1 135m
    kube-system etcd-ip-172-31-35-51 1/1 Running 1 135m
    kube-system kube-apiserver-ip-172-31-35-51 1/1 Running 1 134m
    kube-system kube-controller-manager-ip-172-31-35-51 1/1 Running 1 135m
    kube-system kube-flannel-ds-tvmqj 1/1 Running 1 135m
    kube-system kube-proxy-bpq4l 1/1 Running 1 135m
    kube-system kube-scheduler-ip-172-31-35-51 1/1 Running 1 134m
    kube-system tiller-deploy-7d7bc87bb-rqj98 1/1 Running 1 134m
    ricinfra deployment-tiller-ricxapp-d4f98ff65-htckm 0/1 ImagePullBackOff 0 110m
    ricinfra deployment-tiller-ricxapp-d54c6ddc5-qc9qc 0/1 ImagePullBackOff 0 4m3s
    ricinfra tiller-secret-generator-v2tsq 0/1 Completed 0 110m
    ricplt deployment-ricplt-a1mediator-77959694b7-9z69l 1/1 Running 0 108m
    ricplt deployment-ricplt-appmgr-6bbcfd6bcb-qfvl7 1/1 Running 0 109m
    ricplt deployment-ricplt-e2mgr-7dc9dbb475-pppcr 1/1 Running 0 108m
    ricplt deployment-ricplt-e2term-alpha-c69456686-2whkx 1/1 Running 0 108m
    ricplt deployment-ricplt-jaegeradapter-76ddbf9c9-jxdbp 1/1 Running 0 107m
    ricplt deployment-ricplt-o1mediator-6f97d649cb-qqhb8 1/1 Running 0 107m
    ricplt deployment-ricplt-rtmgr-8546d559c6-7p6xn 0/1 CrashLoopBackOff 16 109m
    ricplt deployment-ricplt-submgr-5c5fb9f65f-sgdx7 1/1 Running 0 108m
    ricplt deployment-ricplt-vespamgr-dd97696fc-lkqd9 1/1 Running 0 107m
    ricplt deployment-ricplt-xapp-onboarder-5958856fc8-gsbk5 2/2 Running 0 109m
    ricplt r4-infrastructure-kong-646b68bd88-bjjnv 2/2 Running 1 8m45s
    ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-4kqnx 2/2 Running 0 110m
    ricplt r4-infrastructure-prometheus-server-5fd7695-rb5lt 1/1 Running 0 110m
    ricplt statefulset-ricplt-dbaas-server-0 1/1 Running 0 109m



    kubectl describe on the pod gives these errors:

    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Normal Scheduled <unknown> default-scheduler Successfully assigned ricinfra/deployment-tiller-ricxapp-d54c6ddc5-qc9qc to ip-172-31-35-51
    Normal Pulling 42m (x4 over 44m) kubelet, ip-172-31-35-51 Pulling image "gcr.io/kubernetes-helm/tiller:v2.17.0"
    Warning Failed 42m (x4 over 44m) kubelet, ip-172-31-35-51 Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.17.0": rpc error: code = Unknown desc = Error response from daemon: manifest for gcr.io/kubernetes-helm/tiller:v2.17.0 not found: manifest unknown: Failed to fetch "v2.17.0" from request "/v2/kubernetes-helm/tiller/manifests/v2.17.0".
    Warning Failed 42m (x4 over 44m) kubelet, ip-172-31-35-51 Error: ErrImagePull
    Normal BackOff 14m (x131 over 44m) kubelet, ip-172-31-35-51 Back-off pulling image "gcr.io/kubernetes-helm/tiller:v2.17.0"
    Warning Failed 3m55s (x175 over 44m) kubelet, ip-172-31-35-51 Error: ImagePullBackOff


    I also need to know how to solve the CrashLoopBackOff in rtmgr.

    This is what kubectl describe gives:

    Name: deployment-ricplt-rtmgr-8546d559c6-7p6xn
    Namespace: ricplt
    Priority: 0
    Node: ip-172-31-35-51/172.31.35.51
    Start Time: Wed, 01 Sep 2021 09:47:37 +0000
    Labels: app=ricplt-rtmgr
    pod-template-hash=8546d559c6
    release=r4-rtmgr
    Annotations: <none>
    Status: Running
    IP: 10.244.0.17
    IPs:
    IP: 10.244.0.17
    Controlled By: ReplicaSet/deployment-ricplt-rtmgr-8546d559c6
    Containers:
    container-ricplt-rtmgr:
    Container ID: docker://9e81991d7f9b4ae6525190d84936abe7941426f93f98b813bcc53b18b58d36a7
    Image: nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-rtmgr:0.7.2
    Image ID: docker-pullable://nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-rtmgr@sha256:2c435042fcbced8073774bda0e581695b3a44170da249c91a33b8147bd7e485f
    Ports: 3800/TCP, 4561/TCP, 4560/TCP
    Host Ports: 0/TCP, 0/TCP, 0/TCP
    Command:
    /run_rtmgr.sh
    State: Running
    Started: Wed, 01 Sep 2021 12:26:36 +0000
    Last State: Terminated
    Reason: Completed
    Exit Code: 0
    Started: Wed, 01 Sep 2021 12:18:36 +0000
    Finished: Wed, 01 Sep 2021 12:21:28 +0000
    Ready: True
    Restart Count: 23
    Liveness: http-get http://:8080/ric/v1/health/alive delay=5s timeout=1s period=15s #success=1 #failure=3
    Readiness: http-get http://:8080/ric/v1/health/ready delay=5s timeout=1s period=15s #success=1 #failure=3
    Environment Variables from:
    configmap-ricplt-rtmgr-env ConfigMap Optional: false
    configmap-ricplt-dbaas-appconfig ConfigMap Optional: false
    Environment: <none>
    Mounts:
    /cfg from rtmgrcfg (ro)
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-kr5jr (ro)
    Conditions:
    Type Status
    Initialized True
    Ready True
    ContainersReady True
    PodScheduled True
    Volumes:
    rtmgrcfg:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: configmap-ricplt-rtmgr-rtmgrcfg
    Optional: false
    default-token-kr5jr:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-kr5jr
    Optional: false
    QoS Class: BestEffort
    Node-Selectors: <none>
    Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
    node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Warning BackOff 7m26s (x423 over 156m) kubelet, ip-172-31-35-51 Back-off restarting failed container

    thanks for helping

  43. I did a git clone of master (git clone http://gerrit.o-ran-sc.org/r/it/dep -b master) and deployed using the Cherry recipe file. I was able to onboard the hw-python xApp.

    { "status": "Created"}root@instance-1:~/ric-app-hw-python/init# curl -X GET http://localhost:8080/api/charts | jq . % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8080: Connection refusedroot@instance-1:~/ric-app-hw-python/init# curl -X GET http://localhost:9090/api/charts | jq . % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed100 289 100 289 0 0 57800 0 --:--:-- --:--:-- --:--:-- 57800{ "hw-python": [ { "name": "hw-python", "version": "1.0.0", "description": "Standard xApp Helm Chart", "apiVersion": "v1", "appVersion": "1.0", "urls": [ "charts/hw-python-1.0.0.tgz" ], "created": "2021-09-02T16:06:17.722327489Z", "digest": "3d250253cb65f2e6f6aafaa3bfb64939b0ed177830a144d77408fc0e203518bd" } ]}


    Then I used the command below to deploy the xApp. It failed and the rtmgr crashed.

    curl -v --location --request POST "http://$(hostname):32080/appmgr/ric/v1/xapps" --header 'Content-Type: application/json' --data-raw '{"xappName": "hw-python", "helmVersion": "1.0.0"}'


    NAME READY STATUS RESTARTS AGE
    deployment-ricplt-a1mediator-9fc67867-fz526 1/1 Running 0 57m
    deployment-ricplt-alarmmanager-7685b476c8-7wl6r 1/1 Running 0 56m
    deployment-ricplt-appmgr-6bbcfd6bcb-8tf84 1/1 Running 0 59m
    deployment-ricplt-e2mgr-7dc9dbb475-wxfmx 1/1 Running 0 58m
    deployment-ricplt-e2term-alpha-c69456686-ckcr9 1/1 Running 0 58m
    deployment-ricplt-jaegeradapter-76ddbf9c9-dvkmk 1/1 Running 0 57m
    deployment-ricplt-o1mediator-6f97d649cb-zgwcz 1/1 Running 0 56m
    deployment-ricplt-rtmgr-8546d559c6-q264z 0/1 CrashLoopBackOff 10 58m
    deployment-ricplt-submgr-5c5fb9f65f-ndd9h 1/1 Running 0 57m
    deployment-ricplt-vespamgr-dd97696fc-ns2cc 1/1 Running 0 57m
    deployment-ricplt-xapp-onboarder-5958856fc8-9jn5j 2/2 Running 0 59m
    r4-infrastructure-kong-646b68bd88-gbz7c 2/2 Running 1 59m
    r4-infrastructure-prometheus-alertmanager-75dff54776-pfkls 2/2 Running 0 59m
    r4-infrastructure-prometheus-server-5fd7695-rgg9q 1/1 Running 0 59m
    ricplt-influxdb-meta-0 0/1 Pending 0 56m
    statefulset-ricplt-dbaas-server-0 1/1 Running 0 59m

  44. Hi,

    I am new to ORAN. I followed the steps to install the near RT RIC, but the master Kubernetes node stayed in,

    root@near-rt-ric:/home/ubuntu# kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    near-rt-ric NotReady master 44m v1.16.0
    root@near-rt-ric:/home/ubuntu#


    NotReady state forever, so the installation could not proceed. Could somebody point out anything wrong? I provisioned 4 CPU, 16G memory and 160G disk for the VM as requested.


    Regards,

    Harry
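
    A useful first check when a single-node cluster stays NotReady is whether the kubelet and the CNI (flannel) pods came up; for example:

    kubectl get pods -n kube-system
    kubectl describe node near-rt-ric   # look at the Conditions and Events sections
    journalctl -u kubelet --no-pager | tail -50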




     


  45. Hi Guys,

    I finally got the near-RT RIC fully deployed, with only one container, r4-infrastructure-kong, in ImagePullBackOff state, due to:

    docker pull kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.7.0
    Error response from daemon: error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>403 Forbidden</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>403 Forbidden</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
     

    Could anybody tell me whether there is a fix or workaround for the issue? I am installing the Bronze release following the instructions from this document.

    kubectl describe,

    root@near-rt-ric:/home/ubuntu# kubectl describe pod r4-infrastructure-kong-6c7f6db759-q4c2b -n ricplt
    Name: r4-infrastructure-kong-6c7f6db759-q4c2b
    Namespace: ricplt
    Priority: 0
    Node: near-rt-ric/192.168.166.238
    Start Time: Mon, 13 Sep 2021 19:29:17 +0000
    Labels: app.kubernetes.io/component=app
    app.kubernetes.io/instance=r4-infrastructure
    app.kubernetes.io/managed-by=Tiller
    app.kubernetes.io/name=kong
    app.kubernetes.io/version=1.4
    helm.sh/chart=kong-0.36.6
    pod-template-hash=6c7f6db759
    Annotations: <none>
    Status: Pending
    IP: 10.244.0.11
    IPs:
    IP: 10.244.0.11
    Controlled By: ReplicaSet/r4-infrastructure-kong-6c7f6db759
    Containers:
    ingress-controller:
    Container ID:
    Image: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.7.0
    Image ID:
    Port: <none>
    Host Port: <none>
    Args:
    /kong-ingress-controller
    --publish-service=ricplt/r4-infrastructure-kong-proxy
    --ingress-class=kong
    --election-id=kong-ingress-controller-leader-kong
    --kong-url=https://localhost:8444
    --admin-tls-skip-verify
    State: Waiting
    Reason: ImagePullBackOff
    Ready: False
    Restart Count: 0
    Liveness: http-get http://:10254/healthz delay=5s timeout=5s period=10s #success=1 #failure=3
    Readiness: http-get http://:10254/healthz delay=5s timeout=5s period=10s #success=1 #failure=3
    Environment:
    POD_NAME: r4-infrastructure-kong-6c7f6db759-q4c2b (v1:metadata.name)
    POD_NAMESPACE: ricplt (v1:metadata.namespace)
    Mounts:
    /var/run/secrets/kubernetes.io/serviceaccount from r4-infrastructure-kong-token-7dckz (ro)
    proxy:
    Container ID: docker://ee52bf99859431b57452287eaeba7d5c5a4deaff751ab1c7fbe1273bb9ed9154
    Image: kong:1.4
    Image ID: docker-pullable://kong@sha256:b77490f37557628122ccfcdb8afae54bb081828ca464c980dadf69cafa31ff7c
    Ports: 8444/TCP, 32080/TCP, 32443/TCP, 9542/TCP
    Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
    State: Running
    Started: Mon, 13 Sep 2021 20:33:06 +0000
    Ready: True
    Restart Count: 0
    Liveness: http-get http://:metrics/status delay=5s timeout=5s period=10s #success=1 #failure=3
    Readiness: http-get http://:metrics/status delay=5s timeout=5s period=10s #success=1 #failure=3
    Environment:
    KONG_LUA_PACKAGE_PATH: /opt/?.lua;;
    KONG_ADMIN_LISTEN: 0.0.0.0:8444 ssl
    KONG_PROXY_LISTEN: 0.0.0.0:32080,0.0.0.0:32443 ssl
    KONG_NGINX_DAEMON: off
    KONG_NGINX_HTTP_INCLUDE: /kong/servers.conf
    KONG_PLUGINS: bundled
    KONG_ADMIN_ACCESS_LOG: /dev/stdout
    KONG_ADMIN_ERROR_LOG: /dev/stderr
    KONG_ADMIN_GUI_ACCESS_LOG: /dev/stdout
    KONG_ADMIN_GUI_ERROR_LOG: /dev/stderr
    KONG_DATABASE: off
    KONG_NGINX_WORKER_PROCESSES: 1
    KONG_PORTAL_API_ACCESS_LOG: /dev/stdout
    KONG_PORTAL_API_ERROR_LOG: /dev/stderr
    KONG_PREFIX: /kong_prefix/
    KONG_PROXY_ACCESS_LOG: /dev/stdout
    KONG_PROXY_ERROR_LOG: /dev/stderr
    Mounts:
    /kong from custom-nginx-template-volume (rw)
    /kong_prefix/ from r4-infrastructure-kong-prefix-dir (rw)
    /tmp from r4-infrastructure-kong-tmp (rw)
    /var/run/secrets/kubernetes.io/serviceaccount from r4-infrastructure-kong-token-7dckz (ro)
    Conditions:
    Type Status
    Initialized True
    Ready False
    ContainersReady False
    PodScheduled True
    Volumes:
    r4-infrastructure-kong-prefix-dir:
    Type: EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit: <unset>
    r4-infrastructure-kong-tmp:
    Type: EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit: <unset>
    custom-nginx-template-volume:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: r4-infrastructure-kong-default-custom-server-blocks
    Optional: false
    r4-infrastructure-kong-token-7dckz:
    Type: Secret (a volume populated by a Secret)
    SecretName: r4-infrastructure-kong-token-7dckz
    Optional: false
    QoS Class: BestEffort
    Node-Selectors: <none>
    Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
    node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Warning Failed 10m (x1431 over 5h42m) kubelet, near-rt-ric Error: ImagePullBackOff
    Normal BackOff 31s (x1475 over 5h42m) kubelet, near-rt-ric Back-off pulling image "kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.7.0"
    root@near-rt-ric:/home/ubuntu# docker pull kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.7.0
    Error response from daemon: error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>403 Forbidden</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>403 Forbidden</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
    root@near-rt-ric:/home/ubuntu#


    Regards,

    Harry

    1. The bintray.io repository is deprecated. This commit https://gerrit.o-ran-sc.org/r/c/ric-plt/ric-dep/+/6502 fixes the issue.

      In the meantime you can change helm/infrastructure/subcharts/kong/values.yaml and point the ingress controller image to:

      kong/kubernetes-ingress-controller
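
      For reference, after that change the relevant section of helm/infrastructure/subcharts/kong/values.yaml should look roughly like this (a sketch only; the surrounding field layout is assumed from the kong 0.36.x chart):

      ingressController:
        enabled: true
        image:
          repository: kong/kubernetes-ingress-controller
          tag: 0.7.0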

      1. Hi Federico,

        That fixed the problem. Thank you very much.

        May I ask whether I can follow the same procedure to deploy the Cherry release too?


        Regards,

        Harry


        1. Yes same procedure, the ingress-controller is the same version in Cherry.

  46. Anonymous

    Hello everyone, I want to deploy the Dawn release. When I use git clone http://gerrit.o-ran-sc.org/r/it/dep -b dawn, I get:

    fatal: Remote branch dawn not found in upstream origin

    Please help me: how can I get the Dawn release?

    1. No need to specify the branch; you control which release is deployed using the RECIPES.

      dep/ric-dep/RECIPE_EXAMPLE

      $ ls
      example_recipe_oran_cherry_release.yaml example_recipe_oran_dawn_release.yaml example_recipe.yaml


      You use the branch when you want to download specific components and want to build the container images yourself.
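
      For example, to deploy the Dawn release from a master checkout you would point the deploy script at the Dawn recipe; roughly (the relative path to RECIPE_EXAMPLE may differ slightly between checkouts):

      cd dep/bin
      ./deploy-ric-platform -f ../ric-dep/RECIPE_EXAMPLE/example_recipe_oran_dawn_release.yaml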

        1. Federico Rossi So which branch should I clone: -b master, -b cherry, or no branch at all? I only want to deploy the Dawn release. I have seen example_recipe_oran_dawn_release.yaml. After installation, is there any command to check whether the installed release is Dawn or not?

        1. Do not specify any -branch name. I am not aware of any command that would tell you the release you are running.

          You just need to make sure the image versions match the "Dawn" release: Near-RT RIC (D release)
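
          One way to check what is actually running is to list the image tags of the ricplt pods and compare them against the release notes, e.g.:

          kubectl get pods -n ricplt -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'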

  47. Anonymous

    Hello everyone, is there any command to find out which release is currently deployed on my machine, i.e. whether it is Cherry or Dawn? I did a git clone of the master branch.

  48. Hello everyone. I installed the RIC a few months ago and did not receive any errors (16 pods were there). I started everything up again now and I am having problems: some of the pods show the status "Evicted". I have searched for it here but it seems nobody has faced this issue before. Do you have any idea?


    Thank you.

    1. Please post the output of kubectl get pods, and for the "Evicted" pods also a kubectl describe pod PODNAME.
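
      Evicted pods are usually the result of node resource pressure (disk or memory). One way to list them — and to clean them up once the cause is addressed — is via a field selector; a sketch, assuming the affected pods are in the ricplt namespace:

      kubectl get pods -n ricplt --field-selector=status.phase=Failed
      kubectl delete pods -n ricplt --field-selector=status.phase=Failed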

      1. Hi!


        Trying to deploy the near-RT RIC, the e2term-alpha pod goes into CrashLoopBackOff; this is the log of the e2term-alpha pod:


        environments service name is SERVICE_RICPLT_E2TERM_RMR_ALPHA_SERVICE_HOST
        service ip is 10.96.90.31
        nano=38000
        loglevel=debug
        volume=log
        #the key name of the environment holds the local ip address
        #ip address of the E2T in the RMR
        local-ip=10.96.90.31
        #prometheus mode can be pull or push
        prometheusMode=pull
        #timeout can be from 5 seconds to 300 seconds default is 10
        prometheusPushTimeOut=10
        prometheusPushAddr=127.0.0.1:7676
        prometheusPort=8088
        #trace is start, stop
        trace=start
        external-fqdn=10.96.90.31
        #put pointer to the key that point to pod name
        pod_name=E2TERM_POD_NAME
        sctp-port=36422
        ./startup.sh: line 16:    23 Illegal instruction     (core dumped) ./e2 -p config -f config.conf


        I'm using these versions: 

        Docker 19.03.6 (i've tried with version 20 also)

        Helm: 2.17

        kubernetes: 1.16


        Any idea why I get this error?



        1. Which release are you installing? 

          What's the e2term image version? 

          kubectl get pod deployment-ricplt-e2term-alpha-9c87d678-5s9bt -n ricplt -o jsonpath='{.spec.containers[0].image}'


          Change the pod name to match yours.

          1. Anonymous

            Hi, this is the version:


            nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2:5.4.9


            1. Interesting. You are running the Dawn image and e2 is segfaulting.. I suspect something related to the runtime environment. I am running the same image without issues. Can you please try with a newer kubernetes version? 

              In my case I am running Dawn on OpenShift and kubernetes version is 1.21



              1. Version 1.21 is not supported in the k8s-1node-cloud-init... but I tried v1.18 and I get the same error. Is there any way to debug this error? 

                Thanks for the answers.

                1. Finally managed to solve this! It was indeed a problem with my environment... 


                  To anyone using Proxmox and getting CrashLoopBackOff with e2term-alpha, you have to add the following line to the VM configuration file:


                  args: -cpu host



                  You can find the config files on the Proxmox server, in /etc/pve/qemu-server.
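
                  The same setting can also be applied from the Proxmox host CLI instead of editing the file by hand; a sketch, assuming the VM id is 100:

                  qm set 100 --args '-cpu host'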

                  1. Nicely done! thank you for sharing.

  49. When installing the "Dawn" release, if authentication and DefaultUser are enabled for InfluxDB, the O-RAN SC near-RT RIC's InfluxDB fails to come up with the following error: "Error: release r4-influxdb failed: the server could not find the requested resource"


    But the same helm changes work on the vanilla influxdb helm charts provided by InfluxData at https://github.com/influxdata/helm-charts/tree/master/charts/influxdb.

    Kindly help in resolving this.

    Diff of InfluxDB helm changes
    diff --git a/helm/3rdparty/influxdb/values.yaml b/helm/3rdparty/influxdb/values.yaml
    index 2b494a4..206e3a7 100644
    --- a/helm/3rdparty/influxdb/values.yaml
    +++ b/helm/3rdparty/influxdb/values.yaml
    @@ -55,7 +55,7 @@ service:
     ## Persist data to a persistent volume
     ##
     persistence:
    -  enabled: true
    +  enabled: false
       ## A manually managed Persistent Volume and Claim
       ## Requires persistence.enabled: true
       ## If defined, PVC must be created manually before volume will be bound
    @@ -102,7 +102,7 @@ enterprise:
     ## Defaults indicated below
     ##
     setDefaultUser:
    -  enabled: false
    +  enabled: true
    
       ## Image of the container used for job
       ## Default: appropriate/curl:latest
    @@ -239,7 +239,9 @@ config:
       retention: {}
       shard_precreation: {}
       monitor: {}
    -  http: {}
    +  http:
    +    flux-enabled: true
    +    auth-enabled: true
       logging: {}
       subscriber: {}
       graphite: {}
    1. Please enable debug and provide the ricplt-influxdb-meta-0 pod logs. In values.yaml, in the config section:


      values.yaml
      config:
        logging:
          level: "debug"
      
      1. Hi Federico Rossi,
        Thanks for your reply. I tried the above as well.

        But, the issue is that helm deployment itself fails the moment defaultuser authentication is enabled in the influxdb values.yaml. So, no influxdb pod logs are available. Helm deployment just gives the error "Error: release r4-influxdb failed: the server could not find the requested resource"

        root@ubuntu18-2:~/code/oransc/tmp/dep/bin# helm ls --failed
        NAME            REVISION        UPDATED                         STATUS  CHART           APP VERSION     NAMESPACE
        r4-influxdb     1               Wed Sep 22 10:14:19 2021        FAILED  influxdb-4.9.14 1.8.4           ricplt


        I suspect the script "templates/post-install-set-auth.yaml" fails to execute when both authentication and defaultuser are enabled.

        values.yaml
         setDefaultUser:
        -  enabled: false
        +  enabled: true

        On the other hand, when authentication is enabled with defaultuser disabled, the influxdb comes up properly. But, it is of no use without the defaultuser credentials.

        Are you aware of any workaround/fix for this? 

        Regards,
        Ganesh Jaju

        1. From the ric-dep directory uninstall the chart:

          # ls
          3rdparty alarmmanager dbaas e2term infrastructure o1mediator rsm submgr xapp-onboarder
          a1mediator appmgr e2mgr influxdb jaegeradapter redis-cluster rtmgr vespam

          helm uninstall r4-influxdb 


          Install again add debug:


          helm --debug install r4-influxdb 3rdparty/influxdb/


          Will hopefully show you a little bit more details about the error. Let me know.


          1. What fails is this curl in the job on post-install-set-auth.yaml as you said:

            curl -X POST http://{{ include "common.fullname.influxdb" . }}:{{ include "common.serviceport.influxdb.http.bind_address" . | default 8086 }}/query \
            --data-urlencode \
            "q=CREATE USER \"${INFLUXDB_USER}\" WITH PASSWORD '${INFLUXDB_PASSWORD}' {{ .Values.setDefaultUser.user.privileges }}"


            Most likely InfluxDB is not fully up when the job runs; we can tune the backoffLimit and deadline.

            By default the deadline is configured to 300 seconds and the backoffLimit to 6, so that should be plenty of time; the config is in values.yaml.

            Just for testing, you can also try installing InfluxDB without auth; once InfluxDB is up, exec into the container and do a manual curl to create the user with the password, to see if it works.

            1. Hi Federico Rossi,
              I tried both of the suggestions from your end.


              1) When default user is disabled but authentication enabled, the influxdb comes up & we can manually curl to influxdb service ip to create the default admin user.

              After this the influxdb can be accessed and DBs created/deleted, etc. But, like you mentioned this is a temporary check/fix.

              # kubectl get services -A -o wide| grep influx
              default       ricplt-influxdb                             ClusterIP   10.96.42.99      <none>        8086/TCP,8088/TCP                 96s     app.kubernetes.io/instance=right-umbrellabird,app.kubernetes.io/name=influxdb
              
              # curl -XPOST http://10.96.42.99:8086/query --data-urlencode "q=create user admin with password 'admin' with all privileges"
              {"results":[{"statement_id":0}]}


              2) I tried playing with the activeDeadline and backoffLimit variables in values.yaml for setDefaultUser, but none of it seems to work.
              Somehow the pod that creates the secret and the default user is failing.

              For comparison with the vanilla helm charts from the InfluxData team at https://github.com/influxdata/helm-charts/tree/master/charts/influxdb: if I use them, the following additional items are created and executed. The crashing pod keeps backing off until it succeeds in setting up authentication and then exits successfully.

              Additional things in vanilla influxdb helm charts
              #kubectl get secret -A | grep influx
              default           exciting-kitten-influxdb-auth                           Opaque                                2      6s
              default           exciting-kitten-influxdb-token-6lmvj                    kubernetes.io/service-account-token   3      6s
              
              #kubectl get pods -A | grep -i influx
              NAMESPACE     NAME                                                         READY   STATUS             RESTARTS   AGE
              default       exciting-kitten-influxdb-0                                   0/1     Running            0          16s
              default       exciting-kitten-influxdb-set-auth-wjqbc                      0/1     CrashLoopBackOff   1          16s


              3) By default, the Dawn release installs Helm version 2.17.0, so I am using the same. I can't see much in the helm install output even with the debug option.

              helm debug install
              root@ubuntu18-2:~/code/oransc/tmp/dep/ric-dep/helm/3rdparty/influxdb# helm --debug install .
              [debug] Created tunnel using local port: '37099'
              
              [debug] SERVER: "127.0.0.1:37099"
              
              [debug] Original chart version: ""
              [debug] CHART PATH: /root/code/oransc/tmp/dep/ric-dep/helm/3rdparty/influxdb
              
              Error: release dining-marmot failed: the server could not find the requested resource


              Any further inputs/help is appreciated. 

              Regards,
              Ganesh Jaju

                1. I checked it. My error when activating the auth is different from yours; not sure it is related, but here are my findings. Once you set setDefaultUser to true, it loads post-install-set-auth.yaml and secret.yaml.

                If we run:

                 helm template influx . -s templates/post-install-set-auth.yaml 

                and

                 helm template influx . -s templates/secret.yaml


                You'll see that the parsed template has the apiVersion field on the same line as the comment. The reason is that the template uses '{{- ... -}}'; the '-' causes Helm to strip all surrounding whitespace up to the next non-whitespace character, so the rendered manifest is no longer valid. The fix is to change '{{-' to '{{'.


                post-install-set-auth.yaml

                From:

                {{- if .Values.setDefaultUser.enabled -}}
                apiVersion: batch/v1
                kind: Job

                To:

                {{ if .Values.setDefaultUser.enabled -}}
                apiVersion: batch/v1
                kind: Job


                secrets.yaml

                From:

                {{- if .Values.setDefaultUser.enabled -}}
                {{- if not (.Values.setDefaultUser.user.existingSecret) -}}
                apiVersion: v1
                kind: Secret
                
                

                To:

                {{ if .Values.setDefaultUser.enabled -}}
                {{- if not (.Values.setDefaultUser.user.existingSecret) -}}
                apiVersion: v1
                kind: Secret


                Try it and let me know if it works for you.

                1. Thanks a lot Federico Rossi. The fix worked. This should be pushed to Dawn branch too if possible.

  50. I compiled the new info and fixes that might help someone with the process. If you follow the steps today (22-9-2021), some parts of the repository do not work because some of the data is outdated (for instance, links to images and versions on googleapis).

    This sequence worked for me:

    (Be advised that some Docker images take a while to download. Hence, some pods might crash before everything is ready. This is expected; once all the pods are properly initialised, everything will be up and running.)

    1. Make sure you have Ubuntu 18.04
    2.  follow steps 1 and 2 in the article
    3.  Before step 3:
      1. Edit the file tools/k8s/bin/k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh
      2.  change helm version from 2.12.3  to 2.17.0 
      3. change location of the helm image from
        https://storage.googleapis.com/kubernetes-helm/helm-v${HELMVERSION}-linux-amd64.tar.gz
        to 
        https://get.helm.sh/helm-v${HELMVERSION}-linux-amd64.tar.gz


    4. Edit the files ric-dep/helm/infrastructure/subcharts/kong/values.yaml and ric-aux/helm/infrastructure/subcharts/kong/values.yaml

      1. replace

        kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller

        by

        kong/kubernetes-ingress-controller 


    5. Edit the files ric-dep/helm/infrastructure/values.yaml and ric-dep/helm/appmgr/values.yaml
      1. replace lines 

        registry: gcr.io
        name: kubernetes-helm/tiller
        tag: v2.12.3

        by

        registry: ghcr.io
        name: helm/tiller
        tag: v2.17.0


    6. continue normally with step 3 and beyond

    Also, in the steps involving curl, you might need to use your local IP address instead of $(hostname) to make sure your local machine answers without messing up the ports of the services.
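
    For convenience, the edits in steps 3c, 4 and 5 above can also be applied with sed from the dep/ checkout. This is only a sketch — verify the paths and strings in your tree first, and note that the Helm version bump in step 3b still has to be made by hand:

    sed -i 's|https://storage.googleapis.com/kubernetes-helm|https://get.helm.sh|' tools/k8s/bin/k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh
    sed -i 's|kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller|kong/kubernetes-ingress-controller|' ric-dep/helm/infrastructure/subcharts/kong/values.yaml ric-aux/helm/infrastructure/subcharts/kong/values.yaml
    sed -i -e 's|registry: gcr.io|registry: ghcr.io|' -e 's|name: kubernetes-helm/tiller|name: helm/tiller|' -e 's|tag: v2.12.3|tag: v2.17.0|' ric-dep/helm/infrastructure/values.yaml ric-dep/helm/appmgr/values.yaml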


    Hope it helps

  51. I tried deploying the master branch with the Dawn recipe file, but the influxdb pod is in Pending state. Can someone help me fix this?


    NAME READY STATUS RESTARTS AGE
    deployment-ricplt-a1mediator-b4576889d-dqs2b 1/1 Running 7 7d6h
    deployment-ricplt-alarmmanager-f59846448-76tsl 1/1 Running 4 7d6h
    deployment-ricplt-appmgr-7cfbff4f7d-8gkmh 1/1 Running 4 7d6h
    deployment-ricplt-e2mgr-67f97459cd-4xjdz 1/1 Running 3 2d2h
    deployment-ricplt-e2term-alpha-849df794c-j6nhf 1/1 Running 3 2d2h
    deployment-ricplt-jaegeradapter-76ddbf9c9-r464v 1/1 Running 4 7d6h
    deployment-ricplt-o1mediator-f7dd5fcc8-dt9kq 1/1 Running 4 7d6h
    deployment-ricplt-rtmgr-7455599d58-np94f 1/1 Running 7 7d6h
    deployment-ricplt-submgr-6cd6775cd6-x8z74 1/1 Running 4 7d6h
    deployment-ricplt-vespamgr-757b6cc5dc-4vtzn 1/1 Running 4 7d6h
    deployment-ricplt-xapp-onboarder-5958856fc8-p8bjl 2/2 Running 8 7d6h
    r4-infrastructure-kong-7995f4679b-n65qm 2/2 Running 11 7d5h
    r4-infrastructure-prometheus-alertmanager-5798b78f48-xks4r 2/2 Running 8 7d6h
    r4-infrastructure-prometheus-server-c8ddcfdf5-55tf8 1/1 Running 4 7d6h
    ricplt-influxdb-meta-0 0/1 Pending 0 7d6h
    statefulset-ricplt-dbaas-server-0 1/1 Running 4 7d6h

    1. 柯俊先


      Hello, I also tried to deploy the near-RT RIC with the Cherry version and faced the same issue.

      Finally, I found the instructions in the document below, which provide a solution to this issue:

      https://github.com/o-ran-sc/it-dep/blob/master/docs/ric/installation-k8s1node.rst

      Please find the corresponding steps in the section "Onetime setup for Influxdb".
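
      The Pending state usually means the InfluxDB PersistentVolumeClaim has no storage class or PersistentVolume to bind to, which is what that one-time setup addresses. A quick way to confirm before following those steps:

      kubectl get pvc -n ricplt
      kubectl describe pod ricplt-influxdb-meta-0 -n ricplt | tail -20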

  52. Hello!

    I'm trying to run the near-RT RIC on an ARM-based instance. All the images were built from source on an ARM machine.
    After modifying the helm charts, I was able to deploy the components to a local cluster:

    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5644d7b6d9-46w7s 1/1 Running 4 2d21h
    kube-system coredns-5644d7b6d9-g4qmk 1/1 Running 4 2d21h
    kube-system etcd-ip-10-0-0-113 1/1 Running 4 2d21h
    kube-system kube-apiserver-ip-10-0-0-113 1/1 Running 4 2d21h
    kube-system kube-controller-manager-ip-10-0-0-113 1/1 Running 5 2d21h
    kube-system kube-flannel-ds-4v86b 1/1 Running 4 2d21h
    kube-system kube-proxy-5mlzb 1/1 Running 4 2d21h
    kube-system kube-scheduler-ip-10-0-0-113 1/1 Running 4 2d21h
    kube-system tiller-deploy-7d5568dd96-zshl6 1/1 Running 4 2d21h
    ricinfra deployment-tiller-ricxapp-6b6b4c787-qnvbd 1/1 Running 0 20h
    ricinfra tiller-secret-generator-ls4lp 0/1 Completed 0 20h
    ricplt deployment-ricplt-a1mediator-6555b9886b-r2mll 1/1 Running 0 20h
    ricplt deployment-ricplt-alarmmanager-59cc7d4cb-kmkp4 1/1 Running 0 20h
    ricplt deployment-ricplt-appmgr-5fbcf5c7f7-ndzwm 1/1 Running 0 20h
    ricplt deployment-ricplt-e2mgr-5d9b49f784-qj2p9 1/1 Running 0 20h
    ricplt deployment-ricplt-e2term-alpha-645778bdc8-cj79n 0/1 CrashLoopBackOff 240 20h
    ricplt deployment-ricplt-jaegeradapter-c68c6b554-mlf7j 1/1 Running 0 20h
    ricplt deployment-ricplt-o1mediator-86587dd94f-7mjr8 1/1 Running 0 20h
    ricplt deployment-ricplt-rtmgr-67c9bdccf6-9d7v5 1/1 Running 3 20h
    ricplt deployment-ricplt-submgr-6ffd499fd5-wktgv 1/1 Running 0 20h
    ricplt deployment-ricplt-vespamgr-68b68b78db-9tpm8 1/1 Running 0 20h
    ricplt deployment-ricplt-xapp-onboarder-7c5f47bc8-fl9sf 2/2 Running 0 20h
    ricplt r4-infrastructure-kong-84cd44455-l9tmc 2/2 Running 2 20h
    ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-2th9v 2/2 Running 0 20h
    ricplt r4-infrastructure-prometheus-server-5fd7695-2vjrk 1/1 Running 0 20h
    ricplt ricplt-influxdb-meta-0 0/1 Pending 0 20h
    ricplt statefulset-ricplt-dbaas-server-0 1/1 Running 0 20h


    I'm struggling to get the e2term pod working; it fails with the following error (from kubectl logs):

    environments service name is SERVICE_RICPLT_E2TERM_RMR_ALPHA_SERVICE_HOST
    service ip is 10.x.x.248
    nano=38000
    loglevel=error
    volume=log
    #The key name of the environment holds the local ip address
    #ip address of the E2T in the RMR
    local-ip=10.x.x.248
    #prometheus mode can be pull or push
    prometheusMode=pull
    #timeout can be from 5 seconds to 300 seconds default is 10
    prometheusPushTimeOut=10
    prometheusPushAddr=127.0.0.1:7676
    prometheusPort=8088
    #trace is start, stop
    trace=stop
    external-fqdn=10.102.98.248
    #put pointer to the key that point to pod name
    pod_name=E2TERM_POD_NAME
    sctp-port=36422
    ./startup.sh: line 16: 24 Segmentation fault (core dumped) ./e2 -p config -f config.conf

    The 10.x.x.248 IP is the same as for service-ricplt-e2term-rmr-alpha, but the pod keeps crashing.

    Has anyone faced a similar issue?

    Thanks.

    1. What's your runtime environment? Are you running it on VMs? If yes which platform?

      If you scroll up in the comments, another user had a similar issue with a core dump on E2, specifically when using Proxmox virtualization; the fix there was to set the CPU type to 'host'. This might be a similar issue.

      Check that first; we will take it from there for further troubleshooting.

      1. It's an AWS arm-based instance with ubuntu 18.04

        Helm 2.17
        Kube 1.16
        Docker 20.10

          A quick observation: do you have the SCTP kernel module loaded?

            Do you mean libsctp-dev?
            If so, it wasn't installed; I just added it.

              Not just that, I mean the actual kernel module:

              lsmod | grep sctp
              xt_sctp 16384 5
              sctp 405504 304
              libcrc32c 16384 7 nf_conntrack,nf_nat,openvswitch,nf_tables,xfs,libceph,sctp
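
              If it is not loaded, it can usually be loaded (and made persistent across reboots) with:

              sudo modprobe sctp
              echo sctp | sudo tee /etc/modules-load.d/sctp.conf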

              1. Enabled the modules and redeployed the platform. Didn't help.

                lsmod | grep sctp
                sctp 385024 40
                libcrc32c 16384 6 nf_conntrack,nf_nat,btrfs,raid456,ip_vs,sctp

  53. Hi All.

    I entered the following:


    # kubectl logs --tail=10 -n ricxapp ricxapp-hwxapp-6d49b695fb-pntbd

    1632279673 8/RMR [INFO] sends: ts=1632279673 src=service-ricxapp-hwxapp-rmr.ricxapp:4560 target=service-ricplt-a1mediator-rmr.ricplt:4562 open=1 succ=1 fail=0 (hard=0 soft=0)
    1632279673 8/RMR [INFO] sends: ts=1632279673 src=service-ricxapp-hwxapp-rmr.ricxapp:4560 target=service-ricplt-submgr-rmr.ricplt:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1632279673 8/RMR [INFO] sends: ts=1632279673 src=service-ricxapp-hwxapp-rmr.ricxapp:4560 target=service-ricplt-e2term-rmr-alpha.ricplt:38000 open=0 succ=0 fail=0 (hard=0 soft=0)
    1632279673 8/RMR [INFO] sends: ts=1632279673 src=service-ricxapp-hwxapp-rmr.ricxapp:4560 target=service-ricplt-e2mgr-rmr.ricplt:3801 open=0 succ=0 fail=0 (hard=0 soft=0)


    What does this mean → open=1 succ=1 fail=0 (hard=0 soft=0)

    I searched on this site but couldn't find the answer.


    Thank you!
