Please check Introduction and guides → Installing the near-RT RIC

# Kubectl commands:

$ kubectl get pods -n namespace - lists the Pods running in the namespace

$ kubectl logs -n namespace name_of_running_pod - prints the logs of the named Pod

kubectl get pods -n ricxapp

#  View logs...

$ kubectl logs -n ricxapp <name of POD retrieved from statement above>
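
For example, a quick way to combine the two steps above (a sketch; "hwxapp" is only an illustration, substitute whatever xApp you deployed):

$ POD=$(kubectl get pods -n ricxapp -o name | grep hwxapp | head -1)
$ kubectl logs -n ricxapp $POD
$ kubectl describe -n ricxapp $POD   # shows events, useful while a pod is still ContainerCreating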


# Comments

  1. Thanks for the guidelines. It works well.

  2. Hi experts,

    After deployment I have only 15 pods in the ricplt namespace. Is that normal?

    The "Step 4" says: "kubectl get pods -n ricplt   # There should be ~16 pods running in the ricplt namespace.".

    (18:11 dabs@ricpltbronze bin) > sudo kubectl get pods --all-namespaces
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5644d7b6d9-clvtf 1/1 Running 5 32h
    kube-system coredns-5644d7b6d9-jwxnm 1/1 Running 5 32h
    kube-system etcd-ricpltbronze 1/1 Running 11 32h
    kube-system kube-apiserver-ricpltbronze 1/1 Running 28 32h
    kube-system kube-controller-manager-ricpltbronze 1/1 Running 9 32h
    kube-system kube-flannel-ds-amd64-mrwn2 1/1 Running 16 32h
    kube-system kube-proxy-zrtl8 1/1 Running 6 32h
    kube-system kube-scheduler-ricpltbronze 1/1 Running 8 32h
    kube-system tiller-deploy-68bf6dff8f-wbmwl 1/1 Running 4 32h
    ricinfra deployment-tiller-ricxapp-d4f98ff65-6h4n4 1/1 Running 0 3h13m
    ricinfra tiller-secret-generator-tgkzf 0/1 Completed 0 132m
    ricinfra tiller-secret-generator-zcx72 0/1 Error 0 3h13m
    ricplt deployment-ricplt-a1mediator-66fcf76c66-h6rp2 1/1 Running 0 40m
    ricplt deployment-ricplt-alarmadapter-64d559f769-glb5z 1/1 Running 0 30m
    ricplt deployment-ricplt-appmgr-6fd6664755-2mxjb 1/1 Running 0 42m
    ricplt deployment-ricplt-e2mgr-8479fb5ff8-9zqbp 1/1 Running 0 41m
    ricplt deployment-ricplt-e2term-alpha-bcb457df4-4dz62 1/1 Running 0 40m
    ricplt deployment-ricplt-jaegeradapter-84558d855b-tmqqb 1/1 Running 0 39m
    ricplt deployment-ricplt-o1mediator-d8b9fcdf-f4sgm 1/1 Running 0 34m
    ricplt deployment-ricplt-rtmgr-9d4847788-kf6r4 1/1 Running 10 41m
    ricplt deployment-ricplt-submgr-65dc9f4995-gt5kb 1/1 Running 0 40m
    ricplt deployment-ricplt-vespamgr-7458d9b5d-klh9l 1/1 Running 0 39m
    ricplt deployment-ricplt-xapp-onboarder-546b86b5c4-xkcpt 2/2 Running 0 42m
    ricplt r4-infrastructure-kong-6c7f6db759-7xjqm 2/2 Running 21 3h13m
    ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-jfkdg 2/2 Running 2 3h13m
    ricplt r4-infrastructure-prometheus-server-5fd7695-pprg2 1/1 Running 2 3h13m
    ricplt statefulset-ricplt-dbaas-server-0 1/1 Running 0 43m

    1. Anonymous

      After deployment I'm also getting only 15 pods running. Is it normal or do I need to worry about that?

      1. It's normal, and you are good to go. Just try policy management.
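
        For a quick sanity check before trying policy management, you can query the A1 mediator health endpoint through the Kong proxy (a sketch; the /a1mediator route and port 32080 are the Bronze defaults and may differ in your recipe):

        $ curl -v http://$(hostname):32080/a1mediator/a1-p/healthcheck
        # an HTTP 200 response means A1 and its SDL/Redis backend are reachable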

  3. Hi,

    Even though deployment of RIC plt did not give any errors, most of the RIC plt and infra pods remain in error state. 

    kubectl get pods --all-namespaces
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5644d7b6d9-dz774 1/1 Running 1 10h
    kube-system coredns-5644d7b6d9-fp586 1/1 Running 1 10h
    kube-system etcd-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system kube-apiserver-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system kube-controller-manager-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system kube-flannel-ds-amd64-b4l97 1/1 Running 1 10h
    kube-system kube-proxy-fxfrk 1/1 Running 1 10h
    kube-system kube-scheduler-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system tiller-deploy-68bf6dff8f-jvtk7 1/1 Running 1 10h
    ricinfra deployment-tiller-ricxapp-d4f98ff65-hcnqf 0/1 ContainerCreating 0 9h
    ricinfra tiller-secret-generator-d7kmk 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-a1mediator-66fcf76c66-l2z7m 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-alarmadapter-64d559f769-7d8fq 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-appmgr-6fd6664755-7lp8q 0/1 Init:ImagePullBackOff 0 9h
    ricplt deployment-ricplt-e2mgr-8479fb5ff8-ggnx8 0/1 ErrImagePull 0 9h
    ricplt deployment-ricplt-e2term-alpha-bcb457df4-dkdbc 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-jaegeradapter-84558d855b-bpzcv 1/1 Running 0 9h
    ricplt deployment-ricplt-o1mediator-d8b9fcdf-5ptcs 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-rtmgr-9d4847788-rvnrx 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-submgr-65dc9f4995-cbhvc 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-vespamgr-7458d9b5d-bkzpg 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-xapp-onboarder-546b86b5c4-g2dnt 1/2 ImagePullBackOff 0 9h
    ricplt r4-infrastructure-kong-6c7f6db759-4czbj 2/2 Running 2 9h
    ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-gwxzz 2/2 Running 0 9h
    ricplt r4-infrastructure-prometheus-server-5fd7695-phcqs 1/1 Running 0 9h
    ricplt statefulset-ricplt-dbaas-server-0 0/1 ImagePullBackOff 0 9h


    any clue?

    1. Anonymous

      The ImagePullBackOff and ErrImagePull errors are all for container images built from O-RAN SC code.  It appears that there is a problem with the docker engine in your installation fetching images from the O-RAN SC docker registry.  Oftentimes this is due to a local firewall blocking such connections.

      You may want to try:  docker pull nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1:2.1.9 to see if your docker engine can retrieve this image. 

      1. Thanks for the information.

        I think you are right; even docker pull nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1:2.1.9 gives a "connection refused" error.

        Although I am checking whether the corporate firewall can be disabled, the strange thing is that docker pull hello-world and other such pulls work fine.

        1. user-d3360

          "Connection refused" error does suggest network connection problem.

          These O-RAN SC docker registries use port 10001 ~ 10004, in particular 10002 for all released docker images.  They are not the default docker registry of 5000.  It is possible that your local firewall has a rule allowing outgoing connections into port 5000, but not these ports.
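
          A quick way to check from the RIC host whether those ports are reachable (a sketch; any HTTP status back from the registry, e.g. 200 or 401, means the connection itself is allowed):

          $ curl -sS -o /dev/null -w '%{http_code}\n' https://nexus3.o-ran-sc.org:10002/v2/
          $ docker pull nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1:2.1.9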

          1. Anonymous

            Thanks for the explanation. You are right, these ports were blocked by the firewall; it is working fine now.
            Currently I am stuck opening the O1 dashboard, trying it on the SMO machine.

          2. Thanks for the explanation. You are right, these particular ports were being blocked by the firewall; I got it working now.

            Currently I am stuck opening the O1 dashboard, trying it on the SMO machine.

            1. Hi Team,


              Can you please share a link so that I can download the docker image manually and then load it? I am also getting "connection refused" and have not been able to solve it. Meanwhile, please share the link; it would help me.


              Br

              Deepak

  4. Anonymous

    Hi all,

    After executing the POST command below, I am not getting any response.

    # Start on-boarding process...

    curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download"  --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"


    any clues/suggestions?

    1. It's probably because port 32080 is already occupied by kube-proxy.

      You can either use the pod's IP address directly, or use port-forwarding as a workaround; please refer to my post: https://blog.csdn.net/jeffyko/article/details/107426626
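
      For reference, a minimal sketch of the port-forwarding workaround (the Kong pod name below is a placeholder; keep the port-forward running while you issue the curl commands against the local port):

      $ kubectl get pods -n ricplt | grep kong          # find the Kong proxy pod name
      $ kubectl port-forward -n ricplt <kong-pod-name> 32088:32080 &
      $ curl --location --request POST "http://localhost:32088/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"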

      1. Anonymous

        Thanks a lot, it resolved the issue and we are able to proceed further.

        After executing the command below we see the following logs. Is this the correct behavior? If it is an error, how do we resolve it?

        $ kubectl logs -n ricxapp <name of POD retrieved from statement above>

        Error from server (BadRequest): container "hwxapp" in pod "ricxapp-hwxapp-684d8d675b-xz99d" is waiting to start: ContainerCreating

        1. Make sure the hwxapp Pod is running. It needs to pull the 'hwxapp' docker image, which may take some time.

          kubectl get pods -n ricxapp | grep hwxapp
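
          If it stays in ContainerCreating for a long time, the pod's events usually show whether the image pull is still in progress or failing (a sketch; the pod name is a placeholder):

          $ kubectl describe pod -n ricxapp <ricxapp-hwxapp-pod-name> | grep -A10 Events
          $ kubectl get events -n ricxapp --sort-by=.lastTimestamp | tail -20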

      2. Anonymous

        Hi Zhengwei, I have tried the solution in your blog, but I still receive the same error: AttributeError:  'ConnectionError' object has no attribute 'message'. Do you have any clues? Thanks.

      3. EDIT: Never mind, silly me.  For anyone wondering, the port-forwarding command needs to keep running for the forwarding to stay active.  So I just run it in a screen tab to keep it active, and run the curl commands in a different tab.

        Hi, I've tried this workaround but the port forwarding command just hangs and never completes.  Anyone experiencing the same issue? Kube cluster and pods seem healthy:

        root@ubuntu1804:~# kubectl -v=4 port-forward r4-infrastructure-kong-6c7f6db759-kkjtt 32088:32080 -n ricplt

        Forwarding from 127.0.0.1:32088 -> 32080

        (hangs after the previous message)



        1. Same trouble occurred here.

          Does anyone know how to fix it?

      4. hi,

        I'm able to invoke the API calls into the xApp on-boarder; this works, but the subsequent deploy always fails, saying the upstream server is timing out:

        curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data-binary "@./onboard.ts.url"


        curl --location --request POST "http://$(hostname):32080/appmgr/ric/v1/xapps" --header 'Content-Type: app
        lication/json' --data-raw '{"xappName": "trafficxapp"}'
        The upstream server is timing out

        Any suggestions for this ?

        1. I'm setting up the RIC (bronze release) for the first time and having the same problem 'The upstream server is timing out'.

          I have tried using the Kong pod's explicit IP address but get the same behavior. Any ideas? 

          Thanks

          *****

          Update...

          1. From the app manager's logs `kubectl logs -f -n ricplt deployment-ricplt-appmgr-6fd6664755-nqbvh` it was apparent that the Helm installation was referring to an unsupported repo - "Failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml". 
          2. Attaching a shell to the pod `kubectl exec -n ricplt --stdin --tty deployment-ricplt-appmgr-6fd6664755-2kmlc -- /bin/bash` allowed updating the stable repo as in this SO post (a sketch of the commands follows this list).
          3. App manager still times out on the API request - "msg":"Command failed: exit status 1 - Error: context deadline exceeded\n, retrying"}
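
          A sketch of the repo update referred to in step 2, run inside the appmgr pod's shell (this assumes the pod ships Helm 2 and that the old Google-hosted stable repo simply needs to be repointed to charts.helm.sh):

          helm repo remove stable
          helm repo add stable https://charts.helm.sh/stable
          helm repo update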



      5. After applying this trick I can send the curl JSON request to Kong, but I get a weird message: {"message":"no Route matched with those values"}

        # curl --location --request POST "http://${KONG_PROXY}:32080/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"
        {"message":"no Route matched with those values"}

        More detail with -v option:

        Note: Unnecessary use of -X or --request, POST is already inferred.
        * Trying 10.96.6.250...
        * TCP_NODELAY set
        * Connected to 10.96.6.250 (10.96.6.250) port 32080 (#0)
        > POST /onboard/api/v1/onboard/download HTTP/1.1
        > Host: 10.96.6.250:32080
        > User-Agent: curl/7.58.0
        > Accept: */*
        > Content-Type: application/json
        > Content-Length: 129
        >
        * upload completely sent off: 129 out of 129 bytes
        < HTTP/1.1 404 Not Found
        < Date: Mon, 04 Apr 2022 07:12:49 GMT
        < Content-Type: application/json; charset=utf-8
        < Connection: keep-alive
        < Content-Length: 48
        < X-Kong-Response-Latency: 1
        < Server: kong/1.4.3
        <
        * Connection #0 to host 10.96.6.250 left intact
        {"message":"no Route matched with those values"}


        Anyone know how to fix this one? 

        1. For the latest version, near-RT RIC removes the xApp onboarder and uses dms_cli for onboarding the xApp.

          You can find some information in the installation guide:

          https://docs.o-ran-sc.org/projects/o-ran-sc-ric-plt-ric-dep/en/latest/installation-guides.html

          However, the dms_cli needs helm3 instead of helm2.

          You can also find information in the discussion at the end of this page:

           https://wiki.o-ran-sc.org/display/GS/Hello+World+xApp+Use+Case+Flows
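
          For reference, onboarding with dms_cli looks roughly like this (a sketch based on the installation guide linked above; the exact flags and the chartmuseum setup vary between releases, so treat this invocation as an assumption and check dms_cli --help):

          $ dms_cli onboard <path-to-config-file.json> <path-to-schema-file.json>
          $ dms_cli get_charts_list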

  5. Anonymous

    In my case, the script only works with Helm 2.x. How about others?

  6. THANK You! This is great!

  7. Anonymous

    Hi all,

    I used the source code to build the hw_unit_test container, and when I execute hw_unit_test it stops and returns "SHAREDDATALAYER_ABORT, file src/redis/asynchirediscommanddispatcher.cpp, line 206: Required Redis module extension commands not available.
    Aborted (core dumped)".

    Thus, I can't test my hwxapp.

    any clues/suggestions? Thanks!

      

  8. Anonymous

    Hi all,

    One pod crashes every time. When I run the following command, I see one crashed pod. Did you meet the same case? Thanks so much for any suggestions.

    root@chuanhao-HP-EliteDesk-800-G4-SFF:~/dep/bin# kubectl get pods -n ricplt

    NAME READY STATUS RESTARTS AGE
    deployment-ricplt-a1mediator-66fcf76c66-f6kbh 1/1 Running 1 2m16s
    deployment-ricplt-alarmadapter-64d559f769-twfk7 1/1 Running 0 46s
    deployment-ricplt-appmgr-6fd6664755-7rs4g 1/1 Running 0 3m49s
    deployment-ricplt-e2mgr-8479fb5ff8-j9nzf 1/1 Running 0 3m
    deployment-ricplt-e2term-alpha-bcb457df4-r22nb 1/1 Running 0 2m39s
    deployment-ricplt-jaegeradapter-84558d855b-xfgd5 1/1 Running 0 78s
    deployment-ricplt-o1mediator-d8b9fcdf-tpz7v 1/1 Running 0 64s
    deployment-ricplt-rtmgr-9d4847788-scrxf 1/1 Running 1 3m26s
    deployment-ricplt-submgr-65dc9f4995-knzjd 1/1 Running 0 113s
    deployment-ricplt-vespamgr-7458d9b5d-mdmjx 1/1 Running 0 96s
    deployment-ricplt-xapp-onboarder-546b86b5c4-z2qd6 2/2 Running 0 4m16s
    r4-infrastructure-kong-6c7f6db759-44wdx 1/2 CrashLoopBackOff 6 4m52s
    r4-infrastructure-prometheus-alertmanager-75dff54776-qlp4g 2/2 Running 0 4m52s
    r4-infrastructure-prometheus-server-5fd7695-lr6z7 1/1 Running 0 4m52s
    statefulset-ricplt-dbaas-server-0 1/1 Running 0 4m33s


  9. Hi All,

    I am trying to deploy the RIC, and at Step 4, while running the command "./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml", I get the error below:

    root@ubuntu:~/dep/bin# ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml
    Error: unknown command "home" for "helm"
    Run 'helm --help' for usage.
    Successfully packaged chart and saved it to: /tmp/ric-common-3.3.2.tgz
    cp: cannot create regular file '/repository/local/': No such file or directory
    Successfully packaged chart and saved it to: /tmp/aux-common-3.0.0.tgz
    cp: cannot create regular file '/repository/local/': No such file or directory
    Successfully packaged chart and saved it to: /tmp/nonrtric-common-2.0.0.tgz
    cp: cannot create regular file '/repository/local/': No such file or directory
    Error: open /repository/local/index.yaml822716896: no such file or directory
    Error: no repositories configured
    Error: looks like "http://127.0.0.1:8879/charts" is not a valid chart repository or cannot be reached: Get "http://127.0.0.1:8879/charts/index.yaml": dial tcp 127.0.0.1:8879: connect: connection refused.

    I am unable to resolve this issue. Could anyone please help resolve it?

    Thanks,

    1. I had a different error, but maybe this helps: I updated the Helm version to 2.17.0 in the example_recipe.yaml file.

      I had this error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached:

    2. Try this, it might help:
      helm init --client-only --skip-refresh
      helm repo rm stable
      helm repo add stable https://charts.helm.sh/
      1. Anonymous

        Problem still exists

      2. Anonymous

        It is working, thanks!

      3. Anonymous

        Correct repo add link:

        helm repo add stable https://charts.helm.sh/stable
    3. Anonymous

      See whether you are using Helm 3.x.

      For the O-RAN stuff to work you need Helm 2.x.

      Helm 3.x commands are different; even `init` does not exist anymore.
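
      A quick way to check which Helm you have (output format differs; Helm 2 prints separate Client/Tiller versions, while Helm 3 prints a single v3.x version and no longer has `helm init`):

      $ helm version --short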

  10. Hey all,

    I am trying to deploy the RIC in its Cherry release version.

    But i am facing some issues:

    a) 

    The r4-alarmadapter isn't getting released.

    End of deploy-ric-platform script
    Error: release r4-alarmadapter failed: configmaps "configmap-ricplt-alarmadapter-appconfig" already exists
    root@max-near-rt-ric-cherry:/home/ubuntu/dep/bin# helm ls --all r4-alarmadapter
    + helm ls --all r4-alarmadapter
    NAME   REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
    r4-alarmadapter  1 Mon Jan 11 09:14:08 2021 FAILED alarmadapter-3.0.0 1.0 ricplt

    b)

    The pods aren't getting to their Running state. This might be connected to issue a) but I am not sure.

    kubectl get pods --all-namespaces:

    NAMESPACE     NAME                                                         READY   STATUS              RESTARTS   AGE
    kube-system   coredns-5644d7b6d9-794qq                                     1/1     Running             1          29m
    kube-system   coredns-5644d7b6d9-ph6tt                                     1/1     Running             1          29m
    kube-system   etcd-max-near-rt-ric-cherry                                  1/1     Running             1          28m
    kube-system   kube-apiserver-max-near-rt-ric-cherry                        1/1     Running             1          28m
    kube-system   kube-controller-manager-max-near-rt-ric-cherry               1/1     Running             1          28m
    kube-system   kube-flannel-ds-ljz7w                                        1/1     Running             1          29m
    kube-system   kube-proxy-cdkf4                                             1/1     Running             1          29m
    kube-system   kube-scheduler-max-near-rt-ric-cherry                        1/1     Running             1          28m
    kube-system   tiller-deploy-68bf6dff8f-xfkwd                               1/1     Running             1          27m
    ricinfra      deployment-tiller-ricxapp-d4f98ff65-wwbhx                    0/1     ContainerCreating   0          25m
    ricinfra      tiller-secret-generator-2nsb2                                0/1     ContainerCreating   0          25m
    ricplt        deployment-ricplt-a1mediator-66fcf76c66-njphv                0/1     ContainerCreating   0          23m
    ricplt        deployment-ricplt-appmgr-6fd6664755-r577d                    0/1     Init:0/1            0          24m
    ricplt        deployment-ricplt-e2mgr-6dfb6c4988-tb26k                     0/1     ContainerCreating   0          24m
    ricplt        deployment-ricplt-e2term-alpha-64965c46c6-5d59x              0/1     ContainerCreating   0          23m
    ricplt        deployment-ricplt-jaegeradapter-76ddbf9c9-fw4sh              0/1     ContainerCreating   0          22m
    ricplt        deployment-ricplt-o1mediator-d8b9fcdf-86qgg                  0/1     ContainerCreating   0          22m
    ricplt        deployment-ricplt-rtmgr-6d559897d8-jvbsb                     0/1     ContainerCreating   0          24m
    ricplt        deployment-ricplt-submgr-65bcd95469-nc8pq                    0/1     ContainerCreating   0          23m
    ricplt        deployment-ricplt-vespamgr-7458d9b5d-xlt8m                   0/1     ContainerCreating   0          22m
    ricplt        deployment-ricplt-xapp-onboarder-5958856fc8-kw9jx            0/2     ContainerCreating   0          25m
    ricplt        r4-infrastructure-kong-6c7f6db759-q5psw                      0/2     ContainerCreating   0          25m
    ricplt        r4-infrastructure-prometheus-alertmanager-75dff54776-mb8hn   0/2     ContainerCreating   0          25m
    ricplt        r4-infrastructure-prometheus-server-5fd7695-bvk74            1/1     Running             0          25m
    ricplt        statefulset-ricplt-dbaas-server-0                            0/1     ContainerCreating   0          25m

    Thanks in advance!
    1. Hello, Max

      I had the same issue here; I solved it after restarting the VM. It seems I had not uncommented the RIC entry in infra.c under dep/tools/k8s/etc,

      and the ONAP pods are still running.

      Thanks,

  11. Hi All,

    I am trying to install the rmr_nng library as a requirement of the ric-app-kpimon xApp.

    https://github.com/o-ran-sc/ric-plt-lib-rmr

    https://github.com/nanomsg/nng

    but I am getting the error below (snippet):

    -- Installing: /root/ric-plt-lib-rmr/.build/lib/cmake/nng/nng-config-version.cmake
    -- Installing: /root/ric-plt-lib-rmr/.build/bin/nngcat
    -- Set runtime path of "/root/ric-plt-lib-rmr/.build/bin/nngcat" to "/root/ric-plt-lib-rmr/.build/lib"
    [ 40%] No test step for 'ext_nng'
    [ 41%] Completed 'ext_nng'
    [ 41%] Built target ext_nng
    Scanning dependencies of target nng_objects
    [ 43%] Building C object src/rmr/nng/CMakeFiles/nng_objects.dir/src/rmr_nng.c.o
    In file included from /root/ric-plt-lib-rmr/src/rmr/nng/src/rmr_nng.c:70:0:
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c: In function ‘roll_tables’:
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c:406:28: error: ‘uta_ctx_t {aka struct uta_ctx}’ has no member named ‘rtgate’; did you mean ‘rtable’?
    pthread_mutex_lock( ctx->rtgate ); // must hold lock to move to active
    ^~~~~~
    rtable
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c:409:30: error: ‘uta_ctx_t {aka struct uta_ctx}’ has no member named ‘rtgate’; did you mean ‘rtable’?
    pthread_mutex_unlock( ctx->rtgate );
    ^~~~~~
    rtable
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c: In function ‘parse_rt_rec’:
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c:858:12: error: ‘uta_ctx_t {aka struct uta_ctx}’ has no member named ‘rtable_ready’; did you mean ‘rmr_ready’?
    ctx->rtable_ready = 1; // route based sends can now happen
    ^~~~~~~~~~~~
    rmr_ready

    ...

    ...

    I am unable to resolve this issue. Could anyone please help resolve it?

    Thanks,


  12. Hi All, 

    I am trying to get this up and running (first time ever).  

    For anyone interested, just wanted to highlight that if Step 4 fails with the following:

    root@ubuntu1804:~/dep/bin# ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml
    Successfully packaged chart and saved it to: /tmp/ric-common-3.3.2.tgz
    Successfully packaged chart and saved it to: /tmp/aux-common-3.0.0.tgz
    Successfully packaged chart and saved it to: /tmp/nonrtric-common-2.0.0.tgz
    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
    ****************************************************************************************************************
                                                         ERROR                                                      
    ****************************************************************************************************************
    Can't locate the ric-common helm package in the local repo. Please make sure that it is properly installed.
    ****************************************************************************************************************

    You just need to initialize Helm repositories with the following (disregard the suggestion in the above error output as it's deprecated from Nov 2020):

    helm init --stable-repo-url=https://charts.helm.sh/stable --client-only 

    and then run Step 4.
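
    Optionally, to confirm the stable repo is in place before re-running Step 4 (a quick check, assuming Helm 2):

    $ helm repo list
    # should list: stable  https://charts.helm.sh/stable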



    1. This works for me. Thanks.

  13. Anonymous

    Hi,

    We are trying to compile the pods on Arm and test them on an Arm-based platform. We are failing for xapp-onboarder, ric-plt-e, and ric-plt-alarmadapter. Has anyone tried running the O-RAN RIC on Arm? Could someone point us to the recipe for building the images from source?

  14. Hi,

    I'm in the process of porting the near-realtime RIC to Arm. The process has been straightforward up until now. Does anyone have instructions on how all the individual components are built, or where they can be sourced outside of the prebuilt images? I have been able to build many items needed to complete the build but am still having issues.

    images built:
    ric-plt-rtmgr:0.6.3
    it-dep-init:0.0.1
    ric-plt-rtmgr:0.6.3
    ric-plt-appmgr:0.4.3
    bldr-alpine3-go:2.0.0
    bldr-ubuntu18-c-go:1.9.0
    bldr-ubuntu18-c-go:9-u18.04
    ric-plt-a1:2.1.9
    ric-plt-dbaas:0.2.2
    ric-plt-e2mgr:5.0.8
    ric-plt-submgr:0.4.3
    ric-plt-vespamgr:0.4.0
    ric-plt-o1:0.4.4
    bldr-alpine3:12-a3.11
    bldr-alpine3-mdclog:0.0.4
    bldr-alpine3-rmr:4.0.5
    
    
    images needed:
    xapp-onboarder:1.0.7
    ric-plt-e2:5.0.8
    ric-plt-alarmadapter:0.4.5
    
    

    Please keep in mind I'm not sure whether I built these all correctly, since I have not found any instructions. The latest output of my effort can be seen here:

    NAME                                                         READY   STATUS                       RESTARTS   AGE
    deployment-ricplt-a1mediator-cb47dc85d-7cr9b                 0/1     CreateContainerConfigError   0          76s
    deployment-ricplt-e2term-alpha-8665fc56f6-m94ql              0/1     ImagePullBackOff             0          92s
    deployment-ricplt-jaegeradapter-76ddbf9c9-hg5pr              0/1     Error                        2          26s
    deployment-ricplt-o1mediator-86587dd94f-gbtvj                0/1     CreateContainerConfigError   0          9s
    deployment-ricplt-rtmgr-67c9bdccf6-g4nd6                     0/1     CreateContainerConfigError   0          63m
    deployment-ricplt-submgr-6ffd499fd5-6txqb                    0/1     CreateContainerConfigError   0          59s
    deployment-ricplt-vespamgr-68b68b78db-5m579                  1/1     Running                      0          43s
    deployment-ricplt-xapp-onboarder-579967799d-bp74x            0/2     ImagePullBackOff             17         63m
    r4-infrastructure-kong-6c7f6db759-4vlms                      0/2     CrashLoopBackOff             17         64m
    r4-infrastructure-prometheus-alertmanager-75dff54776-cq7fl   2/2     Running                      0          64m
    r4-infrastructure-prometheus-server-5fd7695-5jrwf            1/1     Running                      0          64m

    Any help would be great. My end goal is to push these changes upstream and to provide CI/CD for ARM images. 

    1. NAMESPACE     NAME                                                         READY   STATUS      RESTARTS   AGE
      kube-system   coredns-5644d7b6d9-6nwr8                                     1/1     Running     14         20d
      kube-system   coredns-5644d7b6d9-8lc5c                                     1/1     Running     14         20d
      kube-system   etcd-ip-10-0-0-199                                           1/1     Running     14         20d
      kube-system   kube-apiserver-ip-10-0-0-199                                 1/1     Running     14         20d
      kube-system   kube-controller-manager-ip-10-0-0-199                        1/1     Running     14         20d
      kube-system   kube-flannel-ds-4w4jh                                        1/1     Running     14         20d
      kube-system   kube-proxy-955gz                                             1/1     Running     14         20d
      kube-system   kube-scheduler-ip-10-0-0-199                                 1/1     Running     14         20d
      kube-system   tiller-deploy-7d5568dd96-ttbg5                               1/1     Running     1          20h
      ricinfra      deployment-tiller-ricxapp-6b6b4c787-w2nnw                    1/1     Running     0          2m40s
      ricinfra      tiller-secret-generator-c7z86                                0/1     Completed   0          2m40s
      ricplt        deployment-ricplt-e2term-alpha-8665fc56f6-q8cgq              0/1     Running     1          72s
      ricplt        deployment-ricplt-jaegeradapter-85cbdfbfbc-xlxt2             1/1     Running     0          12s
      ricplt        deployment-ricplt-rtmgr-67c9bdccf6-c9fck                     1/1     Running     0          101s
      ricplt        deployment-ricplt-submgr-6ffd499fd5-pmx4m                    1/1     Running     0          43s
      ricplt        deployment-ricplt-vespamgr-68b68b78db-4skbw                  1/1     Running     0          28s
      ricplt        deployment-ricplt-xapp-onboarder-57f78cfdf-xrtss             2/2     Running     0          2m10s
      ricplt        r4-infrastructure-kong-84cd44455-6pgxh                       2/2     Running     1          2m40s
      ricplt        r4-infrastructure-prometheus-alertmanager-75dff54776-fl4xh   2/2     Running     0          2m40s
      ricplt        r4-infrastructure-prometheus-server-5fd7695-jv8s4            1/1     Running     0          2m40s

      I have been able to get all services up except for e2term-alpha; I am getting an error:

      environments service name is SERVICE_RICPLT_E2TERM_RMR_ALPHA_SERVICE_HOST
      service ip is 10.X.X.95
      nano=38000
      loglevel=error
      volume=log
      #The key name of the environment holds the local ip address
      #ip address of the E2T in the RMR
      local-ip=10.X.X.95
      #prometheus mode can be pull or push
      prometheusMode=pull
      #timeout can be from 5 seconds to 300 seconds default is 10
      prometheusPushTimeOut=10
      prometheusPushAddr=127.0.0.1:7676
      prometheusPort=8088
      #trace is start, stop
      trace=stop
      external-fqdn=10.X.X.95
      #put pointer to the key that point to pod name
      pod_name=E2TERM_POD_NAME
      sctp-port=36422
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = nano=38000 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = nano value = 38000"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = loglevel=error "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = loglevel value = error"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = volume=log "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = volume value = log"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = local-ip=10.107.0.95 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = local-ip value = 10.107.0.95"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusMode=pull "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusMode value = pull"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPushTimeOut=10 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPushTimeOut value = 10"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPushAddr=127.0.0.1:7676 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPushAddr value = 127.0.0.1:7676"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPort=8088 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPort value = 8088"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = trace=stop "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = trace value = stop"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = external-fqdn=10.107.0.95 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = external-fqdn value = 10.107.0.95"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = pod_name=E2TERM_POD_NAME "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = pod_name value = E2TERM_POD_NAME"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = sctp-port=36422 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = sctp-port value = 36422"}
      1613156066 22/RMR [INFO] ric message routing library on SI95/g mv=3 flg=00 (84423e6 4.4.6 built: Feb 11 2021)
      e2: malloc.c:2401: sysmalloc: Assertion `(old_top == initial_top (av) && old_size == 0) || ((unsigned long) (old_size) >= MINSIZE && prev_inuse (old_top) && ((unsigned long) old_end & (pagesize - 1)) == 0)' failed.


      I'm not sure what to do next.

      1. Anonymous

        Did you get this resolved? I too have the same problem.

        By the way, I think "tiller-secret-generator-c7z86" is completed but not running. There seems to be no error with e2-term-alpha.

          1. I worked with Scott Daniels on the RMR library; changes he made to fix a race condition resolved this. I'm using version 4.6.1, and no additional fixes were needed.

  15. Hi all,

    I deployed the RIC and the SMO in two different VMs, and I cannot manage to establish a connection between them to validate the A1 workflows. I suspect the reason is that the RIC VM is not exposing its services to the outside world. Could anyone help me address the issue? For example, if you have a setup like mine, I would appreciate it if you could share the "example_recipe.yaml" you used to deploy the RIC platform.

    I paste here the output of the services running in the ricplt namespace, where all services show EXTERNAL-IP = <none>. Please let me know if this is the expected behavior.

    # kubectl get services -n ricplt
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    aux-entry ClusterIP 10.99.120.87 <none> 80/TCP,443/TCP 3d6h
    r4-infrastructure-kong-proxy NodePort 10.100.225.53 <none> 32080:32080/TCP,32443:32443/TCP 3d6h
    r4-infrastructure-prometheus-alertmanager ClusterIP 10.105.181.117 <none> 80/TCP 3d6h
    r4-infrastructure-prometheus-server ClusterIP 10.108.80.71 <none> 80/TCP 3d6h
    service-ricplt-a1mediator-http ClusterIP 10.111.176.180 <none> 10000/TCP 3d6h
    service-ricplt-a1mediator-rmr ClusterIP 10.105.1.73 <none> 4561/TCP,4562/TCP 3d6h
    service-ricplt-alarmadapter-http ClusterIP 10.104.93.39 <none> 8080/TCP 3d6h
    service-ricplt-alarmadapter-rmr ClusterIP 10.99.249.253 <none> 4560/TCP,4561/TCP 3d6h
    service-ricplt-appmgr-http ClusterIP 10.101.144.152 <none> 8080/TCP 3d6h
    service-ricplt-appmgr-rmr ClusterIP 10.110.71.188 <none> 4561/TCP,4560/TCP 3d6h
    service-ricplt-dbaas-tcp ClusterIP None <none> 6379/TCP 3d6h
    service-ricplt-e2mgr-http ClusterIP 10.103.232.14 <none> 3800/TCP 3d6h
    service-ricplt-e2mgr-rmr ClusterIP 10.105.41.119 <none> 4561/TCP,3801/TCP 3d6h
    service-ricplt-e2term-rmr-alpha ClusterIP 10.110.201.91 <none> 4561/TCP,38000/TCP 3d6h
    service-ricplt-e2term-sctp-alpha NodePort 10.106.243.219 <none> 36422:32222/SCTP 3d6h
    service-ricplt-jaegeradapter-agent ClusterIP 10.102.5.160 <none> 5775/UDP,6831/UDP,6832/UDP 3d6h
    service-ricplt-jaegeradapter-collector ClusterIP 10.99.78.42 <none> 14267/TCP,14268/TCP,9411/TCP 3d6h
    service-ricplt-jaegeradapter-query ClusterIP 10.100.166.185 <none> 16686/TCP 3d6h
    service-ricplt-o1mediator-http ClusterIP 10.101.145.78 <none> 9001/TCP,8080/TCP,3000/TCP 3d6h
    service-ricplt-o1mediator-tcp-netconf NodePort 10.110.230.66 <none> 830:30830/TCP 3d6h
    service-ricplt-rtmgr-http ClusterIP 10.104.100.69 <none> 3800/TCP 3d6h
    service-ricplt-rtmgr-rmr ClusterIP 10.98.40.180 <none> 4561/TCP,4560/TCP 3d6h
    service-ricplt-submgr-http ClusterIP None <none> 3800/TCP 3d6h
    service-ricplt-submgr-rmr ClusterIP None <none> 4560/TCP,4561/TCP 3d6h
    service-ricplt-vespamgr-http ClusterIP 10.102.12.213 <none> 8080/TCP,9095/TCP 3d6h
    service-ricplt-xapp-onboarder-http ClusterIP 10.107.167.33 <none> 8888/TCP,8080/TCP 3d6h


    This is the detail of the kong service in the RIC VM:

    # kubectl describe service r4-infrastructure-kong-proxy -n ricplt
    Name: r4-infrastructure-kong-proxy
    Namespace: ricplt
    Labels: app.kubernetes.io/instance=r4-infrastructure
    app.kubernetes.io/managed-by=Tiller
    app.kubernetes.io/name=kong
    app.kubernetes.io/version=1.4
    helm.sh/chart=kong-0.36.6
    Annotations: <none>
    Selector: app.kubernetes.io/component=app,app.kubernetes.io/instance=r4-infrastructure,app.kubernetes.io/name=kong
    Type: NodePort
    IP: 10.100.225.53
    Port: kong-proxy 32080/TCP
    TargetPort: 32080/TCP
    NodePort: kong-proxy 32080/TCP
    Endpoints: 10.244.0.26:32080
    Port: kong-proxy-tls 32443/TCP
    TargetPort: 32443/TCP
    NodePort: kong-proxy-tls 32443/TCP
    Endpoints: 10.244.0.26:32443
    Session Affinity: None
    External Traffic Policy: Cluster
    Events: <none>


    BR


    Daniel

  16. Deploy RIC using Recipe failed:

    When running './deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml' it failed with the following message:

    Successfully packaged chart and saved it to: /tmp/ric-common-3.3.2.tgz

    Successfully packaged chart and saved it to: /tmp/aux-common-3.0.0.tgz

    Successfully packaged chart and saved it to: /tmp/nonrtric-common-2.0.0.tgz

    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).

    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)

    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).

    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)

    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).

    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)

    ****************************************************************************************************************

                                                         ERROR                                                      

    ****************************************************************************************************************

    Can't locate the ric-common helm package in the local repo. Please make sure that it is properly installed.

    ****************************************************************************************************************

    Running `helm init` as suggested doesn't help. Any idea?

  17. failures on two services: ricplt-e2mgr and ricplt-a1mediator.

    NAMESPACE     NAME                                                         READY   STATUS             RESTARTS   AGE
    kube-system   coredns-5644d7b6d9-6mk6z                                     1/1     Running            5          7d21h
    kube-system   coredns-5644d7b6d9-wxtqf                                     1/1     Running            5          7d21h
    kube-system   etcd-kubernetes-master                                       1/1     Running            5          7d21h
    kube-system   kube-apiserver-kubernetes-master                             1/1     Running            5          7d21h
    kube-system   kube-controller-manager-kubernetes-master                    1/1     Running            5          7d21h
    kube-system   kube-flannel-ds-gb67g                                        1/1     Running            5          7d21h
    kube-system   kube-proxy-hn2sq                                             1/1     Running            5          7d21h
    kube-system   kube-scheduler-kubernetes-master                             1/1     Running            5          7d21h
    kube-system   tiller-deploy-68bf6dff8f-4hlm9                               1/1     Running            9          7d21h
    ricinfra      deployment-tiller-ricxapp-6b6b4c787-ssvts                    1/1     Running            2          2d21h
    ricinfra      tiller-secret-generator-lgpxt                                0/1     Completed          0          2d21h
    ricplt        deployment-ricplt-a1mediator-cb47dc85d-nq4kc                 0/1     CrashLoopBackOff   21         66m
    ricplt        deployment-ricplt-appmgr-5fbcf5c7f7-dsf9q                    1/1     Running            1          2d1h
    ricplt        deployment-ricplt-e2mgr-7dbfbbb796-2nvgz                     0/1     CrashLoopBackOff   6          8m31s
    ricplt        deployment-ricplt-e2term-alpha-8665fc56f6-lcfwr              1/1     Running            4          2d21h
    ricplt        deployment-ricplt-jaegeradapter-85cbdfbfbc-ln7sk             1/1     Running            2          2d21h
    ricplt        deployment-ricplt-o1mediator-86587dd94f-hbs47                1/1     Running            1          2d1h
    ricplt        deployment-ricplt-rtmgr-67c9bdccf6-ckm9s                     1/1     Running            2          2d21h
    ricplt        deployment-ricplt-submgr-6ffd499fd5-z7x8l                    1/1     Running            2          2d21h
    ricplt        deployment-ricplt-vespamgr-68b68b78db-f6mw8                  1/1     Running            2          2d21h
    ricplt        deployment-ricplt-xapp-onboarder-57f78cfdf-4787b             2/2     Running            5          2d21h
    ricplt        r4-infrastructure-kong-84cd44455-rfnq4                       2/2     Running            7          2d21h
    ricplt        r4-infrastructure-prometheus-alertmanager-75dff54776-qzgnq   2/2     Running            4          2d21h
    ricplt        r4-infrastructure-prometheus-server-5fd7695-lsm8r            1/1     Running            2          2d21h

    deployment-ricplt-e2mgr errors:

    {"crit":"INFO","ts":1616189522367,"id":"E2Manager","msg":"#app.main - Configuration {logging.logLevel: info, http.port: 3800, rmr: { port: 3801, maxMsgSize: 65536}, routingManager.baseUrl: http://service-ricplt-rtmgr-http:3800/ric/v1/handles/, notificationResponseBuffer: 100, bigRedButtonTimeoutSec: 5, maxRnibConnectionAttempts: 3, rnibRetryIntervalMs: 10, keepAliveResponseTimeoutMs: 360000, keepAliveDelayMs: 120000, e2tInstanceDeletionTimeoutMs: 0, globalRicId: { ricId: AACCE, mcc: 310, mnc: 411}, rnibWriter: { stateChangeMessageChannel: RAN_CONNECTION_STATUS_CHANGE, ranManipulationChannel: RAN_MANIPULATION}","mdc":{"time":"2021-03-19 21:32:02.367"}}
    dial tcp 127.0.0.1:6379: connect: connection refused
    {"crit":"ERROR","ts":1616189522421,"id":"E2Manager","msg":"#app.main - Failed setting GENERAL key","mdc":{"time":"2021-03-19 21:32:02.421"}}
    

    deployment-ricplt-a1mediator errors:

    {"ts": 1616189735092, "crit": "DEBUG", "id": "a1.run", "mdc": {}, "msg": "A1Mediator starts"}
    {"ts": 1616189735092, "crit": "DEBUG", "id": "a1.run", "mdc": {}, "msg": "Starting RMR thread with RMR_RTG_SVC 4561, RMR_SEED_RT /opt/route/local.rt"}
    {"ts": 1616189735092, "crit": "DEBUG", "id": "a1.run", "mdc": {}, "msg": "RMR initialization must complete before webserver can start"}
    {"ts": 1616189735092, "crit": "DEBUG", "id": "a1.a1rmr", "mdc": {}, "msg": "Waiting for rmr to initialize.."}
    1616189735 1/RMR [INFO] ric message routing library on SI95 p=4562 mv=3 flg=02 (fd4477a 4.5.2 built: Mar 19 2021)
    {"ts": 1616189735593, "crit": "DEBUG", "id": "a1.a1rmr", "mdc": {}, "msg": "Work loop starting"}{"ts": 1616189735593, "crit": "DEBUG", "id": "a1.run", "mdc": {}, "msg": "RMR initialization complete"}{"ts": 1616189735594, "crit": "DEBUG", "id": "a1.run", "mdc": {}, "msg": "Starting gevent webserver on port 10000"}
    1616189736 1/RMR [INFO] sends: ts=1616189736 src=service-ricplt-a1mediator-rmr.ricplt:4562 target=service-ricxapp-admctrl-rmr.ricxapp:4563 open=0 succ=0 fail=0 (hard=0 soft=0)
    [2021-03-19 21:35:42,651] ERROR in app: Exception on /a1-p/healthcheck [GET]
    Traceback (most recent call last):
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/connection.py", line 1185, in get_connection
        connection = self._available_connections.pop()
    IndexError: pop from empty listDuring handling of the above exception, another exception occurred:Traceback (most recent call last):
      File "/home/a1user/.local/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
        response = self.full_dispatch_request()
      File "/home/a1user/.local/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
        rv = self.handle_user_exception(e)
      File "/home/a1user/.local/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
        reraise(exc_type, exc_value, tb)
      File "/home/a1user/.local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
        raise value
      File "/home/a1user/.local/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
        rv = self.dispatch_request()
      File "/home/a1user/.local/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
        return self.view_functions[rule.endpoint](**req.view_args)
      File "/home/a1user/.local/lib/python3.8/site-packages/connexion/decorators/decorator.py", line 48, in wrapper
        response = function(request)
      File "/home/a1user/.local/lib/python3.8/site-packages/connexion/decorators/uri_parsing.py", line 144, in wrapper
        response = function(request)
      File "/home/a1user/.local/lib/python3.8/site-packages/connexion/decorators/parameter.py", line 121, in wrapper
        return function(**kwargs)
      File "/home/a1user/.local/lib/python3.8/site-packages/a1/controller.py", line 80, in get_healthcheck
        if not data.SDL.healthcheck():
      File "/home/a1user/.local/lib/python3.8/site-packages/ricxappframe/xapp_sdl.py", line 655, in healthcheck
        return self._sdl.is_active()
      File "/home/a1user/.local/lib/python3.8/site-packages/ricsdl/syncstorage.py", line 138, in is_active
        return self.__dbbackend.is_connected()
      File "/home/a1user/.local/lib/python3.8/site-packages/ricsdl/backend/redis.py", line 181, in is_connected
        return self.__redis.ping()
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/client.py", line 1378, in ping
        return self.execute_command('PING')
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/client.py", line 898, in execute_command
        conn = self.connection or pool.get_connection(command_name, **options)
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/connection.py", line 1187, in get_connection
        connection = self.make_connection()
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/connection.py", line 1227, in make_connection
        return self.connection_class(**self.connection_kwargs)
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/connection.py", line 509, in __init__
        self.port = int(port)
    TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
    ::ffff:10.244.0.1 - - [2021-03-19 21:35:42] "GET /a1-p/healthcheck HTTP/1.1" 500 387 0.005291
    [2021-03-19 21:35:43,474] ERROR in app: Exception on /a1-p/healthcheck [GET]
    Traceback (most recent call last):
      File "/home/a1user/.local/lib/python3.8/site-packages/redis/connection.py", line 1185, in get_connection
        connection = self._available_connections.pop()
    IndexError: pop from empty list
    


    Looks like Redis is missing; which service starts or runs this?

    1. dbaas is failing; these are the logs I see. I built the dbaas docker image as follows within the ric-plt-dbaas git repo:

      docker build --network=host -f docker/Dockerfile.redis -t ric-plt-dbaas:0.2.3 .

      logs:

      284:C 23 Mar 2021 20:02:33.612 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
      284:C 23 Mar 2021 20:02:33.612 # Redis version=5.0.9, bits=64, commit=00000000, modified=0, pid=284, just started
      284:C 23 Mar 2021 20:02:33.612 # Configuration loaded
      284:M 23 Mar 2021 20:02:33.613 * Running mode=standalone, port=6379.
      284:M 23 Mar 2021 20:02:33.613 # Server initialized
      284:M 23 Mar 2021 20:02:33.613 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
      284:M 23 Mar 2021 20:02:33.613 * Module 'exstrings' loaded from /usr/local/libexec/redismodule/libredismodule.so
      284:M 23 Mar 2021 20:02:33.614 * Ready to accept connections
      284:signal-handler (1616529780) Received SIGTERM scheduling shutdown...
      284:M 23 Mar 2021 20:03:00.374 # User requested shutdown...
      284:M 23 Mar 2021 20:03:00.374 # Redis is now ready to exit, bye bye...


      I followed the ci-management yaml file for build arguments.

      Any ideas?

      1. ric-plt-dbaas:0.2.3 contains busybox with the timeout command. The -t option is not available in the Arm version; removing that option from the probes fixes this issue.
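
        To illustrate the difference (a sketch; the exact probe command used in the dbaas chart is an assumption here, only the busybox timeout syntax is the point):

        # probe with the older busybox syntax, fails on the Arm image:
        #   timeout -t 10 redis-cli -p 6379 ping
        # probe without -t, works on the Arm image:
        #   timeout 10 redis-cli -p 6379 ping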

  18. Anonymous

    Hello Everyone, 

    I am trying to install the cherry release on my VM. I am able to get all the containers running except for alarm manager.

    This is the error I get after running ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml

    Deleting outdated charts
    Error: render error in "alarmmanager/templates/serviceaccount.yaml": template: alarmmanager/templates/serviceaccount.yaml:20:11: executing "alarmmanager/templates/serviceaccount.yaml" at <include "common.serv...>: error calling include: template: no template "common.serviceaccountname.alarmmanager" associated with template "gotpl"


    When I run kubectl get pods --all-namespaces I get the following output


    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5644d7b6d9-4g4hg 1/1 Running 1 7h49m
    kube-system coredns-5644d7b6d9-kcp7l 1/1 Running 1 7h49m
    kube-system etcd-machine2 1/1 Running 1 7h48m
    kube-system kube-apiserver-machine2 1/1 Running 1 7h48m
    kube-system kube-controller-manager-machine2 1/1 Running 1 7h49m
    kube-system kube-flannel-ds-jtpk4 1/1 Running 1 7h49m
    kube-system kube-proxy-lk78t 1/1 Running 1 7h49m
    kube-system kube-scheduler-machine2 1/1 Running 1 7h48m
    kube-system tiller-deploy-68bf6dff8f-n945d 1/1 Running 1 7h48m
    ricinfra deployment-tiller-ricxapp-d4f98ff65-5bwxw 1/1 Running 0 20m
    ricinfra tiller-secret-generator-nvpnk 0/1 Completed 0 20m
    ricplt deployment-ricplt-a1mediator-66fcf76c66-wc5l8 1/1 Running 0 18m
    ricplt deployment-ricplt-appmgr-6fd6664755-m5tn9 1/1 Running 0 19m
    ricplt deployment-ricplt-e2mgr-6dfb6c4988-gwslx 1/1 Running 0 19m
    ricplt deployment-ricplt-e2term-alpha-7c8dc7bd94-5jwld 1/1 Running 0 18m
    ricplt deployment-ricplt-jaegeradapter-76ddbf9c9-lpg7b 1/1 Running 0 17m
    ricplt deployment-ricplt-o1mediator-d8b9fcdf-sf5hf 1/1 Running 0 17m
    ricplt deployment-ricplt-rtmgr-6d559897d8-txr4s 1/1 Running 2 19m
    ricplt deployment-ricplt-submgr-5c5fb9f65f-x9z5k 1/1 Running 0 18m
    ricplt deployment-ricplt-vespamgr-7458d9b5d-tp5xb 1/1 Running 0 18m
    ricplt deployment-ricplt-xapp-onboarder-5958856fc8-46mfb 2/2 Running 0 19m
    ricplt r4-infrastructure-kong-6c7f6db759-lgjql 2/2 Running 1 20m
    ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-z9ztf 2/2 Running 0 20m
    ricplt r4-infrastructure-prometheus-server-5fd7695-4cg2m 1/1 Running 0 20m

     There are only 14 ricplt pods running


    My guess is that since the tiller-secret-generator-nvpnk is not "Running", the alarmadapter container is not created.


    Any ideas? I am not sure how to proceed next

    1. Anonymous

      Hi, I am stuck at the same point. Were you able to fix this issue?

      1. Anonymous

        I also got the same error and am stuck at the same point.



        NAMESPACE NAME READY STATUS RESTARTS AGE
        kube-system coredns-5644d7b6d9-hp7pl 1/1 Running 1 88m
        kube-system coredns-5644d7b6d9-vbsf4 1/1 Running 1 88m
        kube-system etcd-rkota-nrtric 1/1 Running 1 87m
        kube-system kube-apiserver-rkota-nrtric 1/1 Running 1 87m
        kube-system kube-controller-manager-rkota-nrtric 1/1 Running 3 88m
        kube-system kube-flannel-ds-kj7fd 1/1 Running 1 88m
        kube-system kube-proxy-dwfmq 1/1 Running 1 88m
        kube-system kube-scheduler-rkota-nrtric 1/1 Running 2 87m
        kube-system tiller-deploy-68bf6dff8f-clzp6 1/1 Running 1 87m
        ricinfra deployment-tiller-ricxapp-d4f98ff65-7dwbf 1/1 Running 0 23m
        ricinfra tiller-secret-generator-dbrts 0/1 Completed 0 23m
        ricplt deployment-ricplt-a1mediator-66fcf76c66-7prjf 1/1 Running 0 20m
        ricplt deployment-ricplt-appmgr-6fd6664755-vfswc 1/1 Running 0 21m
        ricplt deployment-ricplt-e2mgr-6dfb6c4988-5j5mr 1/1 Running 0 20m
        ricplt deployment-ricplt-e2term-alpha-64965c46c6-lhgvp 1/1 Running 0 20m
        ricplt deployment-ricplt-jaegeradapter-76ddbf9c9-vzkbb 1/1 Running 0 19m
        ricplt deployment-ricplt-o1mediator-d8b9fcdf-8gcgp 1/1 Running 0 19m
        ricplt deployment-ricplt-rtmgr-6d559897d8-ts5xh 1/1 Running 6 20m
        ricplt deployment-ricplt-submgr-65bcd95469-v7bc2 1/1 Running 0 20m
        ricplt deployment-ricplt-vespamgr-7458d9b5d-dbqzv 1/1 Running 0 19m
        ricplt deployment-ricplt-xapp-onboarder-5958856fc8-jzp4g 2/2 Running 0 22m
        ricplt r4-infrastructure-kong-6c7f6db759-4hbgh 2/2 Running 1 23m
        ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-vn6d8 2/2 Running 0 23m
        ricplt r4-infrastructure-prometheus-server-5fd7695-gvtm5 1/1 Running 0 23m
        ricplt statefulset-ricplt-dbaas-server-0 1/1 Running 0 22m


    2. Hi, I also hit this problem.  I tried using "git clone http://gerrit.o-ran-sc.org/r/it/dep" to install the near-RT RIC,

      and it finished successfully.  But I don't know how to resolve the problem when using "git clone http://gerrit.o-ran-sc.org/r/it/dep -b cherry" to install.

      Have you resolved the problem?

  19. Anonymous

    Hi all,

    My e2term repeatedly restarts. kubectl describe says the liveness probe has failed and I can verify that by logging into the pod. Below is the kubectl describe and kubectl logs output. I am using the latest source from http://gerrit.o-ran-sc.org/r/it/dep with the included recipe yaml.

    Any ideas where I can look to debug this?

    Thanks.


    root@e2term-alpha:/opt/e2# ./rmr_probe -h 0.0.0.0:38000 -t 10
    1617905013 88/RMR [INFO] ric message routing library on SI95/g mv=3 flg=01 (84423e6 4.4.6 built: Dec 4 2020)
    [FAIL] too few messages recevied during timeout window: wanted 1 got 0


    $ sudo kubectl --namespace=ricplt describe pod/deployment-ricplt-e2term-alpha-57ff7bf4f6-jttpk
    Name: deployment-ricplt-e2term-alpha-57ff7bf4f6-jttpk
    Namespace: ricplt
    Priority: 0
    Node: vm/192.168.122.166
    Start Time: Thu, 08 Apr 2021 14:00:26 -0400
    Labels: app=ricplt-e2term-alpha
    pod-template-hash=57ff7bf4f6
    release=r4-e2term
    Annotations: cni.projectcalico.org/podIP: 10.1.238.220/32
    cni.projectcalico.org/podIPs: 10.1.238.220/32
    Status: Running
    IP: 10.1.238.220
    IPs:
    IP: 10.1.238.220
    Controlled By: ReplicaSet/deployment-ricplt-e2term-alpha-57ff7bf4f6
    Containers:
    container-ricplt-e2term:
    Container ID: containerd://2bb8ecc931f0d972edea35e8a50af818144b937f13f74554917c0bb91ca7499a
    Image: nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2:5.4.8
    Image ID: nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2@sha256:ed8e1ce6214d039b24c3b7426756b6fc947e2c4e99d384c5de1778ae30188251
    Ports: 4561/TCP, 38000/TCP, 36422/SCTP, 8088/TCP
    Host Ports: 0/TCP, 0/TCP, 0/SCTP, 0/TCP
    State: Running
    Started: Thu, 08 Apr 2021 14:05:07 -0400
    Last State: Terminated
    Reason: Error
    Exit Code: 137
    Started: Thu, 08 Apr 2021 14:03:57 -0400
    Finished: Thu, 08 Apr 2021 14:05:07 -0400
    Ready: False
    Restart Count: 4
    Liveness: exec [/bin/sh -c /opt/e2/rmr_probe -h 0.0.0.0:38000] delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness: exec [/bin/sh -c /opt/e2/rmr_probe -h 0.0.0.0:38000] delay=15s timeout=1s period=10s #success=1 #failure=3
    Environment Variables from:
    configmap-ricplt-e2term-env-alpha ConfigMap Optional: false
    Environment: <none>
    Mounts:
    /data/outgoing/ from vol-shared (rw)
    /opt/e2/router.txt from local-router-file (rw,path="router.txt")
    /tmp/rmr_verbose from local-router-file (rw,path="rmr_verbose")
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-24dss (ro)
    Conditions:
    Type Status
    Initialized True
    Ready False
    ContainersReady False
    PodScheduled True
    Volumes:
    local-router-file:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: configmap-ricplt-e2term-router-configmap
    Optional: false
    vol-shared:
    Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: pvc-ricplt-e2term-alpha
    ReadOnly: false
    default-token-24dss:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-24dss
    Optional: false
    QoS Class: BestEffort
    Node-Selectors: <none>
    Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
    node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Normal Scheduled 5m7s default-scheduler Successfully assigned ricplt/deployment-ricplt-e2term-alpha-57ff7bf4f6-jttpk to vm
    Normal Pulled 3m56s (x2 over 5m6s) kubelet Container image "nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2:5.4.8" already present on machine
    Normal Created 3m56s (x2 over 5m6s) kubelet Created container container-ricplt-e2term
    Normal Started 3m56s (x2 over 5m6s) kubelet Started container container-ricplt-e2term
    Normal Killing 3m16s (x2 over 4m26s) kubelet Container container-ricplt-e2term failed liveness probe, will be restarted
    Warning Unhealthy 2m55s (x11 over 4m45s) kubelet Readiness probe failed:
    Warning Unhealthy 6s (x13 over 4m46s) kubelet Liveness probe failed:



    $ sudo kubectl --namespace=ricplt logs pod/deployment-ricplt-e2term-alpha-57ff7bf4f6-jttpk
    environments service name is SERVICE_RICPLT_E2TERM_RMR_ALPHA_SERVICE_HOST
    service ip is 10.152.183.120
    nano=38000
    loglevel=error
    volume=log
    #The key name of the environment holds the local ip address
    #ip address of the E2T in the RMR
    local-ip=10.152.183.120
    #prometheus mode can be pull or push
    prometheusMode=pull
    #timeout can be from 5 seconds to 300 seconds default is 10
    prometheusPushTimeOut=10
    prometheusPushAddr=127.0.0.1:7676
    prometheusPort=8088
    #trace is start, stop
    trace=stop
    external-fqdn=10.152.183.120
    #put pointer to the key that point to pod name
    pod_name=E2TERM_POD_NAME
    sctp-port=36422
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = nano=38000 "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = nano value = 38000"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = loglevel=error "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = loglevel value = error"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = volume=log "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = volume value = log"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = local-ip=10.152.183.120 "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = local-ip value = 10.152.183.120"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusMode=pull "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusMode value = pull"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPushTimeOut=10 "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPushTimeOut value = 10"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPushAddr=127.0.0.1:7676 "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPushAddr value = 127.0.0.1:7676"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPort=8088 "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPort value = 8088"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = trace=stop "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = trace value = stop"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = external-fqdn=10.152.183.120 "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = external-fqdn value = 10.152.183.120"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = pod_name=E2TERM_POD_NAME "}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = pod_name value = E2TERM_POD_NAME"}
    {"ts":1617905177450,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = sctp-port=36422 "}
    {"ts":1617905177451,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = sctp-port value = 36422"}
    1617905177 23/RMR [INFO] ric message routing library on SI95/g mv=3 flg=00 (84423e6 4.4.6 built: Dec 4 2020)
    1617905178 23/RMR [INFO] sends: ts=1617905178 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-admission-ctrl-xapp-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905178 23/RMR [INFO] sends: ts=1617905178 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-rsm-rmr.ricplt:4801 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905178 23/RMR [INFO] sends: ts=1617905178 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2mgr-rmr.ricplt:3801 open=1 succ=1 fail=0 (hard=0 soft=0)
    1617905178 23/RMR [INFO] sends: ts=1617905178 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricxapp-ueec-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905178 23/RMR [INFO] sends: ts=1617905178 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-a1mediator-rmr.ricplt:4562 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-admission-ctrl-xapp-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-rsm-rmr.ricplt:4801 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2term-rmr-alpha.ricplt:43246 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2term-rmr-alpha.ricplt:43606 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2mgr-rmr.ricplt:3801 open=1 succ=1 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricxapp-ueec-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-a1mediator-rmr.ricplt:4562 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905209 23/RMR [INFO] sends: ts=1617905209 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2term-rmr-alpha.ricplt:43705 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-admission-ctrl-xapp-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-rsm-rmr.ricplt:4801 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2term-rmr-alpha.ricplt:43246 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2term-rmr-alpha.ricplt:43606 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2mgr-rmr.ricplt:3801 open=1 succ=1 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricxapp-ueec-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-a1mediator-rmr.ricplt:4562 open=0 succ=0 fail=0 (hard=0 soft=0)
    1617905240 23/RMR [INFO] sends: ts=1617905240 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2term-rmr-alpha.ricplt:43705 open=0 succ=0 fail=0 (hard=0 soft=0)

      1. I had the same issue and was able to mitigate it by changing the readiness/liveness probe timeouts and also the RMR probe timeout.

        The readiness/liveness probes run the rmr_probe command, as you can see. First of all, the ricxapp-ueec component is part of Nokia's xApps (somebody correct me if I am wrong) and is not available in the Dawn release. One thing I did was remove those routes from the mapping so the probe does not check for them (see the sketch below).

        kubectl get cm configmap-ricplt-e2term-router-configmap -n ricplt -o yaml


        In that config map you also have another parameter, rmr_verbose, with which you can activate more debugging on the RMR interface.
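
        Not an official procedure, just one way to apply the route removal described above (a sketch; the route entries to drop are the ones referencing service-ricxapp-ueec-rmr.ricxapp, as seen in the e2term RMR log, and the exact route syntax depends on your release):

        kubectl get cm configmap-ricplt-e2term-router-configmap -n ricplt -o yaml > e2term-router-cm.bak.yaml   # back it up first
        kubectl edit cm configmap-ricplt-e2term-router-configmap -n ricplt                                      # delete the ueec route lines
        kubectl rollout restart deployment deployment-ricplt-e2term-alpha -n ricplt                             # restart e2term so it reloads the routes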

        To troubleshoot e2term, you need to edit the deployment:

        kubectl edit deployment deployment-ricplt-e2term-alpha -n ricplt


        Remove the readiness and liveness probes; then, once the pod is up and running, exec into it:

        kubectl exec -ti deployment-ricplt-e2term-alpha-9c87d678-5s9bt -n ricplt /bin/bash
        kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
        root@e2term-alpha:/opt/e2# ls
        config dockerRouter.txt e2 log rmr_probe router.txt router.txt.stash router.txt.stash.inc startup.sh
        root@e2term-alpha:/opt/e2# ./rmr_probe -h 0.0.0.0:38000 -v
        [INFO] listen port: 43426; sending 1 messages
        1631885462176 5507/RMR [INFO] ric message routing library on SI95 p=43426 mv=3 flg=01 id=a (11838bc 4.7.4 built: Apr 27 2021)
        [INFO] RMR initialised
        [INFO] starting session with 0.0.0.0:38000, starting to send
        [INFO] connected to 0.0.0.0:38000, sending 1 pings
        [INFO] sending message: health check request prev=0 <eom>
        [FAIL] too few messages recevied during timeout window: wanted 1 got 0


        The command above failed, but when I ran it again with a longer timeout (-t 30) it succeeded:


        root@e2term-alpha:/opt/e2# ./rmr_probe -h 0.0.0.0:38000 -v
        [INFO] listen port: 43716; sending 1 messages
        1631885502853 5550/RMR [INFO] ric message routing library on SI95 p=43716 mv=3 flg=01 id=a (11838bc 4.7.4 built: Apr 27 2021)
        [INFO] RMR initialised
        [INFO] starting session with 0.0.0.0:38000, starting to send
        [INFO] connected to 0.0.0.0:38000, sending 1 pings
        [INFO] sending message: health check request prev=0 <eom>
        [FAIL] too few messages recevied during timeout window: wanted 1 got 0
        root@e2term-alpha:/opt/e2# ./rmr_probe -h 0.0.0.0:38000 -v -t 30
        [INFO] listen port: 43937; sending 1 messages
        1631885508786 5560/RMR [INFO] ric message routing library on SI95 p=43937 mv=3 flg=01 id=a (11838bc 4.7.4 built: Apr 27 2021)
        [INFO] RMR initialised
        [INFO] starting session with 0.0.0.0:38000, starting to send
        [INFO] connected to 0.0.0.0:38000, sending 1 pings
        [INFO] sending message: health check request prev=0 <eom>
        [INFO] got: (OK) state=0
        [INFO] good response received; elapsed time = 299968 mu-sec


        As you can see it's successful. 

        In the end I modified both liveness and readiness probe as such:


        livenessProbe:
          exec:
            command:
              - /bin/sh
              - -c
              - /opt/e2/rmr_probe -h 0.0.0.0:38000 -t 30
          failureThreshold: 5
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30


        readinessProbe:
          exec:
            command:
              - /bin/sh
              - -c
              - /opt/e2/rmr_probe -h 0.0.0.0:38000 -t 30
          failureThreshold: 3
          initialDelaySeconds: 20
          periodSeconds: 60
          successThreshold: 1
          timeoutSeconds: 30


        Adjust it based on your needs; for some reason I saw failures even with "-t 30". Hope it helps.
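
        If you prefer not to edit the deployment interactively, a rough equivalent is a JSON patch (a sketch only, assuming the single-container pod layout shown in the describe output above):

        kubectl -n ricplt patch deployment deployment-ricplt-e2term-alpha --type='json' -p='[
          {"op": "replace", "path": "/spec/template/spec/containers/0/livenessProbe/timeoutSeconds", "value": 30},
          {"op": "replace", "path": "/spec/template/spec/containers/0/readinessProbe/timeoutSeconds", "value": 30}
        ]'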

        1. Hi Federico,

          thank you very much.


          It worked for me in the Dawn release, but as you said, sometimes the readiness and liveness probes fail anyway.

          Maybe I will try to increase the failureThresholds for both probes, but for the moment I commented out the ricxapp-ueec routes in the config map as you suggested and edited the deployment as follows:


          livenessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - /opt/e2/rmr_probe -h 0.0.0.0:38000 -t 30
            failureThreshold: 3
            initialDelaySeconds: 120
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 60
          ................................

          readinessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - /opt/e2/rmr_probe -h 0.0.0.0:38000 -t 30
            failureThreshold: 3
            initialDelaySeconds: 120
            periodSeconds: 60
            successThreshold: 1
            timeoutSeconds: 60

          _______________________________________________________


          I still have problems deploying xApps with the Dawn release; at least, the curl command tells me this:
          curl --location --request POST "http://$(hostname):32080/appmgr/ric/v1/xapps" --header 'Content-Type: application/json' --data-raw '{"xappName": "hwxapp"}' -v


          Note: Unnecessary use of -X or --request, POST is already inferred.
          * Trying 127.0.1.1...
          * TCP_NODELAY set
          * Connected to HOSTNAME (127.0.1.1) port 32080 (#0)
          > POST /appmgr/ric/v1/xapps HTTP/1.1
          > Host: HOSTNAME:32080
          > User-Agent: curl/7.58.0
          > Accept: */*
          > Content-Type: application/json
          > Content-Length: 22
          >
          * upload completely sent off: 22 out of 22 bytes
          < HTTP/1.1 501 Not Implemented
          < Content-Type: application/json
          < Content-Length: 56
          < Connection: keep-alive
          < Date: Fri, 17 Sep 2021 16:22:50 GMT
          < X-Kong-Upstream-Latency: 2
          < X-Kong-Proxy-Latency: 3
          < Via: kong/1.4.3
          <
          "operation XappDeployXapp has not yet been implemented"
          * Connection #0 to host HOSTNAME left intact


          Did you manage to install xApps or this feature is still unavailable for Dawn release?


          Thank you again

          1. Glad it worked for you. It still doesn't explain why the probe fails or takes so long that it hits the timeouts; I've been trying to troubleshoot but haven't gotten to the bottom of it yet.

            The xApps deployment procedure changed in Dawn release. Follow instructions here: https://docs.o-ran-sc.org/projects/o-ran-sc-it-dep/en/latest/installation-guides.html#ric-applications  use the dms_cli and it will work. I was able to deploy xApps.
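
            For reference, the basic dms_cli flow from that guide looks roughly like this (a sketch only; the xApp name, version and file paths are placeholders, so check dms_cli --help for the exact arguments in your version):

            dms_cli onboard ./config-file.json ./schema.json   # onboard the xApp descriptor
            dms_cli get_charts_list                            # confirm the chart was generated
            dms_cli install hwxapp 1.0.0 ricxapp               # install <xapp_name> <version> <namespace>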

            1. Unfortunately I am also stuck in troubleshooting probe fails, i've not so much experience in k8s.

              And thank you again, i probably was missing a step, now i've managed to deploy xapps through dms_cli .

              Have a good time!


            2. hi Federico Rossi, thanks for linking the installation guides - they seem more thorough and up-to-date than these Bronze release instructions (although unfortunately they are quite hidden!).

              Given that your link seems quite detached from this official O-RAN Confluence site, I feel like I'm missing some useful pages that could help me get set up with O-RAN. If so, I would be very grateful if you could direct me to them.

              Much appreciated,

              Michael

              1. Hi Michael, the feeling is the same. I had to dig around. Here in the wiki you can find a lot of great knowledge. The documentation I linked is generated with Sphinx from the docs directory when you clone the "dep" repository:

                git clone https://gerrit.o-ran-sc.org/r/it/dep

                cd dep/docs


                I can't comment about the official documentation process for O-RAN SC. 

                What information/pages are you looking for?


                1. The installation guide you've already linked is the main sort of thing I'm looking for - essentially anything that will help me get set up with O-RAN so that I may research the implementation and hopefully contribute to the project.

                  1. Read about the architecture and individual components then go here: https://gerrit.o-ran-sc.org/r/admin/repos you can find all the components repos. The best place to search for "issues" or hidden info is here: https://jira.o-ran-sc.org/projects/ that's where the epics/issues are tracked. 

                    And since you are looking to contribute to the project, you can start from here: Project Developer Wiki

                    1. Thanks Federico, much appreciated!

              1. Hi Michael, I also met this problem. Have you found a solution? Thank you.

                Mr. Chen.

  20. Hello, I have installed RIC onto my linux machine, and I am about to put the SMO on a VM.  I've noticed people mention that they have two VM's  (one for RIC and one for SMO), and I was wondering if there is a particular reason for this?  Thanks.

    1. I have the exact same question...

    2. Hello again Mooga,

      One Question: Are you running the bronze release or the cherry release?

      Thank You.

      1. Hello, i'm running bronze release

        1. Thank you for your response!

          Since you have successfully installed RIC, in step 4 when you run this command:

          kubectl get pods -n ricplt

          Did you get 15 pods like me, or 16 pods as in the RIC guidelines?

          Because I am stuck at Step 5 ... The following command doesn't produce any result, and I don't know if these issues could be related:

          curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"

          Did you follow all the guidelines with no problems?

          Thank you Mooga.

          1. I have 16 pods running, but I'm not sure if these problems are related or not.  Try using the address of the Kong controller rather than $(hostname).  This can be found using:

            kubectl get service -A | grep 32080
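
            For example (a sketch; the Kong proxy service name is taken from the comments above, and <node-ip> is whatever address your node exposes):

            kubectl get service -n ricplt r4-infrastructure-kong-proxy
            # then point the onboarding request at that address instead of $(hostname)
            curl --location --request POST "http://<node-ip>:32080/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"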

  21. Anonymous

    Hello all, 

    I am setting up the Cherry Release for the first time and working of Cherry branch.  Had the HELM Repository error when trying step 4 and got past that by adding the stable charts repository. Retried the deploy-ric-platform command and am getting this error at the end: 

    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    Error: render error in "alarmmanager/templates/serviceaccount.yaml": template: alarmmanager/templates/serviceaccount.yaml:20:11: executing "alarmmanager/templates/serviceaccount.yaml" at <include "common.serv...>: error calling include: template: no template "common.serviceaccountname.alarmmanager" associated with template "gotpl"

    Could anyone help with this please?

    1. I have looked into their repository.

      The problem is that in the cherry branch this component goes by a different name, "alarmadapter", in the ric-common templates.

      So the chart can't find the template name "alarmmanager" and can't render correctly.

      In step 1, you should pull the master branch directly; they have already solved the problem there.

      But they didn't merge the fix into the cherry branch.

      Alternatively, another solution is to change "alarmadapter" to "alarmmanager" in ric-common/Common-Template/helm/ric-common/templates manually (a rough sketch of this is below).
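
      One way to do that rename in bulk (a sketch only; back up the templates first, and note the direction: the cherry charts expect "alarmmanager" template names, so the ric-common templates still named "alarmadapter" are the ones to change):

      cp -r ric-common/Common-Template/helm/ric-common/templates ric-common-templates.bak
      grep -rl "alarmadapter" ric-common/Common-Template/helm/ric-common/templates | xargs sed -i 's/alarmadapter/alarmmanager/g'

      Then re-run the deploy script so the updated ric-common chart is repackaged and picked up.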

    2. There was an update in the discuss mailing list that addressed this issue. After cloning the cherry branch and updating the submodules, one can run this command to get the alarmmanager pod up and running:

      git fetch "https://gerrit.o-ran-sc.org/r/it/dep" refs/changes/68/5968/1 && git checkout FETCH_HEAD

      This was a patch so hopefully it still works.

    3. Anonymous

      Same problem,

      Can anyone help?

  22. Hi everyone,

    I know this question has already been posted here.

    When I run the following command in Step 4:

    kubectl get pods -n ricplt


    Produces this result:


    root@rafaelmatos-VirtualBox:~/ric/dep# kubectl get pods -n ricplt

    NAME READY STATUS RESTARTS AGE

    deployment-ricplt-a1mediator-66fcf76c66-mlnb5 1/1 Running 2 3d2h

    deployment-ricplt-alarmadapter-64d559f769-5dgxh 1/1 Running 2 3d2h

    deployment-ricplt-appmgr-6fd6664755-zcdmr 1/1 Running 2 3d2h

    deployment-ricplt-e2mgr-8479fb5ff8-drvdz 1/1 Running 2 3d2h

    deployment-ricplt-e2term-alpha-bcb457df4-6tjlc 1/1 Running 3 3d2h

    deployment-ricplt-jaegeradapter-84558d855b-dj97n 1/1 Running 2 3d2h

    deployment-ricplt-o1mediator-d8b9fcdf-g7f2m 1/1 Running 2 3d2h

    deployment-ricplt-rtmgr-9d4847788-xwdcz 1/1 Running 4 3d2h

    deployment-ricplt-submgr-65dc9f4995-5vzwc 1/1 Running 2 3d2h

    deployment-ricplt-vespamgr-7458d9b5d-ppmrs 1/1 Running 2 3d2h

    deployment-ricplt-xapp-onboarder-546b86b5c4-52jjl 2/2 Running 4 3d2h

    r4-infrastructure-kong-6c7f6db759-8snpr 2/2 Running 5 3d2h

    r4-infrastructure-prometheus-alertmanager-75dff54776-pjr7j 2/2 Running 4 3d2h

    r4-infrastructure-prometheus-server-5fd7695-5k4hw 1/1 Running 2 3d2h

    statefulset-ricplt-dbaas-server-0 1/1 Running 2 3d2h

    All the pods are in Running status, but I have only 15 pods, not 16.

    Is this a problem? I am doing the configuration on the Bronze release.

    Rafael Matos



  23. Hi everyone,

    I followed this article step by step, but I can't get step 4 to work correctly. As you can see from the results of kubectl get pods -n ricplt, one of the pods keeps crashing:

    NAME                                                         READY   STATUS             RESTARTS   AGE
    deployment-ricplt-a1mediator-66fcf76c66-lbzg9                1/1     Running            0          10m
    deployment-ricplt-alarmadapter-64d559f769-s576w              1/1     Running            0          9m42s
    deployment-ricplt-appmgr-6fd6664755-d99vn                    1/1     Running            0          11m
    deployment-ricplt-e2mgr-8479fb5ff8-b6gb9                     1/1     Running            0          11m
    deployment-ricplt-e2term-alpha-bcb457df4-llzd7               1/1     Running            0          10m
    deployment-ricplt-jaegeradapter-84558d855b-rpx4m             1/1     Running            0          10m
    deployment-ricplt-o1mediator-d8b9fcdf-hqkcc                  1/1     Running            0          9m54s
    deployment-ricplt-rtmgr-9d4847788-d756d                      1/1     Running            6          11m
    deployment-ricplt-submgr-65dc9f4995-zwndb                    1/1     Running            0          10m
    deployment-ricplt-vespamgr-7458d9b5d-99bwf                   1/1     Running            0          10m
    deployment-ricplt-xapp-onboarder-546b86b5c4-6qj82            2/2     Running            0          11m
    r4-infrastructure-kong-6c7f6db759-vw7pr                      1/2     CrashLoopBackOff   6          12m
    r4-infrastructure-prometheus-alertmanager-75dff54776-d2hcs   2/2     Running            0          12m
    r4-infrastructure-prometheus-server-5fd7695-46g4f            1/1     Running            0          12m
    statefulset-ricplt-dbaas-server-0                            1/1     Running            0          11m


    Step 5 doesn't seem to work while I have this problem. I have been looking for a solution on the internet but couldn't find any.

    I'll appreciate any help.

    1. Hello xlaxc,

      Usually port 32080 is occupied by some kube service. You can check this by:

      $ kubectl get service -A | grep 32080

      # A workaround is to use port forwarding:

      $ sudo kubectl port-forward r4-infrastructure-kong-6c7f6db759-vw7pr 32088:32080 -n ricplt &

      # Remember to change "$(hostname):32080" to "localhost:32088" in subsequent commands. Also make sure all pods are running.


      1. Hello Litombe,

        Thanks for your help, I really appreciate it.

        The port forwarding solution doesn't solve the crashing problem for me (I am still having that CrashLoopBackOff status).

        I tried to run the command from step 5 anyway by executing:
        curl --location --request POST "http://localhost:32088/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"

        But I am getting this result:
        {"message":"no Route matched with those values"}

        1. Hello Xlaxc,

          I thought you were facing a problem with onboarding the apps; my earlier solution was with regard to that.

          I recently faced the 'CrashLoopBackOff' problem in a different k8s cluster, and this fixed it:

            rm -rf ~/.helm  
            helm init --stable-repo-url=https://charts.helm.sh/stable --client-only
            helm serve &
            helm repo add local http://127.0.0.1:8879

          # Try this before step 4; it might fix your problems.

          1. Hello Litombe,

            Thanks for your reply.

            Unfortunately, that didn't solve my problem. I am starting to think that there is some network configuration in my VM that I am missing.

        2. Anonymous

          Hello xlaxc,

          Did you resolve this? I'm also facing the same issue.

    2. Hi, I think you have installed SMO on the same machine.

      SMO and RIC cannot be installed on the same machine.

      You can see this in the Getting Started page's "Figure 1: Demo deployment architecture" picture.

      There are two VMs, one to install SMO and one for the RIC.

      1. Hi,
        Thanks for your reply,

        I didn't install SMO at all. For now, I only tried to install RIC, but I can't get it to work.


  24. mhz

    Hello everyone,

    I am stuck at step 3. I ran the command ./k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh, and it stays pending forever, as shown below.

    I would appreciate some help. Thank you very much

    --------------------

    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    + NUMPODS=0
    + echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
    > waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
    + '[' 0 -lt 8 ']'
    + sleep 5
    ++ eval 'kubectl get pods -n kube-system  | grep "Running" | wc -l'
    +++ grep Running
    +++ wc -l
    +++ kubectl get pods -n kube-system
    W0601 15:30:40.385897   24579 loader.go:221] Config not found: /root/.kube/config
    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    + NUMPODS=0
    + echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
    > waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
    + '[' 0 -lt 8 ']'
    + sleep 5

    1. I had the same problem. Just check if there is any fatal error. In my case I had allocated only 1 CPU, and that was the problem.

  25. Anonymous

    Even I am having the same problem.


    1. mhz

      I tried to install Docker first and then follow the instructions and it worked.

      1. Anonymous

        I get the following error in the same step 3:


        Failed to pull image "quay.io/coreos/flannel:v0.14.0": rpc response from daemon: Get https://quay.io/v2/: Forbidden


        Only 5 of the 8 pods run, so I get stuck there. Does anyone know how to solve it?


    2. I was also stuck on the same issue and found the error below:

      Setting up docker.io (20.10.2-0ubuntu1~18.04.2) ...
      Job for docker.service failed because the control process exited with error code.
      See "systemctl status docker.service" and "journalctl -xe" for details.
      invoke-rc.d: initscript docker, action "start" failed.
      ● docker.service - Docker Application Container Engine
      Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
      Active: activating (auto-restart) (Result: exit-code) since Sat 2021-06-12 13:42:16 UTC; 9ms ago
      Docs: https://docs.docker.com
      Process: 13474 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
      Main PID: 13474 (code=exited, status=1/FAILURE)

      Then I removed Docker as follows:

      • sudo apt remove docker.io
      • sudo apt purge docker.io
      • in case /var/lib/docker or /var/lib/docker.migrating still exists, remove it


      After these steps I tried again and it was successful.

  26. Hello everyone,

    I got an error:

    ricxapp-trafficxapp-58d4946955-xrcll 0/1 ErrImagePull


    This ricxapp-trafficxapp pod keeps failing to pull its image. Has anyone faced the same issue?


    Br,

    Junior

  27. Hi everyone,

    I've installed the cherry release platform and have everything running well except onboarding the xApps.  I'm currently stuck on onboarding xApps using dms_cli.  The error that I am getting for the hw app and even the dummy xapp in the test folder is: "xApp Chart Name not found".  The error that I receive with the config file in the guide folder is: "Input validation failed".

    Is anyone else having these difficulties? I've performed the dms_cli onboarding step on two different computers, but receive the same error on both. I've also used JSON schema validators to make sure the xApp's config file validates against the schema file, and it passes.

    The steps that I've performed are on this page: Installation Guides — it-dep master documentation (o-ran-sc.org)

    Any thoughts?

    Thank you!

  28. Anonymous

    Hello everyone,
    I'm having an issue when trying to run the a1-ric.sh script from the demo folder:

    output1
    {
        "error_message": "'xapp_name' is a required property",
        "error_source": "config-file.json",
        "status": "Input payload validation failed"
    }

    but qpdriver json template is fully valid:

    qpdriver.json config file
    {
          "xapp_name": "qpdriver",
          "version": "1.1.0",
          "containers": [
              {
                  "name": "qpdriver",
                  "image": {
                      "registry": "nexus3.o-ran-sc.org:10002",
                      "name": "o-ran-sc/ric-app-qp-driver",
                      "tag": "1.0.9"
                  }
              }
          ],
          "messaging": {
              "ports": [
                  {
                    "name": "rmr-data",
                    "container": "qpdriver",
                    "port": 4560,
                    "rxMessages": [
                      "TS_UE_LIST"
                    ],
                    "txMessages": [ "TS_QOE_PRED_REQ", "RIC_ALARM" ],
                    "policies": [20008],
                    "description": "rmr receive data port for qpdriver"
                  },
                  {
                    "name": "rmr-route",
                    "container": "qpdriver",
                    "port": 4561,
                    "description": "rmr route port for qpdriver"
                  }
              ]
          },
          "controls": {
              "example_int": 10000,
              "example_str": "value"
          },
          "rmr": {
              "protPort": "tcp:4560",
              "maxSize": 2072,
              "numWorkers": 1,
              "rxMessages": [
                "TS_UE_LIST"
              ],
              "txMessages": [
                "TS_QOE_PRED_REQ",
                "RIC_ALARM"
              ],
              "policies": [20008]
          }
    }

    What could be the reason for such output?

  29. Hello Everyone!


    I followed this article step by step, but I can't get step 4 to work correctly. As you can see from the results of kubectl get pods -n ricplt, one of the pods stays in ImagePullBackOff and will soon go to ErrImagePull.


    deployment-ricplt-a1mediator-74b45794bb-jvnbq 1/1 Running 0 3m53s

    deployment-ricplt-alarmadapter-64d559f769-szmc5 1/1 Running 0 2m41s

    deployment-ricplt-appmgr-6fd6664755-6lbsc 1/1 Running 0 4m51s

    deployment-ricplt-e2mgr-6df485fcc-zhxmg 1/1 Running 0 4m22s

    deployment-ricplt-e2term-alpha-76848bd77c-z7rdb 1/1 Running 0 4m8s

    deployment-ricplt-jaegeradapter-84558d855b-ft528 1/1 Running 0 3m9s

    deployment-ricplt-o1mediator-d8b9fcdf-zjgwf 1/1 Running 0 2m55s

    deployment-ricplt-rtmgr-57999f4bc4-qzkhk 1/1 Running 0 4m37s

    deployment-ricplt-submgr-b85bd46c6-4jql4 1/1 Running 0 3m38s

    deployment-ricplt-vespamgr-77cf6c6d57-vbtz5 1/1 Running 0 3m24s

    deployment-ricplt-xapp-onboarder-79866d9d9c-fnbw2 2/2 Running 0 5m6s

    r4-infrastructure-kong-6c7f6db759-rcfft 1/2 ImagePullBackOff 0 5m35s

    r4-infrastructure-prometheus-alertmanager-75dff54776-4rhqm 2/2 Running 0 5m35s

    r4-infrastructure-prometheus-server-5fd7695-bxkzb 1/1 Running 0 5m35s

    statefulset-ricplt-dbaas-server-0 1/1 Running 0 5m20s

    1. Try this

      "kubectl edit deploy r4-infrastructure-kong -n ricplt"

      and in line 68, change the original line into "image: kong/kubernetes-ingress-controller:0.7.0"

      Tbh, I don't know whether this is the appropriate way or not, but it does solve the "ImagePullBackOff" problem.
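
      A non-interactive alternative to the edit (a sketch only; the container name "ingress-controller" is an assumption based on the Kong chart and may differ in your release):

      kubectl -n ricplt set image deployment/r4-infrastructure-kong ingress-controller=kong/kubernetes-ingress-controller:0.7.0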

      1. Anonymous

        Hi,

        Thanks for your reply, it totally works.

      2. Before deploying, you can also make the change below
        in the file "dep/ric-dep/helm/infrastructure/subcharts/kong/values.yaml":

        diff --git a/helm/infrastructure/subcharts/kong/values.yaml b/helm/infrastructure/subcharts/kong/values.yaml
        index 3d64d8c..37cff3f 100644
        --- a/helm/infrastructure/subcharts/kong/values.yaml
        +++ b/helm/infrastructure/subcharts/kong/values.yaml
        @@ -181,7 +181,7 @@ dblessConfig:
        ingressController:
        enabled: true
        image:
        - repository: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller
        + repository: kong/kubernetes-ingress-controller
        tag: 0.7.0

        # Specify Kong Ingress Controller configuration via environment variables

  30. Anonymous

    Hi experts,

    I tried to deploy Near-RT RIC with D release.

    After onboarding the hw xApp, I ran this command:

    curl --location --request POST "http://127.0.0.1:32088/appmgr/ric/v1/xapps"  --header 'Content-Type: application/json' --data-raw '{"xappName": "hwxapp"}'

    (The port 32080  is occupied by the r4-infrastructure-kong-proxy, so I use port-forwarding.)

    And the output is

    "operation XappDeployXapp has not yet been implemented."

    It cannot create an instance now.

    So does this indicate that the new appmgr may be under construction?




    1. Where did you pull the D release from?
      The master branch?

      1. Anonymous

        I just deployed the near-RT RIC platform with dep/ric-dep/RECIPE/example_recipe_oran_dawn_release.yaml.

        You can use

        git submodule update --init --recursive --remote   # to get the updated ric-dep submodule


        1. I also met this problem. I also deployed the near-RT RIC platform with dep/ric-dep/RECIPE/example_recipe_oran_dawn_release.yaml,

          but we can deploy the default example_recipe.yaml as in the installation tutorial, and the version is actually the D release.


          I still hope that somebody can solve the problem.

          1. According to a comment on the third tutorial page, xApps can't be deployed with "curl" in the Dawn release.

            We need to use "dms_cli" to deploy xApps (tutorial: https://docs.o-ran-sc.org/projects/o-ran-sc-it-dep/en/latest/installation-guides.html#ric-applications), and the Helm version needs to be above 3.

            I hope someone can explain why "curl" can't be used to deploy xApps in the Dawn release.

            And how do you check that it is the D release when using the default recipe?

            1. From my experience, it's caused by the Helm version changing from 2 to 3 for xApps.
              I'm now using the E release, and previously the D release; I'm using dms_cli and don't have any problem deploying xApps with this tool.

              1. should both helm 2 and 3 work in parallel?
                or just migrate helm 2 to 3?

                1. Helm 2 for the RIC/InfluxDB installation, then migrate to Helm 3 for xApps.

                  1. Thanks for answering. It works well after changing to Helm 3.
                    But does the migration affect the original ricplt pods or not? I remember there are some differences between Helm 2 and 3.

                    1. no, there is no problem with helm3
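
                      One way to keep the two side by side while testing this (a sketch only, not an official step; the version number is just an example, and depending on how dms_cli invokes helm you may instead want Helm 3 to become the default "helm"):

                      wget https://get.helm.sh/helm-v3.5.4-linux-amd64.tar.gz
                      tar -zxvf helm-v3.5.4-linux-amd64.tar.gz
                      sudo mv linux-amd64/helm /usr/local/bin/helm3   # Helm 2 stays available as "helm"
                      helm3 version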

  31. Anonymous

    Hello Team!

    When I try to verify the results of step 4 with the command kubectl get pods -n ricplt, one of the RIC pods has the status ImagePullBackOff. I am installing the Cherry version. Does someone know how to fix it? Thank you all...


    NAME                                                         READY   STATUS             RESTARTS   AGE
    deployment-ricplt-a1mediator-55fdf8b969-v545f                1/1     Running            0          14m
    deployment-ricplt-appmgr-65d4c44c85-4kjk9                    1/1     Running            0          16m
    deployment-ricplt-e2mgr-95b7f7b4-7d4h4                       1/1     Running            0          15m
    deployment-ricplt-e2term-alpha-7dc47d54-c8gg4                0/1     CrashLoopBackOff   7          15m
    deployment-ricplt-jaegeradapter-7f574b5d95-v5gm5             1/1     Running            0          14m
    deployment-ricplt-o1mediator-8b6684b7c-vhlr8                 1/1     Running            0          13m
    deployment-ricplt-rtmgr-6bbdf7685b-sgvwc                     1/1     Running            2          15m
    deployment-ricplt-submgr-7754d5f8bc-kf6kk                    1/1     Running            0          14m
    deployment-ricplt-vespamgr-589bbb7f7b-zt4zm                  1/1     Running            0          14m
    deployment-ricplt-xapp-onboarder-7f6bf9bfb-4lrjz             2/2     Running            0          16m
    r4-infrastructure-kong-86bfd9f7c5-zkblz                      1/2     ImagePullBackOff   0          16m
    r4-infrastructure-kong-bdff668dd-fjr7k                       0/2     Running            1          8s
    r4-infrastructure-prometheus-alertmanager-54c67c94fd-nxn42   2/2     Running            0          16m
    r4-infrastructure-prometheus-server-6f5c784c4c-x54bb         1/1     Running            0          16m
    statefulset-ricplt-dbaas-server-0                            1/1     Running            0          16m



    1. Anonymous

      I just want to know how to solve the problem of the pod whose STATUS is CrashLoopBackOff.

    2. Anonymous

      And what about the case where the pod's status is CrashLoopBackOff?

      Does anyone know how to solve it?

    3. For Kong, you will find the answer above in the comments.

      Additionally, you can use this command:
      KUBE_EDITOR="nano" kubectl edit deploy r4-infrastructure-kong -n ricplt

      "azsx9015223@gmail.com

      Try this

      "kubectl edit deploy r4-infrastructure-kong -n ricplt"

      and in line 68, change the original line into "image: kong/kubernetes-ingress-controller:0.7.0"

      Tbh, I don't know this is the appropriate way or not, but it can solve "ImagePullBackOff" problem."

    4. Hi, I'm having the same issue with e2term-alpha. Did you manage to solve it?

      1. Try to change the image for e2term to the Dawn release version; maybe it will help you.
        D Release Docker Image List

        You can also update the other modules with these images.

        1. I already tried to install it with the Dawn recipe examples, which contain these images, but I get the same error.

  32. Anonymous

    Hello Team,

    I am trying to deploy the RIC in its Cherry release version.

    But I am facing some issues:

    Error: template: influxdb/templates/statefulset.yaml:19:11: executing "influxdb/templates/statefulset.yaml" at <include "common.fullname.influxdb" .>: error calling include: template: no template "common.fullname.influxdb" associated with template "gotpl"


    Can anyone help me out? Thanks for the solution


    1. Anonymous

      Hi, I am also facing the same error, when deploying the RIC in Cherry Release. 

      Any solution will help

      1. I was also facing the same problem. So instead of the cherry or bronze branch

        git clone http://gerrit.o-ran-sc.org/r/it/dep -b bronze

        we cloned master:

        git clone http://gerrit.o-ran-sc.org/r/it/dep -b master


        With the master branch, use the recipe file of the cherry release. This error will not appear.

        1. Any update on whether this is fixed in cherry? I am getting the same issue on the master branch now. Any tip on the root cause will help me find a resolution. The problem I am facing is that the persistent volume creation for "ricplt-influxdb-data-ricplt-influxdb-meta-0" never completes and is stuck with status "Pending". I tried resizing the volume, setting storageClass: "", and other similar changes, but never got it up.

          Thanks,

          1. Hello, I also tried to deploy the near-RT RIC with the Cherry version and faced the same issue.

            Finally, I found the instructions in this document, which provide a solution to the issue:

            https://github.com/o-ran-sc/it-dep/blob/master/docs/ric/installation-k8s1node.rst

            Please find the corresponding steps in the section "Onetime setup for Influxdb".


            1. Anonymous

              I tried the "Onetime setup for Influxdb" and it did not work. Could anyone confirm that moving to master's did work? Thanks!

              1. I followed the one-time setup on master and the error still occurs, including for the alarmmanager. It also occurs with the Dawn release example yaml (example_recipe_oran_dawn_release.yaml).

                ------

                Error: render error in "alarmmanager/templates/serviceaccount.yaml": template: alarmmanager/templates/serviceaccount.yaml:20:11: executing "alarmmanager/templates/serviceaccount.yaml" at <include "common.serviceaccountname.alarmmanager" .>: error calling include: template: no template "common.serviceaccountname.alarmmanager" associated with template "gotpl"
                Hang tight while we grab the latest from your chart repositories...
                ...Successfully got an update from the "local" chart repository
                ...Successfully got an update from the "stable" chart repository
                Update Complete.
                Saving 1 charts
                Downloading ric-common from repo http://127.0.0.1:8879/charts
                Deleting outdated charts
                Error: render error in "influxdb/templates/statefulset.yaml": template: influxdb/templates/statefulset.yaml:19:11: executing "influxdb/templates/statefulset.yaml" at <include "common.fullname.influxdb" .>: error calling include: template: no template "common.fullname.influxdb" associated with template "gotpl"

                1. Can anyone please help with the setup of influxdb? It remains in the Pending state.

                  And the "Onetime setup for Influxdb" didn't help.

                  1. I got a solution! Three files:

                    name :

                    influxdb-pv.yaml

                    content :

                    ---
                    apiVersion: v1
                    kind: PersistentVolume
                    metadata:
                      name: ricplt-influxdb
                      labels:
                        name: ricplt-influxdb
                    spec:
                      capacity:
                        storage: 20Gi
                      accessModes:
                        - ReadWriteMany
                      hostPath:
                        type: DirectoryOrCreate
                        path: /mnt/ricplt-influxdb-data

                    name: 

                    influxdb-pvc-sc.yaml

                    content:

                    apiVersion: v1
                    kind: PersistentVolumeClaim
                    metadata:
                      name: ricplt-influxdb-data-ricplt-influxdb-meta-0
                      namespace: ricplt
                    spec:
                      accessModes:
                        - ReadWriteMany
                      resources:
                        requests:
                          storage: 20Gi
                      selector:
                        matchExpressions:
                        - key: name
                          operator: In
                          values: ["ricplt-influxdb"]
                    ---
                    apiVersion: v1
                    kind: Service
                    metadata:
                      name: ricplt-influxdb-svc
                    spec:
                      ports:
                      - port: 8086
                        targetPort: 8086
                        name: ricplt-influxdb-meta-0
                      selector:
                        app: ricplt-influxdb-meta-0

                    name:

                    install_db.sh

                    content:

                    #!/bin/bash
                    kubectl delete pvc ricplt-influxdb-data-ricplt-influxdb-meta-0 -n ricplt
                    kubectl apply -f influxdb-pv.yaml
                    kubectl apply -f influxdb-pvc-sc.yaml


                    end: 

                    Run install_db.sh. If the PVC cannot be deleted, force delete it in Kubernetes and then rerun this shell script (a sketch of a force delete follows).
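
                    If the PVC hangs in a Terminating state, one common way to force-remove it (a sketch only, using the claim name from the files above) is to clear its finalizers and delete it with no grace period:

                    kubectl -n ricplt patch pvc ricplt-influxdb-data-ricplt-influxdb-meta-0 -p '{"metadata":{"finalizers":null}}'
                    kubectl -n ricplt delete pvc ricplt-influxdb-data-ricplt-influxdb-meta-0 --grace-period=0 --force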

                    1. Hi, can you specify what the paths of these three files are?

                      1. Any path is OK, as long as the three files are in the same directory.

                        1. Anonymous

                          what's the path?


                    2. Thanks for 菜芽's solution.

                      Although after running install_db.sh my influxdb pod was still Pending, I just added some content as below (bold text) and it works.

                      Please confirm the Onetime setup for Influxdb was done during installation before applying these files.


                      name :

                      influxdb-pv.yaml

                      content :

                      apiVersion: v1
                      kind: PersistentVolume
                      metadata:
                        name: ricplt-influxdb
                        labels:
                          name: ricplt-influxdb
                      spec:
                        capacity:
                          storage: 20Gi
                        accessModes:
                          - ReadWriteMany

                        storageClassName: nfs
                        hostPath:
                          type: DirectoryOrCreate
                          path: /mnt/ricplt-influxdb-data


                      name: 

                      influxdb-pvc-sc.yaml

                      content:

                      apiVersion: v1
                      kind: PersistentVolumeClaim
                      metadata:
                        name: ricplt-influxdb-data-ricplt-influxdb-meta-0
                        namespace: ricplt
                      spec:
                        accessModes:
                          - ReadWriteMany
                        resources:
                          requests:
                            storage: 20Gi

                        volumeName: ricplt-influxdb
                        selector:
                          matchExpressions:
                          - key: name
                            operator: In
                            values: ["ricplt-influxdb"]
                      ---
                      apiVersion: v1
                      kind: Service
                      metadata:
                        name: ricplt-influxdb-svc
                      spec:
                        ports:
                        - port: 8086
                          targetPort: 8086
                          name: ricplt-influxdb-meta-0
                        selector:
                          app: ricplt-influxdb-meta-0

  33. Anonymous

    Hello,

    I am trying to run the following command:

    curl --location --request POST "http://localhost:32088/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"

    I'm always getting the following error:

    AttributeError: 'ConnectionError' object has no attribute 'message'

    Any idea about the solution?


    1. Same error occurred.

      Does someone know how to fix it?

      1. Which release do you use?
        And does every pod operate normally?

        1. Cherry release, and every pod operates normally.

          I'm using a proxy server, so I typed "curl --noproxy <address> ...".

    2. Apparently the repository at https://gerrit.o-ran-sc.org/r/gitweb?p=ric-app/hw.git;a=blob_plain;f=init/config-file.json;hb=HEAD (pointed to in the file ./onboard.hw.url) takes some time to answer, and that is enough to time out the onboarding manager.
      I manually downloaded the content of the file and hosted it on my own host (like https://dev-mind.blog/tmp/hwxapp.json); then you can assemble the request yourself as

      curl -v --request POST "http://10.0.2.15:32080/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data '{ "config-file.json_url": "https://www.dev-mind.blog/tmp/hwxapp.json" }'

      PS.: I'll leave the file on this address in case someone want to try.

      1. Sometimes it seems to time out on my server also... maybe a local copy would work best.
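
        One way to keep the copy local instead of relying on an external host (a sketch only; the port and file name are arbitrary, and <this-host-ip> must be reachable from the onboarder pod):

        wget -O hwxapp.json "https://gerrit.o-ran-sc.org/r/gitweb?p=ric-app/hw.git;a=blob_plain;f=init/config-file.json;hb=HEAD"
        python3 -m http.server 8090 &
        curl -v --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data '{ "config-file.json_url": "http://<this-host-ip>:8090/hwxapp.json" }'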

  34. Hi! Can you run any version of the RIC in an ubuntu 20.04 machine? Thanks!!

  35. Hi all,

    I'm seeing only 15 pods, and one of the pods is in an error state after Step 4: Deploy RIC using Recipe.

    Could you please let me know what needs to be done to resolve this?

    kubectl get pods -n ricplt
    NAME READY STATUS RESTARTS AGE
    deployment-ricplt-a1mediator-66fcf76c66-sgm8v 1/1 Running 0 20m
    deployment-ricplt-alarmadapter-64d559f769-vtr2r 1/1 Running 0 18m
    deployment-ricplt-appmgr-6fd6664755-tcqbj 1/1 Running 0 21m
    deployment-ricplt-e2mgr-8479fb5ff8-7d8vt 1/1 Running 0 20m
    deployment-ricplt-e2term-alpha-bcb457df4-j5j68 1/1 Running 0 20m
    deployment-ricplt-jaegeradapter-84558d855b-c47nk 1/1 Running 0 19m
    deployment-ricplt-o1mediator-d8b9fcdf-272v2 1/1 Running 0 18m
    deployment-ricplt-rtmgr-9d4847788-pt8hh 1/1 Running 2 21m
    deployment-ricplt-submgr-65dc9f4995-jdzfb 1/1 Running 0 19m
    deployment-ricplt-vespamgr-7458d9b5d-ww82v 1/1 Running 0 19m
    deployment-ricplt-xapp-onboarder-546b86b5c4-j74sg 2/2 Running 0 21m
    r4-infrastructure-kong-6c7f6db759-78glg 1/2 ImagePullBackOff 0 22m
    r4-infrastructure-prometheus-alertmanager-75dff54776-w8s2s 2/2 Running 0 22m
    r4-infrastructure-prometheus-server-5fd7695-jcxzn 1/1 Running 0 22m
    statefulset-ricplt-dbaas-server-0 1/1 Running 0 21m

    1. for Kong, please find info above in comments.

    2. Go here: https://gerrit.o-ran-sc.org/r/c/ric-plt/ric-dep/+/6502/1/helm/infrastructure/subcharts/kong/values.yaml


      You need to change the repository in the values.yaml file that is in /ric-dep/helm/infrastructure/subcharts/kong/

  36. Hi all,
    Maybe someone has met this problem or knows a solution; could you help?

    ~/dep/testappc# kubectl get pods --all-namespaces
    ricplt-influxdb-meta-0 0/1 Pending

    kubectl -n ricplt describe pod ricplt-influxdb-meta-0
    Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims

    Onetime setup for Influxdb was done during installation.

    1. Yes.

      # kubectl -n ricplt describe pod ricplt-influxdb-meta-0
      Name: ricplt-influxdb-meta-0
      Namespace: ricplt
      Priority: 0
      Node: <none>
      Labels: app.kubernetes.io/instance=r4-influxdb
      app.kubernetes.io/name=influxdb
      controller-revision-hash=ricplt-influxdb-meta-6fcf7b7df8
      statefulset.kubernetes.io/pod-name=ricplt-influxdb-meta-0
      Annotations: <none>
      Status: Pending
      IP:
      Controlled By: StatefulSet/ricplt-influxdb-meta
      Containers:
      ricplt-influxdb:
      Image: influxdb:2.0.8-alpine
      Ports: 8086/TCP, 8088/TCP
      Host Ports: 0/TCP, 0/TCP
      Liveness: http-get http://:api/ping delay=30s timeout=5s period=10s #success=1 #failure=3
      Readiness: http-get http://:api/ping delay=5s timeout=1s period=10s #success=1 #failure=3
      Environment: <none>
      Mounts:
      /etc/influxdb from config (rw)
      /var/lib/influxdb from ricplt-influxdb-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from service-ricplt-influxdb-http-token-bmgcl (ro)
      Conditions:
      Type Status
      PodScheduled False
      Volumes:
      ricplt-influxdb-data:
      Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
      ClaimName: ricplt-influxdb-data-ricplt-influxdb-meta-0
      ReadOnly: false
      config:
      Type: ConfigMap (a volume populated by a ConfigMap)
      Name: ricplt-influxdb
      Optional: false
      service-ricplt-influxdb-http-token-bmgcl:
      Type: Secret (a volume populated by a Secret)
      SecretName: service-ricplt-influxdb-http-token-bmgcl
      Optional: false
      QoS Class: BestEffort
      Node-Selectors: <none>
      Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
      node.kubernetes.io/unreachable:NoExecute for 300s
      Events:
      Type Reason Age From Message
      ---- ------ ---- ---- -------
      Warning FailedScheduling 89s (x34 over 50m) default-scheduler pod has unbound immediate PersistentVolumeClaims

      #####################################################



      1. Anonymous

        Hi, I am also facing the same problem!

        $ kubectl get pods -n ricplt

        ricplt-influxdb-meta-0 0/1 Pending

        $ kubectl describe pod/ricplt-influxdb-meta-0 -n ricplt

        Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims


        Could anyone help with this please?

        Thanks.

        1. I saw in instruction videos about the RIC that this "Pending" status is normal for this pod.

    2. Hello, I also tried to deploy the near-RT RIC with the Cherry version and faced the same issue.

      Finally, I found the instructions in this document, which provide a solution to the issue:

      https://github.com/o-ran-sc/it-dep/blob/master/docs/ric/installation-k8s1node.rst

      Please find the corresponding steps in the section "Onetime setup for Influxdb".
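      In short, the pod stays Pending because its PersistentVolumeClaim (ricplt-influxdb-data-ricplt-influxdb-meta-0) has no PersistentVolume to bind to. The "Onetime setup for Influxdb" steps in the linked document are the authoritative fix; purely as an illustration (the size, path and storage class below are assumptions, and if the claim requests a storageClassName the volume must declare the same one), a hostPath volume like this is enough for the claim to bind on a single-node test cluster:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: ricplt-influxdb-pv
      spec:
        capacity:
          storage: 8Gi
        accessModes:
          - ReadWriteOnce
        hostPath:
          path: /mnt/influxdb

      Apply it with kubectl apply -f <file>, then check that the claim moves from Pending to Bound with kubectl get pvc -n ricplt.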

  37. I had some issues with the installation and had to redo it... now seeing an issue with Step 3: Installation of Kubernetes, Helm, Docker, etc.

    It seems kubernetes-helm/helm-v2.12.3-linux-amd64.tar.gz is no longer available.

    Can we use a different version of helm?


    + wget https://storage.googleapis.com/kubernetes-helm/helm-v2.12.3-linux-amd64.tar.gz
    --2021-08-10 04:41:15-- https://storage.googleapis.com/kubernetes-helm/helm-v2.12.3-linux-amd64.tar.gz
    Resolving storage.googleapis.com (storage.googleapis.com)... 108.177.111.128, 108.177.121.128, 142.250.103.128, ...
    Connecting to storage.googleapis.com (storage.googleapis.com)|108.177.111.128|:443... connected.
    HTTP request sent, awaiting response... 404 Not Found
    2021-08-10 04:41:16 ERROR 404: Not Found.

    1. It looks like helm v2.12.3 is no longer hosted at that URL.
      For anyone with this error, I suggest using v2.17.0, which should solve the problem.
      You can change the version number in "/dep/tools/k8s/bin/k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh"
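      A minimal way to make the edit (sketch; this assumes the script really does pin HELMVERSION to 2.12.3 and downloads from the old storage.googleapis.com URL, so double-check the file afterwards):

      cd dep/tools/k8s/bin
      sed -i 's/2\.12\.3/2.17.0/g' k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh
      sed -i 's#https://storage.googleapis.com/kubernetes-helm#https://get.helm.sh#' k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh

      The second sed applies the download URL change described in comment 50 further down, since the old Google bucket no longer serves these archives.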

  38. Hi All.

    I got some trouble.

    after running Step 4:

    ↓ kubectl get pod --all-namespaces

    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5d4dd4b4db-6dqls 1/1 Running 0 6d23h
    kube-system coredns-5d4dd4b4db-vz5qq 1/1 Running 0 6d23h
    kube-system etcd-20020420-0 1/1 Running 0 6d23h
    kube-system kube-apiserver-20020420-0 1/1 Running 0 6d23h
    kube-system kube-controller-manager-20020420-0 1/1 Running 0 6d23h
    kube-system kube-flannel-ds-k9kt2 1/1 Running 0 6d23h
    kube-system kube-proxy-6zng4 1/1 Running 0 6d23h
    kube-system kube-scheduler-20020420-0 1/1 Running 0 6d23h
    kube-system tiller-deploy-7c54c6988b-56brh 1/1 Running 0 6d23h
    ricinfra deployment-tiller-ricxapp-5d9b4c5dcf-7ksc9 0/1 ImagePullBackOff 0 6d23h
    ricinfra tiller-secret-generator-bfdrk 0/1 Completed 0 6d23h
    ricplt deployment-ricplt-a1mediator-55fdf8b969-bk8nr 1/1 Running 0 6d23h
    ricplt deployment-ricplt-appmgr-65d4c44c85-h64bf 1/1 Running 0 6d23h
    ricplt deployment-ricplt-e2mgr-95b7f7b4-5j9hm 1/1 Running 0 6d23h
    ricplt deployment-ricplt-e2term-alpha-7dc47d54-q9m74 1/1 Running 0 6d23h
    ricplt deployment-ricplt-jaegeradapter-7f574b5d95-7qqlw 1/1 Running 0 6d23h
    ricplt deployment-ricplt-o1mediator-8b6684b7c-gzcll 1/1 Running 0 6d23h
    ricplt deployment-ricplt-rtmgr-6bbdf7685b-wcd4f 0/1 CrashLoopBackOff 1259 6d23h
    ricplt deployment-ricplt-submgr-7754d5f8bc-8gsx6 1/1 Running 0 6d23h
    ricplt deployment-ricplt-vespamgr-589bbb7f7b-dnq98 1/1 Running 0 6d23h
    ricplt deployment-ricplt-xapp-onboarder-7f6bf9bfb-slwdx 2/2 Running 0 6d23h
    ricplt r4-infrastructure-kong-bdff668dd-xp8zb 2/2 Running 2 6d23h
    ricplt r4-infrastructure-prometheus-alertmanager-54c67c94fd-tkq67 2/2 Running 0 6d23h
    ricplt r4-infrastructure-prometheus-server-6f5c784c4c-29jpp 1/1 Running 0 6d23h
    ricplt statefulset-ricplt-dbaas-server-0 1/1 Running 0 6d23h

    and there is NO 「deployment-ricplt-alarmadapter」 or 「ricplt-influxdb-data-ricplt-influxdb-meta-0」.


    Does someone know how to fix it?

    Thank you all.


    1. If you are trying with the Bronze release, shift to the Dawn release and try it


      git clone http://gerrit.o-ran-sc.org/r/it/dep -b dawn

      1. Hi Kota.

        I tried with Cherry release.

      2. I believe this is bad advice - there is no 'dawn' branch in the repo.

        https://gerrit.o-ran-sc.org/r/admin/repos/it/dep,branches

  39. I'm also facing the same issue in ricinfra: deployment-tiller-ricxapp-5d9b4c5dcf-7ksc9 0/1 ImagePullBackOff 0 6d23h

    I was able to onboard an xApp, but the xApp installation is failing

    image.png

    However, I'm facing an issue while installing the xApp. Any suggestions on what needs to be done?


    image.png


    Thanks

    1. If it is on Dawn: for the tiller, GCR does not host the required version anymore. So I copied the image from an older VM and then it came up properly.

    2. Hi Deena,

      This is caused by the missing image; you need to modify the deployment spec.

      step 1.

      "kubectl edit deploy r4-infrastructure-kong -n ricplt"

      Step 2.

      edit line 68, change it to "image: kong/kubernetes-ingress-controller:0.7.0"

      This may solve your issue.
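      Equivalently, instead of hand-editing the deployment you can patch the image in one command (assuming the container is named ingress-controller, as shown in the pod description further down this page):

      kubectl -n ricplt set image deployment/r4-infrastructure-kong \
        ingress-controller=kong/kubernetes-ingress-controller:0.7.0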

    3. Hi Deena.

      Finally I fixed it !!

      Step1:
      kubectl describe pod deployment-tiller-ricxapp -n ricinfra
      #Check your image version.

      Image: gcr.io/kubernetes-helm/tiller:v2.12.3
      #Check this line.

      Step2:
      docker images | grep tiller
      ghcr.io/helm/tiller v2.17.0 3f39089e9083 10 months ago 88.1MB
      #If not same version, follow Step3.

      Step3:
      kubectl edit deploy deployment-tiller-ricxapp -n ricinfra
      #Type your podname,namespace

      Step4:
      image: gcr.io/kubernetes-helm/tiller:v2.12.3
      #Edit this line, correct version(my case v2.17.0)

      Step5:
      #After 5minutes,
      kubectl get pods --all-namespaces
      #STATUS has changed ImagePullBackOff to Running

      :)


      1. Thanks for your solution !
        So is tiller v2.12.3 out of service?

        1. v2.12.3 wasn't working before it was fixed.

      2. Thanks !!

        But it didn't work for me.

        I followed the steps you suggested... still:

        kubectl get pods -n ricinfra
        NAME READY STATUS RESTARTS AGE
        deployment-tiller-ricxapp-d4f98ff65-c6qzq 0/1 ImagePullBackOff 0 14m
        deployment-tiller-ricxapp-d54c6ddc5-w7cj2 0/1 ImagePullBackOff 0 8m46s
        tiller-secret-generator-rnng9 0/1 Completed 0 14m

        Any further suggestions..

        1. what's your helm version?
          Also check whether the tiller pod in the "kube-system" namespace is running or not.
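          For example:

          helm version --short
          kubectl -n kube-system get pods | grep tiller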

          1. hi,

            I am also trying to install the Cherry version and have the same problem, and the changes did not help.

            the helm version is 2.17.0

            This is the current position of the pods:

            kube-system coredns-5644d7b6d9-76789 1/1 Running 1 135m

            kube-system coredns-5644d7b6d9-rw42w 1/1 Running 1 135m
            kube-system etcd-ip-172-31-35-51 1/1 Running 1 135m
            kube-system kube-apiserver-ip-172-31-35-51 1/1 Running 1 134m
            kube-system kube-controller-manager-ip-172-31-35-51 1/1 Running 1 135m
            kube-system kube-flannel-ds-tvmqj 1/1 Running 1 135m
            kube-system kube-proxy-bpq4l 1/1 Running 1 135m
            kube-system kube-scheduler-ip-172-31-35-51 1/1 Running 1 134m
            kube-system tiller-deploy-7d7bc87bb-rqj98 1/1 Running 1 134m
            ricinfra deployment-tiller-ricxapp-d4f98ff65-htckm 0/1 ImagePullBackOff 0 110m
            ricinfra deployment-tiller-ricxapp-d54c6ddc5-qc9qc 0/1 ImagePullBackOff 0 4m3s
            ricinfra tiller-secret-generator-v2tsq 0/1 Completed 0 110m
            ricplt deployment-ricplt-a1mediator-77959694b7-9z69l 1/1 Running 0 108m
            ricplt deployment-ricplt-appmgr-6bbcfd6bcb-qfvl7 1/1 Running 0 109m
            ricplt deployment-ricplt-e2mgr-7dc9dbb475-pppcr 1/1 Running 0 108m
            ricplt deployment-ricplt-e2term-alpha-c69456686-2whkx 1/1 Running 0 108m
            ricplt deployment-ricplt-jaegeradapter-76ddbf9c9-jxdbp 1/1 Running 0 107m
            ricplt deployment-ricplt-o1mediator-6f97d649cb-qqhb8 1/1 Running 0 107m
            ricplt deployment-ricplt-rtmgr-8546d559c6-7p6xn 0/1 CrashLoopBackOff 16 109m
            ricplt deployment-ricplt-submgr-5c5fb9f65f-sgdx7 1/1 Running 0 108m
            ricplt deployment-ricplt-vespamgr-dd97696fc-lkqd9 1/1 Running 0 107m
            ricplt deployment-ricplt-xapp-onboarder-5958856fc8-gsbk5 2/2 Running 0 109m
            ricplt r4-infrastructure-kong-646b68bd88-bjjnv 2/2 Running 1 8m45s
            ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-4kqnx 2/2 Running 0 110m
            ricplt r4-infrastructure-prometheus-server-5fd7695-rb5lt 1/1 Running 0 110m
            ricplt statefulset-ricplt-dbaas-server-0 1/1 Running 0 109m

            thanks for helping



            1. Anonymous

              Hello,

              I'm at the same point as you, Daniel. How did you solve the problem?

              Thank you.

              1. Hi, please check this command "kubectl edit deploy deployment-tiller-ricxapp -n ricinfra".

                In line 53, change the image into "ghcr.io/helm/tiller:v2.17.0".

                1. Anonymous

                  Still it isn't working. Any other suggestions?

      3. I just found that it is not enough to simply change the version.
        Change the image (in line 53) to "ghcr.io/helm/tiller:v2.17.0".
        It works for me.

      4. Hi GeunHoe KIM,

        Thank you for your detailed answer. Finally, I can bring up the cluster.

        But I realized there is a minor change needed: in step 4, replace the old image with the one found in step 2. Specifically, edit it to:

        ```
        ghcr.io/helm/tiller:v2.17.0
        ```

  40. Anonymous

    Do you guys have a Slack Channel ?

  41. Anonymous

    I am trying to deploy the RIC. Since I use helm 3, the above solutions for similar issues are not applicable.

    Note: I have tried with Bronze, Cherry and Amber. The error is the same.

    $ ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml

    Error: no repo named "local" found

    Error: looks like "http://127.0.0.1:8879/charts" is not a valid chart repository or cannot be reached: Get "http://127.0.0.1:8879/charts/index.yaml": dial tcp 127.0.0.1:8879: connect: connection refused

    ****************************************************************************************************************

    ERROR

    ****************************************************************************************************************

    Can't locate the ric-common helm package in the local repo. Please make sure that it is properly installed.

    ****************************************************************************************************************

    Can somebody help me?

    1. What's your output of this command?

      $ helm repo list
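      For reference, that error usually means the "local" chart repository that the deploy script expects at 127.0.0.1:8879 is missing. With helm 2 it is served by "helm serve", which helm 3 removed, so under helm 3 you need some other local chart server (e.g. ChartMuseum) registered under the name "local". A rough helm 2 sketch:

      helm init --client-only
      helm serve &
      helm repo add local http://127.0.0.1:8879/charts
      helm repo list        # should now show the "local" repo

      The ric-common chart then still has to be packaged into that local repo, which is the step the deploy script is complaining about.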

  42. hi,

    I am also trying to install the Cherry version and have the same problem, and the changes did not help.

    the helm version is 2.17.0

    This is the current position of the pods:

    kube-system coredns-5644d7b6d9-76789 1/1 Running 1 135m

    kube-system coredns-5644d7b6d9-rw42w 1/1 Running 1 135m
    kube-system etcd-ip-172-31-35-51 1/1 Running 1 135m
    kube-system kube-apiserver-ip-172-31-35-51 1/1 Running 1 134m
    kube-system kube-controller-manager-ip-172-31-35-51 1/1 Running 1 135m
    kube-system kube-flannel-ds-tvmqj 1/1 Running 1 135m
    kube-system kube-proxy-bpq4l 1/1 Running 1 135m
    kube-system kube-scheduler-ip-172-31-35-51 1/1 Running 1 134m
    kube-system tiller-deploy-7d7bc87bb-rqj98 1/1 Running 1 134m
    ricinfra deployment-tiller-ricxapp-d4f98ff65-htckm 0/1 ImagePullBackOff 0 110m
    ricinfra deployment-tiller-ricxapp-d54c6ddc5-qc9qc 0/1 ImagePullBackOff 0 4m3s
    ricinfra tiller-secret-generator-v2tsq 0/1 Completed 0 110m
    ricplt deployment-ricplt-a1mediator-77959694b7-9z69l 1/1 Running 0 108m
    ricplt deployment-ricplt-appmgr-6bbcfd6bcb-qfvl7 1/1 Running 0 109m
    ricplt deployment-ricplt-e2mgr-7dc9dbb475-pppcr 1/1 Running 0 108m
    ricplt deployment-ricplt-e2term-alpha-c69456686-2whkx 1/1 Running 0 108m
    ricplt deployment-ricplt-jaegeradapter-76ddbf9c9-jxdbp 1/1 Running 0 107m
    ricplt deployment-ricplt-o1mediator-6f97d649cb-qqhb8 1/1 Running 0 107m
    ricplt deployment-ricplt-rtmgr-8546d559c6-7p6xn 0/1 CrashLoopBackOff 16 109m
    ricplt deployment-ricplt-submgr-5c5fb9f65f-sgdx7 1/1 Running 0 108m
    ricplt deployment-ricplt-vespamgr-dd97696fc-lkqd9 1/1 Running 0 107m
    ricplt deployment-ricplt-xapp-onboarder-5958856fc8-gsbk5 2/2 Running 0 109m
    ricplt r4-infrastructure-kong-646b68bd88-bjjnv 2/2 Running 1 8m45s
    ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-4kqnx 2/2 Running 0 110m
    ricplt r4-infrastructure-prometheus-server-5fd7695-rb5lt 1/1 Running 0 110m
    ricplt statefulset-ricplt-dbaas-server-0 1/1 Running 0 109m



    the describe of the pod gives those errors:

    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Normal Scheduled <unknown> default-scheduler Successfully assigned ricinfra/deployment-tiller-ricxapp-d54c6ddc5-qc9qc to ip-172-31-35-51
    Normal Pulling 42m (x4 over 44m) kubelet, ip-172-31-35-51 Pulling image "gcr.io/kubernetes-helm/tiller:v2.17.0"
    Warning Failed 42m (x4 over 44m) kubelet, ip-172-31-35-51 Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.17.0": rpc error: code = Unknown desc = Error response from daemon: manifest for gcr.io/kubernetes-helm/tiller:v2.17.0 not found: manifest unknown: Failed to fetch "v2.17.0" from request "/v2/kubernetes-helm/tiller/manifests/v2.17.0".
    Warning Failed 42m (x4 over 44m) kubelet, ip-172-31-35-51 Error: ErrImagePull
    Normal BackOff 14m (x131 over 44m) kubelet, ip-172-31-35-51 Back-off pulling image "gcr.io/kubernetes-helm/tiller:v2.17.0"
    Warning Failed 3m55s (x175 over 44m) kubelet, ip-172-31-35-51 Error: ImagePullBackOff


    Also, I need to know how to solve the CrashLoopBackOff in rtmgr.

    This is what describe gives:

    Name: deployment-ricplt-rtmgr-8546d559c6-7p6xn
    Namespace: ricplt
    Priority: 0
    Node: ip-172-31-35-51/172.31.35.51
    Start Time: Wed, 01 Sep 2021 09:47:37 +0000
    Labels: app=ricplt-rtmgr
    pod-template-hash=8546d559c6
    release=r4-rtmgr
    Annotations: <none>
    Status: Running
    IP: 10.244.0.17
    IPs:
    IP: 10.244.0.17
    Controlled By: ReplicaSet/deployment-ricplt-rtmgr-8546d559c6
    Containers:
    container-ricplt-rtmgr:
    Container ID: docker://9e81991d7f9b4ae6525190d84936abe7941426f93f98b813bcc53b18b58d36a7
    Image: nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-rtmgr:0.7.2
    Image ID: docker-pullable://nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-rtmgr@sha256:2c435042fcbced8073774bda0e581695b3a44170da249c91a33b8147bd7e485f
    Ports: 3800/TCP, 4561/TCP, 4560/TCP
    Host Ports: 0/TCP, 0/TCP, 0/TCP
    Command:
    /run_rtmgr.sh
    State: Running
    Started: Wed, 01 Sep 2021 12:26:36 +0000
    Last State: Terminated
    Reason: Completed
    Exit Code: 0
    Started: Wed, 01 Sep 2021 12:18:36 +0000
    Finished: Wed, 01 Sep 2021 12:21:28 +0000
    Ready: True
    Restart Count: 23
    Liveness: http-get http://:8080/ric/v1/health/alive delay=5s timeout=1s period=15s #success=1 #failure=3
    Readiness: http-get http://:8080/ric/v1/health/ready delay=5s timeout=1s period=15s #success=1 #failure=3
    Environment Variables from:
    configmap-ricplt-rtmgr-env ConfigMap Optional: false
    configmap-ricplt-dbaas-appconfig ConfigMap Optional: false
    Environment: <none>
    Mounts:
    /cfg from rtmgrcfg (ro)
    /var/run/secrets/kubernetes.io/serviceaccount from default-token-kr5jr (ro)
    Conditions:
    Type Status
    Initialized True
    Ready True
    ContainersReady True
    PodScheduled True
    Volumes:
    rtmgrcfg:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: configmap-ricplt-rtmgr-rtmgrcfg
    Optional: false
    default-token-kr5jr:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-kr5jr
    Optional: false
    QoS Class: BestEffort
    Node-Selectors: <none>
    Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
    node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Warning BackOff 7m26s (x423 over 156m) kubelet, ip-172-31-35-51 Back-off restarting failed container

    thanks for helping

    1. hi did you solve the problem?

  43. I did a git clone of master (git clone http://gerrit.o-ran-sc.org/r/it/dep -b master) and deployed using the Cherry recipe file. I was able to onboard the hw-python xApp.

    { "status": "Created"}root@instance-1:~/ric-app-hw-python/init# curl -X GET http://localhost:8080/api/charts | jq . % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8080: Connection refusedroot@instance-1:~/ric-app-hw-python/init# curl -X GET http://localhost:9090/api/charts | jq . % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed100 289 100 289 0 0 57800 0 --:--:-- --:--:-- --:--:-- 57800{ "hw-python": [ { "name": "hw-python", "version": "1.0.0", "description": "Standard xApp Helm Chart", "apiVersion": "v1", "appVersion": "1.0", "urls": [ "charts/hw-python-1.0.0.tgz" ], "created": "2021-09-02T16:06:17.722327489Z", "digest": "3d250253cb65f2e6f6aafaa3bfb64939b0ed177830a144d77408fc0e203518bd" } ]}


    Then I used the below command to deploy the xApp. It failed and the rtmgr crashed.

    curl -v --location --request POST "http://$(hostname):32080/appmgr/ric/v1/xapps" --header 'Content-Type: application/json' --data-raw '{"xappName": "hw-python", "helmVersion": "1.0.0"}'


    NAME READY STATUS RESTARTS AGE
    deployment-ricplt-a1mediator-9fc67867-fz526 1/1 Running 0 57m
    deployment-ricplt-alarmmanager-7685b476c8-7wl6r 1/1 Running 0 56m
    deployment-ricplt-appmgr-6bbcfd6bcb-8tf84 1/1 Running 0 59m
    deployment-ricplt-e2mgr-7dc9dbb475-wxfmx 1/1 Running 0 58m
    deployment-ricplt-e2term-alpha-c69456686-ckcr9 1/1 Running 0 58m
    deployment-ricplt-jaegeradapter-76ddbf9c9-dvkmk 1/1 Running 0 57m
    deployment-ricplt-o1mediator-6f97d649cb-zgwcz 1/1 Running 0 56m
    deployment-ricplt-rtmgr-8546d559c6-q264z 0/1 CrashLoopBackOff 10 58m
    deployment-ricplt-submgr-5c5fb9f65f-ndd9h 1/1 Running 0 57m
    deployment-ricplt-vespamgr-dd97696fc-ns2cc 1/1 Running 0 57m
    deployment-ricplt-xapp-onboarder-5958856fc8-9jn5j 2/2 Running 0 59m
    r4-infrastructure-kong-646b68bd88-gbz7c 2/2 Running 1 59m
    r4-infrastructure-prometheus-alertmanager-75dff54776-pfkls 2/2 Running 0 59m
    r4-infrastructure-prometheus-server-5fd7695-rgg9q 1/1 Running 0 59m
    ricplt-influxdb-meta-0 0/1 Pending 0 56m
    statefulset-ricplt-dbaas-server-0 1/1 Running 0 59m
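    To see why rtmgr keeps restarting after the xApp deploy request, the log of the previous (crashed) container is usually the most informative, e.g. using the pod name from the listing above:

    kubectl -n ricplt logs deployment-ricplt-rtmgr-8546d559c6-q264z --previous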

    1. Hello, have you solved the problem? I ran into it now too.

  44. Hi,

    I am new to ORAN. I followed the steps to install the near RT RIC, but the master Kubernetes node stayed in,

    root@near-rt-ric:/home/ubuntu# kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    near-rt-ric NotReady master 44m v1.16.0
    root@near-rt-ric:/home/ubuntu#


    NotReady state forever, so the installation could not proceed. Could somebody point out anything wrong? I provisioned 4 CPU, 16G memory and 160G disk for the VM as requested.


    Regards,

    Harry




     


  45. Hi Guys,

    I finally got the near rt ric fully deployed with only one container, r4-infrastructure-kong, in ImagePullBackOff state, due to,

    docker pull kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.7.0
    Error response from daemon: error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>403 Forbidden</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>403 Forbidden</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
     

    Could anybody tell me whether there is a fix or workaround for the issue? I am installing the Bronze release following the instructions from this document.

    kubectl describe,

    root@near-rt-ric:/home/ubuntu# kubectl describe pod r4-infrastructure-kong-6c7f6db759-q4c2b -n ricplt
    Name: r4-infrastructure-kong-6c7f6db759-q4c2b
    Namespace: ricplt
    Priority: 0
    Node: near-rt-ric/192.168.166.238
    Start Time: Mon, 13 Sep 2021 19:29:17 +0000
    Labels: app.kubernetes.io/component=app
    app.kubernetes.io/instance=r4-infrastructure
    app.kubernetes.io/managed-by=Tiller
    app.kubernetes.io/name=kong
    app.kubernetes.io/version=1.4
    helm.sh/chart=kong-0.36.6
    pod-template-hash=6c7f6db759
    Annotations: <none>
    Status: Pending
    IP: 10.244.0.11
    IPs:
    IP: 10.244.0.11
    Controlled By: ReplicaSet/r4-infrastructure-kong-6c7f6db759
    Containers:
    ingress-controller:
    Container ID:
    Image: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.7.0
    Image ID:
    Port: <none>
    Host Port: <none>
    Args:
    /kong-ingress-controller
    --publish-service=ricplt/r4-infrastructure-kong-proxy
    --ingress-class=kong
    --election-id=kong-ingress-controller-leader-kong
    --kong-url=https://localhost:8444
    --admin-tls-skip-verify
    State: Waiting
    Reason: ImagePullBackOff
    Ready: False
    Restart Count: 0
    Liveness: http-get http://:10254/healthz delay=5s timeout=5s period=10s #success=1 #failure=3
    Readiness: http-get http://:10254/healthz delay=5s timeout=5s period=10s #success=1 #failure=3
    Environment:
    POD_NAME: r4-infrastructure-kong-6c7f6db759-q4c2b (v1:metadata.name)
    POD_NAMESPACE: ricplt (v1:metadata.namespace)
    Mounts:
    /var/run/secrets/kubernetes.io/serviceaccount from r4-infrastructure-kong-token-7dckz (ro)
    proxy:
    Container ID: docker://ee52bf99859431b57452287eaeba7d5c5a4deaff751ab1c7fbe1273bb9ed9154
    Image: kong:1.4
    Image ID: docker-pullable://kong@sha256:b77490f37557628122ccfcdb8afae54bb081828ca464c980dadf69cafa31ff7c
    Ports: 8444/TCP, 32080/TCP, 32443/TCP, 9542/TCP
    Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
    State: Running
    Started: Mon, 13 Sep 2021 20:33:06 +0000
    Ready: True
    Restart Count: 0
    Liveness: http-get http://:metrics/status delay=5s timeout=5s period=10s #success=1 #failure=3
    Readiness: http-get http://:metrics/status delay=5s timeout=5s period=10s #success=1 #failure=3
    Environment:
    KONG_LUA_PACKAGE_PATH: /opt/?.lua;;
    KONG_ADMIN_LISTEN: 0.0.0.0:8444 ssl
    KONG_PROXY_LISTEN: 0.0.0.0:32080,0.0.0.0:32443 ssl
    KONG_NGINX_DAEMON: off
    KONG_NGINX_HTTP_INCLUDE: /kong/servers.conf
    KONG_PLUGINS: bundled
    KONG_ADMIN_ACCESS_LOG: /dev/stdout
    KONG_ADMIN_ERROR_LOG: /dev/stderr
    KONG_ADMIN_GUI_ACCESS_LOG: /dev/stdout
    KONG_ADMIN_GUI_ERROR_LOG: /dev/stderr
    KONG_DATABASE: off
    KONG_NGINX_WORKER_PROCESSES: 1
    KONG_PORTAL_API_ACCESS_LOG: /dev/stdout
    KONG_PORTAL_API_ERROR_LOG: /dev/stderr
    KONG_PREFIX: /kong_prefix/
    KONG_PROXY_ACCESS_LOG: /dev/stdout
    KONG_PROXY_ERROR_LOG: /dev/stderr
    Mounts:
    /kong from custom-nginx-template-volume (rw)
    /kong_prefix/ from r4-infrastructure-kong-prefix-dir (rw)
    /tmp from r4-infrastructure-kong-tmp (rw)
    /var/run/secrets/kubernetes.io/serviceaccount from r4-infrastructure-kong-token-7dckz (ro)
    Conditions:
    Type Status
    Initialized True
    Ready False
    ContainersReady False
    PodScheduled True
    Volumes:
    r4-infrastructure-kong-prefix-dir:
    Type: EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit: <unset>
    r4-infrastructure-kong-tmp:
    Type: EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit: <unset>
    custom-nginx-template-volume:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: r4-infrastructure-kong-default-custom-server-blocks
    Optional: false
    r4-infrastructure-kong-token-7dckz:
    Type: Secret (a volume populated by a Secret)
    SecretName: r4-infrastructure-kong-token-7dckz
    Optional: false
    QoS Class: BestEffort
    Node-Selectors: <none>
    Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
    node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Warning Failed 10m (x1431 over 5h42m) kubelet, near-rt-ric Error: ImagePullBackOff
    Normal BackOff 31s (x1475 over 5h42m) kubelet, near-rt-ric Back-off pulling image "kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.7.0"
    root@near-rt-ric:/home/ubuntu# docker pull kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.7.0
    Error response from daemon: error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>403 Forbidden</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>403 Forbidden</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
    root@near-rt-ric:/home/ubuntu#


    Regards,

    Harry

    1. The bintray.io repository is deprecated. This commit https://gerrit.o-ran-sc.org/r/c/ric-plt/ric-dep/+/6502 fixes the issue.

      In the meantime you can change helm/infrastructure/subcharts/kong/values.yaml and point the ingress controller image to:

      kong/kubernetes-ingress-controller
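      In that chart the relevant stanza in values.yaml looks roughly like this (a sketch based on the kong 0.36.x chart layout, so verify the exact field names in your copy):

      ingressController:
        image:
          repository: kong/kubernetes-ingress-controller
          tag: 0.7.0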

      1. Hi Federico,

        That fixed the problem. Thank you very much.

        May I ask whether I can follow the same procedure to deploy the Cherry release too?


        Regards,

        Harry


        1. Yes same procedure, the ingress-controller is the same version in Cherry.

  46. Anonymous

    Hello everyone, I want to deploy the Dawn release. When I run git clone http://gerrit.o-ran-sc.org/r/it/dep -b dawn, I get:

    fatal: Remote branch dawn not found in upstream origin. Please help me: how can I get the Dawn release?

    1. No need to specify the branch; you control which release is deployed using the RECIPES.

      dep/ric-dep/RECIPE_EXAMPLE

      $ ls
      example_recipe_oran_cherry_release.yaml example_recipe_oran_dawn_release.yaml example_recipe.yaml


      You use the branch when you want to download specific components and want to build the container images yourself.
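      So deploying Dawn is just a matter of pointing the deploy script at the Dawn recipe, e.g. (assuming the deploy script lives in the dep repo's bin directory, as in the earlier comments):

      cd dep/bin
      ./deploy-ric-platform -f ../ric-dep/RECIPE_EXAMPLE/example_recipe_oran_dawn_release.yaml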

        1. Federico Rossi So which branch should I clone: -b master, -b cherry, or no branch at all? I only want to deploy the Dawn release. I have seen example_recipe_oran_dawn_release.yaml. After installation, is there any command to check whether the installed release is Dawn or not?

        1. Do not specify any -branch name. I am not aware of any command that would tell you the release you are running.

          You just need to make sure the images version matches the "Dawn" release: Near-RT RIC (D release)
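          One quick way to list the image tags actually running, so they can be compared against the release notes:

          kubectl -n ricplt get pods -o jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.spec.containers[*].image}{'\n'}{end}"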

  47. Anonymous

    Hello everyone, is there any command to find out which release is currently deployed on my machine, i.e. whether it is Cherry or Dawn? I did a git clone of the master branch.

  48. Hello everyone. I installed the RIC a few months ago and I did not receive any errors (16 pods were there). I started everything up again now and I am having problems: some of the pods show the status "Evicted". I have searched for it here but it seems nobody has faced this issue before. Do you have any idea?


    Thank you.

    1. Please post the output of kubectl get pods and for the "evicted" pods also a kubectl describe pod PODNAME

      1. Hi!


        Trying to deploy the near-RT RIC, the e2term-alpha pod goes into CrashLoopBackOff; this is the log of the e2term-alpha pod:


        environments service name is SERVICE_RICPLT_E2TERM_RMR_ALPHA_SERVICE_HOST
        
        service ip is 10.96.90.31
        
        nano=38000
        
        loglevel=debug
        
        volume=log
        
        #the key name of the environment holds the local ip address
        
        #ip address of the E2T in the RMR
        
        local-ip=10.96.90.31
        
        #prometheus mode can be pull or push
        
        prometheusMode=pull
        
        #timeout can be from 5 seconds to 300 seconds default is 10
        
        prometheusPushTimeOut=10
        
        prometheusPushAddr=127.0.0.1:7676
        
        prometheusPort=8088
        
        #trace is start, stop
        
        trace=start
        
        external-fqdn=10.96.90.31
        
        #put pointer to the key that point to pod name
        
        pod_name=E2TERM_POD_NAME
        
        sctp-port=36422
        
        ./startup.sh: line 16:    23 Illegal instruction     (core dumped) ./e2 -p config -f config.conf


        I'm using these versions: 

        Docker 19.03.6 (i've tried with version 20 also)

        Helm: 2.17

        kubernetes: 1.16


        Any idea why I get this error?



        1. Which release are you installing? 

          What's the e2term image version? 

          kubectl get pod deployment-ricplt-e2term-alpha-9c87d678-5s9bt -n ricplt -o jsonpath='{.spec.containers[0].image}'


          Change the pod name to match yours.

          1. Anonymous

            Hi, this is the version:


            nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2:5.4.9


            1. Interesting. You are running the Dawn image and e2 is segfaulting.. I suspect something related to the runtime environment. I am running the same image without issues. Can you please try with a newer kubernetes version? 

              In my case I am running Dawn on OpenShift and kubernetes version is 1.21



              1. Version 1.21 is not supported in the k8s-1node-cloud-init... but I tried v1.18 and I get the same error. Is there any way to debug this error? 

                Thanks for the answers.

                1. Finally managed to solve this! It was indeed a problem with my environment... 


                  To anyone using PROXMOX and getting CrashLoopBackOff with e2-term-alpha: you have to add the following line to the VM configuration file:


                  args: -cpu host



                  You can find the config files inside the proxmox server,  /etc/pve/qemu-server 

                  1. Nicely done! thank you for sharing.

                  2. Hello, I don't use PROXMOX. I deploy the RIC on a bare-metal Ubuntu 18.04 host. Do you have a solution for this problem?

                  3. Thanks Javi!

                    I had the same error when I tried to deploy the E release.

                  4. Can anyone solve this problem? I hit this error with k8s 1.16 and also with k8s 1.21. I am running this on a real physical machine, not a virtual machine.

        2. hi , have you solved this problem?

  49. When installing the "Dawn Release", if authentication and DefaultUser are enabled for InfluxDB, ORAN-SC NRTRIC's InfluxDB fails to come up with the following error: "Error: release r4-influxdb failed: the server could not find the requested resource"


    But, the same helm changes work on the vanilla influxdb helm charts provided by InfluxData at https://github.com/influxdata/helm-charts/tree/master/charts/influxdb.

    Kindly help in resolving this.

    Diff of InfluxDB helm changes
    diff --git a/helm/3rdparty/influxdb/values.yaml b/helm/3rdparty/influxdb/values.yaml
    index 2b494a4..206e3a7 100644
    --- a/helm/3rdparty/influxdb/values.yaml
    +++ b/helm/3rdparty/influxdb/values.yaml
    @@ -55,7 +55,7 @@ service:
     ## Persist data to a persistent volume
     ##
     persistence:
    -  enabled: true
    +  enabled: false
       ## A manually managed Persistent Volume and Claim
       ## Requires persistence.enabled: true
       ## If defined, PVC must be created manually before volume will be bound
    @@ -102,7 +102,7 @@ enterprise:
     ## Defaults indicated below
     ##
     setDefaultUser:
    -  enabled: false
    +  enabled: true
    
       ## Image of the container used for job
       ## Default: appropriate/curl:latest
    @@ -239,7 +239,9 @@ config:
       retention: {}
       shard_precreation: {}
       monitor: {}
    -  http: {}
    +  http:
    +    flux-enabled: true
    +    auth-enabled: true
       logging: {}
       subscriber: {}
       graphite: {}
    1. Please set debug and provide ricplt-influxdb-meta-0 pod logs. On values.yaml in the config section:


      values.yaml
      config:
        logging:
          level: "debug"
      
      1. Hi Federico Rossi,
        Thanks for your reply. I tried the above as well.

        But, the issue is that helm deployment itself fails the moment defaultuser authentication is enabled in the influxdb values.yaml. So, no influxdb pod logs are available. Helm deployment just gives the error "Error: release r4-influxdb failed: the server could not find the requested resource"

        root@ubuntu18-2:~/code/oransc/tmp/dep/bin# helm ls --failed
        NAME            REVISION        UPDATED                         STATUS  CHART           APP VERSION     NAMESPACE
        r4-influxdb     1               Wed Sep 22 10:14:19 2021        FAILED  influxdb-4.9.14 1.8.4           ricplt


        I suspect the script "templates/post-install-set-auth.yaml" fails to execute when both authentication and defaultuser are enabled.

        values.yaml
         setDefaultUser:
        -  enabled: false
        +  enabled: true

        On the other hand, when authentication is enabled with defaultuser disabled, the influxdb comes up properly. But, it is of no use without the defaultuser credentials.

        Are you aware of any workaround/fix for this? 

        Regards,
        Ganesh Jaju

        1. From the ric-dep directory uninstall the chart:

          # ls
          3rdparty alarmmanager dbaas e2term infrastructure o1mediator rsm submgr xapp-onboarder
          a1mediator appmgr e2mgr influxdb jaegeradapter redis-cluster rtmgr vespam

          helm uninstall r4-influxdb 


          Install again add debug:


          helm --debug install r4-influxdb 3rdparty/influxdb/


          Will hopefully show you a little bit more details about the error. Let me know.


          1. What fails is this curl in the job on post-install-set-auth.yaml as you said:

            curl -X POST http://{{ include "common.fullname.influxdb" . }}:{{ include "common.serviceport.influxdb.http.bind_address" . | default 8086 }}/query \
            --data-urlencode \
            "q=CREATE USER \"${INFLUXDB_USER}\" WITH PASSWORD '${INFLUXDB_PASSWORD}' {{ .Values.setDefaultUser.user.privileges }}"


            Most likely because InfluxDB is not fully up when the job is running, we can tune the backoffLimit and deadline.

            By default deadline is configured to 300 seconds and backofflimit to 6.. so that should be plenty of time... the config is on values.yaml

            Just for testing, you can also try installing InfluxDB without auth active; once InfluxDB is up, exec into the container and do a manual curl to create the user with the password, to see if it works.

            1. Hi Federico Rossi,
              I tried both of the suggestions from your end.


              1) When default user is disabled but authentication enabled, the influxdb comes up & we can manually curl to influxdb service ip to create the default admin user.

              After this the influxdb can be accessed and DBs created/deleted, etc. But, like you mentioned this is a temporary check/fix.

              # kubectl get services -A -o wide| grep influx
              default       ricplt-influxdb                             ClusterIP   10.96.42.99      <none>        8086/TCP,8088/TCP                 96s     app.kubernetes.io/instance=right-umbrellabird,app.kubernetes.io/name=influxdb
              
              # curl -XPOST http://10.96.42.99:8086/query --data-urlencode "q=create user admin with password 'admin' with all privileges"
              {"results":[{"statement_id":0}]}


              2) I tried playing with the activeDeadline and backoffLimit variables in values.yaml for setDefaultUser, but none of them seems to help.
              Somehow the pod that creates the secret and the default user is failing.

              For comparison with the vanilla helm charts from the InfluxData team at https://github.com/influxdata/helm-charts/tree/master/charts/influxdb: if I use them, the following additional items are created and executed. The crashing pod keeps backing off until it succeeds in setting authentication and then exits successfully.

              Additional things in vanilla influxdb helm charts
              #kubectl get secret -A | grep influx
              default           exciting-kitten-influxdb-auth                           Opaque                                2      6s
              default           exciting-kitten-influxdb-token-6lmvj                    kubernetes.io/service-account-token   3      6s
              
              #kubectl get pods -A | grep -i influx
              NAMESPACE     NAME                                                         READY   STATUS             RESTARTS   AGE
              default       exciting-kitten-influxdb-0                                   0/1     Running            0          16s
              default       exciting-kitten-influxdb-set-auth-wjqbc                      0/1     CrashLoopBackOff   1          16s


              3) By default, the Dawn release installs helm version 2.17.0, so I am using the same. I can't see much in the helm install output even with the debug option.

              helm debug install
              root@ubuntu18-2:~/code/oransc/tmp/dep/ric-dep/helm/3rdparty/influxdb# helm --debug install .
              [debug] Created tunnel using local port: '37099'
              
              [debug] SERVER: "127.0.0.1:37099"
              
              [debug] Original chart version: ""
              [debug] CHART PATH: /root/code/oransc/tmp/dep/ric-dep/helm/3rdparty/influxdb
              
              Error: release dining-marmot failed: the server could not find the requested resource


              Any further inputs/help is appreciated. 

              Regards,
              Ganesh Jaju

              1. I checked it. My error when activating the auth is different from yours, not sure if it is related, but here are my findings. Once you set setDefaultUser to true, it loads post-install-set-auth.yaml and secret.yaml

                If we run:

                 helm template influx . -s templates/post-install-set-auth.yaml 

                and

                 helm template influx . -s templates/secret.yaml


                You'll see that in the parsed template the apiVersion field ends up on the same line as the comment. The reason is that the template uses '{{- }}': the '-' strips all surrounding whitespace up to the next non-whitespace character, so the apiVersion line gets merged into the comment and the rendered manifest effectively loses it. The fix is to change '{{-' to '{{'.


                post-install-set-auth.yaml

                From:

                {{- if .Values.setDefaultUser.enabled -}}
                apiVersion: batch/v1
                kind: Job

                To:

                {{ if .Values.setDefaultUser.enabled -}}
                apiVersion: batch/v1
                kind: Job


                secrets.yaml

                From:

                {{- if .Values.setDefaultUser.enabled -}}
                {{- if not (.Values.setDefaultUser.user.existingSecret) -}}
                apiVersion: v1
                kind: Secret
                
                

                To:

                {{ if .Values.setDefaultUser.enabled -}}
                {{- if not (.Values.setDefaultUser.user.existingSecret) -}}
                apiVersion: v1
                kind: Secret


                Try it and let me know if it works for you.

                1. Thanks a lot Federico Rossi. The fix worked. This should be pushed to Dawn branch too if possible.

  50. I compiled the new info and fixes that might help someone with the process. If you follow the steps today (22-9-2021), some of the files in the repository do not work because some of the data is outdated (for instance links to images and versions on googleapis).

    This sequence worked for me:

    (Be advised that some docker images take a bit of time to download. Hence, some pods might crash before everything is ready. This is expected; once all the pods are properly initialised, everything will be up and running.)

    1. Make sure you have Ubuntu 18.04
    2.  follow steps 1 and 2 in the article
    3.  Before step 3:
      1. Edit the file tools/k8s/bin/k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh
      2.  change helm version from 2.12.3  to 2.17.0 
      3. change location of the helm image from
        https://storage.googleapis.com/kubernetes-helm/helm-v${HELMVERSION}-linux-amd64.tar.gz
        to 
        https://get.helm.sh/helm-v${HELMVERSION}-linux-amd64.tar.gz


    4. Edit the files ric-dep/helm/infrastructure/subcharts/kong/values.yaml and ric-aux/helm/infrastructure/subcharts/kong/values.yaml

      1. replace

        kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller

        by

        kong/kubernetes-ingress-controller 


    5. Edit the files ric-dep/helm/infrastructure/values.yaml and ric-dep/helm/appmgr/values.yaml
      1. replace lines 

        registry: gcr.io
        name: kubernetes-helm/tiller
        tag: v2.12.3

        by

        registry: ghcr.io
        name: helm/tiller
        tag: v2.17.0


    6. continue normally with step 3 and beyond

    Also, in the steps involving curl, you might need to use your local IP address instead of ${hostname} to make sure your local machine answers and the requests reach the right service ports.
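    For example, the xApp deployment call shown in the comments above would then look like this (the IP is purely illustrative):

    curl --location --request POST "http://192.168.1.10:32080/appmgr/ric/v1/xapps" --header 'Content-Type: application/json' --data-raw '{"xappName": "hw-python", "helmVersion": "1.0.0"}'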


    Hope it helps

    1. Anonymous

      Excellent!  It works for me. Thank you.

  51. I tried deploying the master branch with the Dawn recipe file, but the influxdb pod is in Pending state. Can someone help me fix this?


    NAME READY STATUS RESTARTS AGE
    deployment-ricplt-a1mediator-b4576889d-dqs2b 1/1 Running 7 7d6h
    deployment-ricplt-alarmmanager-f59846448-76tsl 1/1 Running 4 7d6h
    deployment-ricplt-appmgr-7cfbff4f7d-8gkmh 1/1 Running 4 7d6h
    deployment-ricplt-e2mgr-67f97459cd-4xjdz 1/1 Running 3 2d2h
    deployment-ricplt-e2term-alpha-849df794c-j6nhf 1/1 Running 3 2d2h
    deployment-ricplt-jaegeradapter-76ddbf9c9-r464v 1/1 Running 4 7d6h
    deployment-ricplt-o1mediator-f7dd5fcc8-dt9kq 1/1 Running 4 7d6h
    deployment-ricplt-rtmgr-7455599d58-np94f 1/1 Running 7 7d6h
    deployment-ricplt-submgr-6cd6775cd6-x8z74 1/1 Running 4 7d6h
    deployment-ricplt-vespamgr-757b6cc5dc-4vtzn 1/1 Running 4 7d6h
    deployment-ricplt-xapp-onboarder-5958856fc8-p8bjl 2/2 Running 8 7d6h
    r4-infrastructure-kong-7995f4679b-n65qm 2/2 Running 11 7d5h
    r4-infrastructure-prometheus-alertmanager-5798b78f48-xks4r 2/2 Running 8 7d6h
    r4-infrastructure-prometheus-server-c8ddcfdf5-55tf8 1/1 Running 4 7d6h
    ricplt-influxdb-meta-0 0/1 Pending 0 7d6h
    statefulset-ricplt-dbaas-server-0 1/1 Running 4 7d6h

    1. 柯俊先


      Hello, I also tried to deploy the near-RT RIC with the Cherry version and faced the same issue.

      Finally, I found the instructions in the document, which provide a solution to this issue:

      https://github.com/o-ran-sc/it-dep/blob/master/docs/ric/installation-k8s1node.rst

      Please find the corresponding steps in the section "Onetime setup for Influxdb".

  52. Hello!

    I'm trying to run the near-RT RIC on an ARM-based instance. All the images were built from source on an ARM machine.
    After modifications to the helm charts, I was able to deploy the components to a local cluster:

    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5644d7b6d9-46w7s 1/1 Running 4 2d21h
    kube-system coredns-5644d7b6d9-g4qmk 1/1 Running 4 2d21h
    kube-system etcd-ip-10-0-0-113 1/1 Running 4 2d21h
    kube-system kube-apiserver-ip-10-0-0-113 1/1 Running 4 2d21h
    kube-system kube-controller-manager-ip-10-0-0-113 1/1 Running 5 2d21h
    kube-system kube-flannel-ds-4v86b 1/1 Running 4 2d21h
    kube-system kube-proxy-5mlzb 1/1 Running 4 2d21h
    kube-system kube-scheduler-ip-10-0-0-113 1/1 Running 4 2d21h
    kube-system tiller-deploy-7d5568dd96-zshl6 1/1 Running 4 2d21h
    ricinfra deployment-tiller-ricxapp-6b6b4c787-qnvbd 1/1 Running 0 20h
    ricinfra tiller-secret-generator-ls4lp 0/1 Completed 0 20h
    ricplt deployment-ricplt-a1mediator-6555b9886b-r2mll 1/1 Running 0 20h
    ricplt deployment-ricplt-alarmmanager-59cc7d4cb-kmkp4 1/1 Running 0 20h
    ricplt deployment-ricplt-appmgr-5fbcf5c7f7-ndzwm 1/1 Running 0 20h
    ricplt deployment-ricplt-e2mgr-5d9b49f784-qj2p9 1/1 Running 0 20h
    ricplt deployment-ricplt-e2term-alpha-645778bdc8-cj79n 0/1 CrashLoopBackOff 240 20h
    ricplt deployment-ricplt-jaegeradapter-c68c6b554-mlf7j 1/1 Running 0 20h
    ricplt deployment-ricplt-o1mediator-86587dd94f-7mjr8 1/1 Running 0 20h
    ricplt deployment-ricplt-rtmgr-67c9bdccf6-9d7v5 1/1 Running 3 20h
    ricplt deployment-ricplt-submgr-6ffd499fd5-wktgv 1/1 Running 0 20h
    ricplt deployment-ricplt-vespamgr-68b68b78db-9tpm8 1/1 Running 0 20h
    ricplt deployment-ricplt-xapp-onboarder-7c5f47bc8-fl9sf 2/2 Running 0 20h
    ricplt r4-infrastructure-kong-84cd44455-l9tmc 2/2 Running 2 20h
    ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-2th9v 2/2 Running 0 20h
    ricplt r4-infrastructure-prometheus-server-5fd7695-2vjrk 1/1 Running 0 20h
    ricplt ricplt-influxdb-meta-0 0/1 Pending 0 20h
    ricplt statefulset-ricplt-dbaas-server-0 1/1 Running 0 20h


    I'm struggling to get the e2term pod working; it fails with the following error (from kubectl logs):

    environments service name is SERVICE_RICPLT_E2TERM_RMR_ALPHA_SERVICE_HOST
    service ip is 10.x.x.248
    nano=38000
    loglevel=error
    volume=log
    #The key name of the environment holds the local ip address
    #ip address of the E2T in the RMR
    local-ip=10.x.x.248
    #prometheus mode can be pull or push
    prometheusMode=pull
    #timeout can be from 5 seconds to 300 seconds default is 10
    prometheusPushTimeOut=10
    prometheusPushAddr=127.0.0.1:7676
    prometheusPort=8088
    #trace is start, stop
    trace=stop
    external-fqdn=10.102.98.248
    #put pointer to the key that point to pod name
    pod_name=E2TERM_POD_NAME
    sctp-port=36422
    ./startup.sh: line 16: 24 Segmentation fault (core dumped) ./e2 -p config -f config.conf

    The 10.x.x.248 IP is the same as for service-ricplt-e2term-rmr-alpha, but the pod keeps crashing.

    Has anyone faced a similar issue?

    Thanks.

    1. What's your runtime environment? Are you running it on VMs? If yes which platform?

      If you scroll up in the comments, another user had a similar issue with a coredump on E2, specifically when using Proxmox virtualization; the fix there was to set the CPU type to 'host'. This might be a similar issue.

      Check that first; we will take it from there for further troubleshooting.

      1. It's an AWS arm-based instance with ubuntu 18.04

        Helm 2.17
        Kube 1.16
        Docker 20.10

          1. A quick observation: do you have the SCTP kernel module loaded?

          1. Do you mean libsctp-dev?
            If so, it wasn't installed; I just added it.

            1. Not just that, I mean the actual kernel module:

              lsmod | grep sctp
              xt_sctp 16384 5
              sctp 405504 304
              libcrc32c 16384 7 nf_conntrack,nf_nat,openvswitch,nf_tables,xfs,libceph,sctp
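              If it is not loaded, you can load it and make it persist across reboots (sketch):

              sudo modprobe sctp
              echo sctp | sudo tee /etc/modules-load.d/sctp.conf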

              1. Enabled the modules and redeployed the platform. Didn't help.

                lsmod | grep sctp
                sctp 385024 40
                libcrc32c 16384 6 nf_conntrack,nf_nat,btrfs,raid456,ip_vs,sctp

                1. How did you build the e2term container?

                  git clone https://gerrit.o-ran-sc.org/r/ric-plt/e2mgr

                  And you built it from the Dockerfile?
                  The Dockerfile calls the script build-e2mgr-ubuntu.sh using amd64 packages for RMR, you may have to compile RMR/NNG for ARM then change the Dockerfile accordingly to copy the library in the container.


                  1. Yes, I've rebuilt all the bldr-images and all the components for arm and changed the Dockerfiles/scripts.

                    Here is the build part of the Dockerfile for e2term


                    FROM localhost:5000/bldr-ubuntu18-c-go:1.9.0 as ubuntu

                    WORKDIR /opt/e2/

                    ARG BUILD_TYPE="Debug"
                    #"Release"

                    RUN apt-get update
                    RUN apt-get install -y lcov
                    RUN mkdir -p /opt/e2/RIC-E2-TERMINATION/ \
                    && mkdir -p /opt/e2/RIC-E2-TERMINATION/TEST/T1 \
                    && mkdir -p /opt/e2/RIC-E2-TERMINATION/TEST/T2 \
                    && mkdir -p /opt/e2/RIC-E2-TERMINATION/3rdparty

                    #RUN apt-get install -y libgtest-dev lcov
                    #RUN cd /usr/src/gtest && /usr/local/bin/cmake CMakeLists.txt && make && cp *.a /usr/lib

                    COPY librmr_si.so.4.7.4 /usr/local/lib/.
                    COPY librmr_si.so.4 /usr/local/lib/.
                    COPY librmr_si.so /usr/local/lib/.
                    COPY . /opt/e2/RIC-E2-TERMINATION/
                    COPY rmr-dev_4.7.4_aarch64.deb .
                    COPY rmr_4.7.4_aarch64.deb .
                    COPY mdclog_0.1.4-1_arm64.deb .
                    COPY mdclog-dev_0.1.4-1_arm64.deb .

                    RUN apt-get install -y libgtest-dev lcov
                    RUN cd /usr/src/gtest && /usr/local/bin/cmake CMakeLists.txt && make && cp *.a /usr/lib
                    RUN apt-get update && apt-get install -y libcurl4-gnutls-dev gawk libtbb-dev libtbb-doc libtbb2 libtbb2-dbg \
                    && apt-get install -y python3 python3-pip python3-setuptools python3-wheel ninja-build \
                    && pip3 install meson \
                    && mv /opt/e2/RIC-E2-TERMINATION/CMakeLists.txt /opt/e2/ && cat /opt/e2/RIC-E2-TERMINATION/config/config.conf \
                    && dpkg -i mdclog_0.1.4-1_arm64.deb \
                    && dpkg -i mdclog-dev_0.1.4-1_arm64.deb \
                    && wget --content-disposition https://answers.launchpad.net/ubuntu/+source/glibc/2.29-0ubuntu2/+build/16599429/+files/libc6_2.29-0ubuntu2_arm64.deb \
                    && dpkg -i libc6_2.29-0ubuntu2_arm64.deb && wget --content-disposition http://ports.ubuntu.com/pool/universe/c/cgreen/libcgreen1_1.3.0-2_arm64.deb \
                    && dpkg -i libcgreen1_1.3.0-2_arm64.deb && wget --content-disposition http://old.kali.org/kali/pool/main/c/cgreen/cgreen1_1.3.0-2_arm64.deb \
                    && dpkg -i cgreen1_1.3.0-2_arm64.deb \
                    && wget --content-disposition http://ports.ubuntu.com/pool/universe/c/cgreen/libcgreen1-dev_1.3.0-2_arm64.deb && dpkg -i libcgreen1-dev_1.3.0-2_arm64.deb \
                    && dpkg -i rmr_4.7.4_aarch64.deb && dpkg -i rmr-dev_4.7.4_aarch64.deb \
                    && cd /opt/e2/RIC-E2-TERMINATION/3rdparty && git clone https://github.com/pistacheio/pistache.git && cd pistache \
                    && meson setup build --buildtype=release -DPISTACHE_USE_SSL=false -DPISTACHE_BUILD_EXAMPLES=false -DPISTACHE_BUILD_TESTS=false -DPISTACHE_BUILD_DOCS=false --prefix=/usr/local \
                    && meson compile -C build && meson install -C build && ldconfig \
                    && cd /opt/e2/RIC-E2-TERMINATION/3rdparty && git clone -v https://github.com/jupp0r/prometheus-cpp.git \
                    && cd prometheus-cpp && git submodule init && git submodule update && mkdir build && cd build \
                    && cmake .. -DBUILD_SHARED_LIBS=OFF && make -j 4 && make install && ldconfig \
                    && cd /opt/e2/RIC-E2-TERMINATION/3rdparty && git clone https://github.com/jarro2783/cxxopts.git \
                    && cd /opt/e2/RIC-E2-TERMINATION/3rdparty && git clone https://github.com/Tencent/rapidjson.git \
                    && cd /opt/e2/RIC-E2-TERMINATION/3rdparty && git clone https://github.com/zeux/pugixml.git \
                    && cd /opt/e2/ && git clone https://github.com/bilke/cmake-modules.git \
                    && cd /opt/e2/ && /usr/local/bin/cmake -D CMAKE_BUILD_TYPE=$BUILD_TYPE . && make \
                    && echo "3" > /opt/e2/rmr.verbose



                    #RUN apt-get update && apt-get install -f libssl-dev -y
                    #RUN apt-get -y install python3-pip python3-openssl
                    #RUN pip3 install meson openssl-python
                    #RUN ln -s /usr/local/bin/meson /usr/bin/meson

                    #RUN git clone https://github.com/google/googletest && cd googletest && mkdir build && cd build && cmake .. && make && make install
                    #RUN git clone https://github.com/pistacheio/pistache.git && cd pistache && meson setup build --buildtype=release -DPISTACHE_USE_SSL=true -DPISTACHE_BUILD_EXAMPLES=true -DPISTACHE_BUILD_TESTS=true -DPISTACHE_BUILD_DOCS=false --prefix=$PWD/prefix \
                    # && meson compile -C build && meson install -C build

                    #RUN cd /opt/e2/RIC-E2-TERMINATION/3rdparty && git clone -v https://github.com/jupp0r/prometheus-cpp.git \
                    # && wget --content-disposition http://ports.ubuntu.com/pool/universe/c/cgreen/libcgreen1-dev_1.2.0-2_arm64.deb && dpkg -i libcgreen1-dev_1.2.0-2_arm64.deb \
                    # && wget --content-disposition http://ports.ubuntu.com/pool/universe/c/cgreen/libcgreen1_1.2.0-2_arm64.deb && dpkg -i libcgreen1_1.2.0-2_arm64.deb \
                    # && cd prometheus-cpp && git submodule init && git submodule update && mkdir build && cd build \
                    # && cmake .. -DCMAKE_BUILD_TYPE=UBSAN -DCMAKE_BUILD_TYPE=asan -DCMAKE_BUILD_TYPE=lsan -DCMAKE_BUILD_TYPE=msan -DBUILD_SHARED_LIBS=OFF && CFLAGS="-fsanitize=address" make -j 4 && make install && ldconfig \
                    # && cd /opt/e2/RIC-E2-TERMINATION/3rdparty && git clone https://github.com/jarro2783/cxxopts.git \
                    # && cd /opt/e2/RIC-E2-TERMINATION/3rdparty && git clone https://github.com/Tencent/rapidjson.git \
                    # && cd /opt/e2/RIC-E2-TERMINATION/3rdparty && git clone https://github.com/zeux/pugixml.git \
                    # && cd /opt/e2/ && git clone https://github.com/bilke/cmake-modules.git \
                    # && cd /opt/e2/ && /usr/local/bin/cmake -DCMAKE_BUILD_TYPE=UBSAN -DCMAKE_BUILD_TYPE=asan -DCMAKE_BUILD_TYPE=lsan -DCMAKE_BUILD_TYPE=msan -D CMAKE_BUILD_TYPE=$BUILD_TYPE . && CFLAGS="-fsanitize=address" make \
                    # && echo "3" > /opt/e2/rmr.verbose


                    RUN if [$BUILD_TYPE == "Debug"] ; then make e2_coverage ; fi

                    # && git clone http://gerrit.o-ran-sc.org/r/ric-plt/tracelibcpp \
                    # && cd tracelibcpp && mkdir build && cd build \
                    # && sed -i '19iset\(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O3"\)' ../CMakeLists.txt \
                    # && sed -i '19iset\(CMAKE_CXX_STANDARD 17\)' ../CMakeLists.txt\
                    # && /usr/local/bin/cmake .. && make && cd .. && cp include/tracelibcpp/tracelibcpp.hpp . && cd .. \
                    # && mkdir nlohmann && cd nlohmann && find / -type f -name "json.hpp" -exec cp {} json.hpp \; && cd .. \
                    # && find / -type d -name "opentracing" -exec cp -r {} . \; \
                    # && cd /usr/local/lib/ && find / -type f -name "libyaml-cpp.a" -exec cp {} libyaml-cpp.a \; \
                    # && find / -type f -name "libopentracing.a" -exec cp {} libopentracing.a \; && cd /opt/e2/RIC-E2-TERMINATION && ls nlohmann


                      1. Alright, time to get that coredump and see what's going on. Can you please recompile with the CMAKE_BUILD_STRIP flag set to false when running cmake, and in the Dockerfile also create a directory where the coredumps can be written. Then run the container directly with docker run (no need for the kubernetes cluster), mount the volume for the coredumps, and set two flags on the docker run command line: --sysctl kernel.core_pattern="/cores/core.%p" --ulimit core=-1


                      These are not step-by-step instructions, but I hope it is clear; if you need further help let me know.

                      The idea is to get the coredump and run gdb in order to see which library is causing the segfault. We can take it from there.

                        I've rebuilt the image with CMAKE_BUILD_STRIP and modified the Dockerfile. Still no luck getting the core dump.

                        root@ip-10-0-0-83:/home/ubuntu# docker run -d -t -v /home/ubuntu/cores:/cores --sysctl kernel.core_pattern="/cores/core.%p" --ulimit core=-1 ric-plt-e2-core /bin/bash
                        invalid argument "kernel.core_pattern=/cores/core.%p" for "--sysctl" flag: sysctl 'kernel.core_pattern=/cores/core.%p' is not whitelisted

                        I've also tried running as --privileged , still no luck.

                        Also, if I run it without those two flags - it runs, but startup.sh doesn't work as it doesn't see the service.

                        1. It uses the RMR_SRC_ID environment variable pointing to the service for the RMR interface on E2term

                          Add to docker run: 

                          --env="RMR_SRC_ID=127.0.0.1"


                          It should be able to initialize and segfault.

                          1. Still missing the local_ip

                            root@03ecb9f74ff7:/opt/e2# ./startup.sh
                            environments service name is 127_SERVICE_HOST
                            service ip is
                            nano=38000
                            loglevel=error
                            volume=log
                            #The key name of the environment holds the local ip address
                            #ip address of the E2T in the RMR
                            local-ip=
                            #prometheus mode can be pull or push
                            prometheusMode=pull
                            #timeout can be from 5 seconds to 300 seconds default is 10
                            prometheusPushTimeOut=10
                            prometheusPushAddr=127.0.0.1:7676
                            prometheusPort=8088
                            #trace is start, stop
                            trace=stop
                            external-fqdn=
                            #put pointer to the key that point to pod name
                            pod_name=E2TERM_POD_NAME
                            sctp-port=36422
                            {"ts":1632845832084,"crit":"ERROR","id":"E2Terminator","mdc":{"PID":"37","POD_NAME":"","CONTAINER_NAME":"","SERVICE_NAME":"","HOST_NAME":"","SYSTEM_NAME":""},"msg":"problematic entry: local-ip= no value "}
                            {"ts":1632845832084,"crit":"ERROR","id":"E2Terminator","mdc":{"PID":"37","POD_NAME":"","CONTAINER_NAME":"","SERVICE_NAME":"","HOST_NAME":"","SYSTEM_NAME":""},"msg":"problematic entry: external-fqdn= no value "}
                            {"ts":1632845832084,"crit":"ERROR","id":"E2Terminator","mdc":{"PID":"37","POD_NAME":"","CONTAINER_NAME":"","SERVICE_NAME":"","HOST_NAME":"","SYSTEM_NAME":""},"msg":"illegal local-ip."}

                            1. Create a config.conf and map it to the container, so you can control those "dynamic" variables 
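                               A minimal sketch of what that mapping could look like (the host paths, the image name from your earlier run, and the in-container config directory are assumptions; the directory just has to match whatever you later pass to the e2 binary with -p):

                               # sketch only: mount a host directory containing config.conf into the container
                               mkdir -p /home/ubuntu/e2-config
                               cp config.conf /home/ubuntu/e2-config/
                               docker run -d -t -v /home/ubuntu/cores:/cores -v /home/ubuntu/e2-config:/opt/e2/config --env="RMR_SRC_ID=127.0.0.1" --ulimit core=-1 ric-plt-e2-core /bin/bash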

                              1. Okay, I was able to get the core dump from e2:

                                root@ip-10-0-0-83:/home/ubuntu/cores# gdb --core=core.1632910168.e2.23
                                GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1
                                Copyright (C) 2018 Free Software Foundation, Inc.
                                License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
                                This is free software: you are free to change and redistribute it.
                                There is NO WARRANTY, to the extent permitted by law. Type "show copying"
                                and "show warranty" for details.
                                This GDB was configured as "aarch64-linux-gnu".
                                Type "show configuration" for configuration details.
                                For bug reporting instructions, please see:
                                <http://www.gnu.org/software/gdb/bugs/>.
                                Find the GDB manual and other documentation resources online at:
                                <http://www.gnu.org/software/gdb/documentation/>.
                                For help, type "help".
                                Type "apropos word" to search for commands related to "word".
                                [New LWP 23]
                                Core was generated by `./e2 -p config -f config.conf'.
                                Program terminated with signal SIGSEGV, Segmentation fault.
                                #0 0x0000ffff99fa0b90 in ?? ()
                                (gdb) bt
                                #0 0x0000ffff99fa0b90 in ?? ()
                                #1 0x0000ffff99fa2370 in ?? ()
                                Backtrace stopped: previous frame identical to this frame (corrupt stack?)

                                  1. Still missing symbols. Can you do a "file e2" on the binary? What's the output?

                                    Looks like we need to add a debug flag on the compiler too.

                                    Where it says cmake and CFLAGS, change the variable like this:


                                  CFLAGS="-fsanitize=address -g -O0"


                                    -g adds debug information to the binary.

                                    -O0 compiles without optimization.


                                  Recompile/build and run and check the coredump.

                                  1. Here is a file output:
                                    e2: ELF 64-bit LSB shared object, ARM aarch64, version 1 (GNU/Linux), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, for GNU/Linux 3.7.0, BuildID[sha1]=b477af9dcbe5440fe54d19a51c92ffe5de0bc2a1, with debug_info, not stripped


                                    After adding the flags and rebuilding I've got more info:

                                    root@ebee07490b7c:/opt/e2# ./e2 -p config -f config.conf
                                    ==119==ASan runtime does not come first in initial library list; you should either link runtime to your application or manually preload it with LD_PRELOAD.


                                    1. Federico Rossi

                                      Here is the output with preloaded libasan.so.4

                                      root@624eedeaed22:/opt/e2# ./e2 -p config -f config.conf
                                      ASAN:DEADLYSIGNAL
                                      =================================================================
                                      ==25==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0xffffbd841b90 bp 0xffffeb25a0e0 sp 0xffffeb25a0e0 T0)
                                      ==25==The signal is caused by a READ memory access.
                                      ==25==Hint: address points to the zero page.
                                      #0 0xffffbd841b8f in index (/lib/aarch64-linux-gnu/libc.so.6+0x7ab8f)
                                      #1 0xffffbd84336f in strstr (/lib/aarch64-linux-gnu/libc.so.6+0x7c36f)

                                      AddressSanitizer can not provide additional info.
                                      SUMMARY: AddressSanitizer: SEGV (/lib/aarch64-linux-gnu/libc.so.6+0x7ab8f) in index
                                      ==25==ABORTING


                                      gdb results:

                                      root@624eedeaed22:/opt/e2# gdb e2 -ex r
                                      GNU gdb (Ubuntu 8.1.1-0ubuntu1) 8.1.1
                                      Copyright (C) 2018 Free Software Foundation, Inc.
                                      License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
                                      This is free software: you are free to change and redistribute it.
                                      There is NO WARRANTY, to the extent permitted by law. Type "show copying"
                                      and "show warranty" for details.
                                      This GDB was configured as "aarch64-linux-gnu".
                                      Type "show configuration" for configuration details.
                                      For bug reporting instructions, please see:
                                      <http://www.gnu.org/software/gdb/bugs/>.
                                      Find the GDB manual and other documentation resources online at:
                                      <http://www.gnu.org/software/gdb/documentation/>.
                                      For help, type "help".
                                      Type "apropos word" to search for commands related to "word"...
                                      Reading symbols from e2...done.
                                      Starting program: /opt/e2/e2
                                      [Thread debugging using libthread_db enabled]
                                      Using host libthread_db library "/lib/aarch64-linux-gnu/libthread_db.so.1".

                                      Program received signal SIGSEGV, Segmentation fault.
                                      strchr () at ../sysdeps/aarch64/strchr.S:102
                                      102 ../sysdeps/aarch64/strchr.S: No such file or directory.

                                        1. A null pointer.

                                         Please provide the "bt" stack trace as well, and the output of your current config.conf.

                                        Also add the following environment variable before running E2


                                        ASAN_OPTIONS=debug=true


                                        Will play around with the address sanitizer but let's first see if we have a full stack trace.
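                                         For reference, a small sketch of how those two pieces can be combined in one run (assuming the same working directory and config as before):

                                         export ASAN_OPTIONS=debug=true
                                         gdb -ex run -ex bt --args ./e2 -p config -f config.conf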

                                        1. bt trace:

                                          #0 strchr () at ../sysdeps/aarch64/strchr.S:102
                                          #1 0x0000fffff6db4370 in __GI_strstr (haystack=0x0, needle=0xaaaaaae28cf0 "alpha") at strstr.c:84
                                          #2 0x0000fffff7192e28 in strstr () from /usr/lib/aarch64-linux-gnu/libasan.so.4
                                          #3 0x0000aaaaaabb5344 in startPrometheus (sctpParams=...) at /opt/e2/RIC-E2-TERMINATION/sctpThread.cpp:497
                                          #4 0x0000aaaaaabb68a4 in main (argc=1, argv=0xfffffffff6b8) at /opt/e2/RIC-E2-TERMINATION/sctpThread.cpp:562

                                          config.conf:
                                          nano=38000
                                          loglevel=error
                                          volume=log
                                          #The key name of the environment holds the local ip address
                                          #ip address of the E2T in the RMR
                                          local-ip=127.0.0.1
                                          #prometheus mode can be pull or push
                                          prometheusMode=pull
                                          #timeout can be from 5 seconds to 300 seconds default is 10
                                          prometheusPushTimeOut=10
                                          prometheusPushAddr=127.0.0.1:7676
                                          prometheusPort=8088
                                          #trace is start, stop
                                          trace=stop
                                          external-fqdn=e2t.com
                                          #put pointer to the key that point to pod name
                                          pod_name=E2TERM_POD_NAME
                                          sctp-port=36422


                                          1. This is the culprit:


                                            sctpThread.cpp
                                            void startPrometheus(sctp_params_t &sctpParams) {
                                                auto podName = std::getenv("POD_NAME");
                                                string metric = "E2TBeta";
                                                if (strstr(podName, "alpha") != NULL) {
                                                    metric = "E2TAlpha";
                                                }


                                             podName for some reason is null; that's why it segfaults with a null pointer dereference.


                                            config.conf contains pod_name=E2TERM_POD_NAME


                                            E2TERM_POD_NAME is an env variable set in the Dockerfile, can you make sure that variable is set?


                                             There should be a sanity check in the code to make sure podName is not null, and it should print an error instead of dereferencing the null pointer.

                                            Alternatively remove the:

                                            string metric = "E2TBeta";
                                            if (strstr(podName, "alpha") != NULL) {
                                            metric = "E2TAlpha";
                                            }

                                             and just set string metric = "E2TAlpha", bypassing the strstr, and recompile to see if it works.


                                            1. Here are all the environment variables from the Dockerfile

                                              WORKDIR /opt/e2/
                                              ENV LD_LIBRARY_PATH=/usr/local/lib
                                              ENV RMR_SEED_RT=dockerRouter.txt
                                              ENV E2TERM_POD_NAME=e2term
                                              ENV LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libasan.so.4
                                              RUN chmod +x /opt/e2/startup.sh /opt/e2/rmr_probe
                                              EXPOSE 38000
                                              CMD ["sh", "-c", "./startup.sh"]

                                                1. Looks correct, it should work just fine. But try to bypass the strstr as mentioned; let's see what happens.

                                                1. Rebuilt and deployed with the other components. Same segfault. 

                                                2. But when I started it as a separate docker container - here is what I get

                                                  root@0fdf0bb21c03:/opt/e2# ./e2 -p config -f config.conf
                                                  1633022972165 76/RMR [INFO] ric message routing library on SI95 p=38000 mv=3 flg=00 id=a (4f3e234 4.7.4 built: Sep 1 2021)
                                                  terminate called without an active exception
                                                  1633022975991 76/RMR [INFO] sends: ts=1633022975 src=127.0.0.1:38000 target=10.0.2.15:38000 open=0 succ=0 fail=0 (hard=0 soft=0)
                                                  1633022975991 76/RMR [INFO] sends: ts=1633022975 src=127.0.0.1:38000 target=10.0.2.15:3801 open=0 succ=0 fail=0 (hard=0 soft=0)
                                                  1633022975991 76/RMR [INFO] sends: ts=1633022975 src=127.0.0.1:38000 target=10.0.2.15:4801 open=0 succ=0 fail=0 (hard=0 soft=0)
                                                  1633022975991 76/RMR [INFO] sends: ts=1633022975 src=127.0.0.1:38000 target=10.0.2.15:38010 open=0 succ=0 fail=0 (hard=0 soft=0)

                                                  1. Seems to be working. Can you try and run it on Kubernetes now? When you say deployed with the other components, do you mean on Kubernetes?

                                                    1. Yes, I've already deployed to kubernetes and got the same crashloop.

                                                      Can we schedule an online debugging session maybe? 

                                                      1. Send me an email: ferossi@redhat.com we can discuss some timeslots.

    2. When I deploy the E release near-RT RIC, I also hit this problem.

    3. Hi, have you solved this error?


  53. Hi All.

    I entered the following:


    # kubectl logs --tail=10 -n ricxapp ricxapp-hwxapp-6d49b695fb-pntbd

    1632279673 8/RMR [INFO] sends: ts=1632279673 src=service-ricxapp-hwxapp-rmr.ricxapp:4560 target=service-ricplt-a1mediator-rmr.ricplt:4562 open=1 succ=1 fail=0 (hard=0 soft=0)
    1632279673 8/RMR [INFO] sends: ts=1632279673 src=service-ricxapp-hwxapp-rmr.ricxapp:4560 target=service-ricplt-submgr-rmr.ricplt:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
    1632279673 8/RMR [INFO] sends: ts=1632279673 src=service-ricxapp-hwxapp-rmr.ricxapp:4560 target=service-ricplt-e2term-rmr-alpha.ricplt:38000 open=0 succ=0 fail=0 (hard=0 soft=0)
    1632279673 8/RMR [INFO] sends: ts=1632279673 src=service-ricxapp-hwxapp-rmr.ricxapp:4560 target=service-ricplt-e2mgr-rmr.ricplt:3801 open=0 succ=0 fail=0 (hard=0 soft=0)


    What does this mean → open=1 succ=1 fail=0 (hard=0 soft=0)

    I searched on this site but couldn't find the answer.


    Thank you!

    1. RMR is a peer-to-peer library that allows sending messages to other applications, as well as providing endpoint selection based on message type. RMR gets loaded with the initial endpoint route table, and that's what you see right there: messages being sent to the endpoints. The actual message transport system is based on NNG (Nanomsg). The output you see is there to check the endpoint status.


      open: Socket connection successfully open (EPSC_GOOD)

      succ: Successful connection + transaction (message)    (EPSC_FAIL)

      fail: Failure at both connection and transaction level  (EPSC_FAIL + EPSC_TRANS)

      fail (hard): Hard failure at connection level (destination port down? timeout connection?)  (EPSC_FAIL)

      fail (soft): Soft failure at transaction level (connected, message sent but no answer, error reply? timeout?) (EPSC_TRANS)

  54. Hi,
    has anyone successfully deployed the KPIMON xApp on the Dawn release without getting errors like this?

    {"ts":1632817538493,"crit":"INFO","id":"kpimon","mdc":{"time":"2021-09-28T08:25:38"},"msg":"rmrClient: Waiting for RMR to be ready ..."}
    1632817543 1/RMR [INFO] endpoint: no counts: empty table

    xApps are deployed, and there are no pod errors on the near-RT RIC or SMO.

    ricxapp-ad-68c7f449f4-hd4tt 1/1 Running 7 13d
    ricxapp-bouncer-xapp-7fff85f456-cnmh6 1/1 Running 7 14d
    ricxapp-hwxapp-6dfdb4fc7d-vftrj 1/1 Running 7 13d
    ricxapp-qp-867d9558c-gs2ck 1/1 Running 7 14d
    ricxapp-qpdriver-6b89bb66c-qkn6l 1/1 Running 7 14d
    ricxapp-trafficxapp-659b5fb754-zp8xn 1/1 Running 7 14d
    xappkpimon-77bc549d57-l4pks 1/1 Running 0 17h

    1. It's not an error; it simply means there are no endpoints defined for RMR. If you look at the code https://gerrit.o-ran-sc.org/r/scp/ric-app/kpimon

      The Dockerfile copies the routes.txt file to /opt/routes.txt, and routes.txt is empty. An initial route table will be the following:


      routes.txt
      newrt|start
      rte|20011|service-ricplt-a1mediator-rmr.ricplt:4562
      rte|12010|service-ricplt-e2term-rmr-alpha.ricplt:38000
      newrt|end
      
      


      You can use the RMR_SEED_RT env variable to specify a different file. There are different ways to do that, but it looks like you are better off building the container yourself and changing the xApp descriptor to point to your registry/container. One way is to edit the deployment and map /opt/routes.txt to a ConfigMap, for example:
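      A rough kubectl sketch (the ConfigMap and deployment names are just examples; adjust them to your xApp):

      kubectl create configmap kpimon-routes -n ricxapp --from-file=routes.txt
      kubectl set env deployment/ricxapp-xappkpimon -n ricxapp RMR_SEED_RT=/opt/routes.txt
      # then edit the deployment and mount the configmap at /opt/routes.txt
      kubectl edit deployment ricxapp-xappkpimon -n ricxapp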

      In addition you need to add the gNB list as an env variable in the xApp descriptor JSON as well, for example:


      "appenv": { "ranList":"gNB1,gNB2" },
      
      "messaging": { 




      1. Thank you, Sir (smile)
        I will try to deal with it following your advice.

      2. Hi Federico Rossi,
        with your instructions, the routes are defined and added under /opt/, but it is still not working.
        The logs below were captured right before the rollout restart deployment command, so the routes are inserted correctly from the Dockerfile.
        Can you give me more instructions to get it working with data from e2sim, for example?
        I would be really grateful.

        root@xappkpimon:/opt# ls
        config-file.json kpimon.log logs routes.txt
        root@xappkpimon:/opt# cat routes.txt
        newrt|start
        rte|20011|service-ricplt-a1mediator-rmr.ricplt:4562
        rte|12010|service-ricplt-e2term-rmr-alpha.ricplt:38000
        newrt|end

        root@nr-ric:~/dep/xapps/kpimon# kubectl logs -n ricxapp xappkpimon-c4cc66449-2rw8z
        {"ts":1632928986587,"crit":"INFO","id":"kpimon","mdc":{"time":"2021-09-29T15:23:06"},"msg":"rmrClient: Waiting for RMR to be ready ..."}
        1632928995 1/RMR [INFO] endpoint: no counts: empty table

        1. I want to check something, run this in the container

          echo $RMR_SEED_RT

          1. Federico Rossi
            Looks like the variable is empty (sad)

            root@xappkpimon:/go/src/gerrit.o-ran-sc.org/r/scp/ric-app/kpimon# echo $RMR_SEED_RT

            root@xappkpimon:/go/src/gerrit.o-ran-sc.org/r/scp/ric-app/kpimon#


            I tried this method, but it does not look like the correct way to fix it:

            root@xappkpimon:/opt# export RMR_SEED_RT=$(cat routes.txt)
            root@xappkpimon:/opt# echo $RMR_SEED_RT
            newrt|start rte|20011|service-ricplt-a1mediator-rmr.ricplt:4562 rte|12010|service-ricplt-e2term-rmr-alpha.ricplt:38000 newrt|end

            1. To solve this, I added some lines to the Dockerfile

              On the line:

              FROM ubuntu:18.04

              Add this below:
              ENV PATH $PATH:/usr/local/bin
              ENV GOPATH /go
              ENV GOBIN /go/bin
              ENV RMR_SEED_RT /opt/routes.txt
              COPY routes.txt /opt/routes.txt

  55. Hi Federico Rossi


    I installed the kpimon xApp but get a CrashLoopBackOff on the pod.

    1. I cloned "https://gerrit.o-ran-sc.org/r/scp/ric-app/kpimon"
    2. Onboarded the xapp with dms_cli using the xapp-descriptor/config.json and a generic schema file
    3. Installed it with dms_cli and got the CrashLoopBackOff.

    I tried with versions 1.0.0 and 1.0.1. Am I doing something wrong? The logs are empty.


    Thanks!

    1. I solved this by adding this line to the end of the Dockerfile

      CMD ./kpimon -f /go/src/gerrit.o-ran-sc.org/r/ric-plt/xapp-frame/config/config-file.yaml

      1. I added CMD ./kpimon -f /go/src/gerrit.o-ran-sc.org/r/ric-plt/xapp-frame/config/config-file.yaml to the end of the Dockerfile and installed the kpimon xApp, but the app is still in CrashLoopBackOff status.

        NAME READY STATUS RESTARTS AGE
        ricxapp-hw-python-64b5447dcc-vpwxs 0/1 Running 4560 14d
        ricxapp-sample-app-57878bfd95-nnwqn 1/1 Running 1 24h
        ricxapp-trafficxapp-659b5fb754-2pmpb 1/1 Running 6 7d2h
        ricxapp-xappkpimon-7ccb4b7c67-m5kbl 0/1 CrashLoopBackOff 19 66m

  56. Please explain how the A1 Mediator will be able to POST a Feedback Notification to the non-RT RIC as defined in the specification.

    1) We need some more information on the notificationDestination that the non-RT RIC will send. The specification mentions a query parameter in the PUT request,

    however it is not clear whether it will be sent in the schema or defined as an API.

    2) Require details of PolicyStatusObject that will be sent as part of Feedback Policy Status. Currently, the openapi.yaml does not define the same.

    1. Hi Ketan Parikh, I also want to connect this near-RT RIC platform with the non-RT RIC platform using the A1 mediator. Please let me know if you are able to set up communication between the near-RT RIC and the non-RT RIC.

  57. I am not able to install kpimon Xapp. I am getting the following error 

    ricxapp ricxapp-xappkpimon-67779646d9-nbk9k 0/1 CrashLoopBackOff 7 14m


    Logs - 

    The following is the error I am getting for kubectl describe command - 

    Warning FailedCreatePodSandBox 63s kubelet, instance-1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "2d2f7e9c31899ba825ff3b8568625c7be28fcc5c7bac45652489b7d8d27ad594" network for pod "ricxapp-xappkpimon-67779646d9-nbk9k": networkPlugin cni failed to set up pod "ricxapp-xappkpimon-67779646d9-nbk9k_ricxapp" network: open /run/flannel/subnet.env: no such file or directory
    Warning FailedCreatePodSandBox 59s kubelet, instance-1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6cda90607928afaea99b6b2175b085e5ec82e146f7fd86a47f411893e20699ac" network for pod "ricxapp-xappkpimon-67779646d9-nbk9k": networkPlugin cni failed to set up pod "ricxapp-xappkpimon-67779646d9-nbk9k_ricxapp" network: open /run/flannel/subnet.env: no such file or directory
    Normal SandboxChanged 57s (x3 over 65s) kubelet, instance-1 Pod sandbox changed, it will be killed and re-created.
    Normal Pulled 45s kubelet, instance-1 Container image "nexus3.o-ran-sc.org:10002/ric-app-kpimon:1.0.0" already present on machine
    Normal Created 40s kubelet, instance-1 Created container xappkpimon
    Normal Started 39s kubelet, instance-1 Started container xappkpimon
    Warning BackOff 37s kubelet, instance-1 Back-off restarting failed container

  58. E2Term on arm64 issue 

    1. Segmentation fault. 

      This issue was solved thanks to Federico Rossi's comment above.
      In the sctpThread.cpp file there is the following check for podName:

      void startPrometheus(sctp_params_t &sctpParams) {
          auto podName = std::getenv("POD_NAME");
          string metric = "E2TBeta";
          if (strstr(podName, "alpha") != NULL) {
              metric = "E2TAlpha";
          }


      Changed it to (including removing the actual check): 

      void startPrometheus(sctp_params_t &sctpParams) {
          auto podName = std::getenv("POD_NAME");
           string metric = "E2TAlpha";

      Note: you'll have to rebuild the image in order to apply this change.

    2. E2term pod health and readiness


      After deploying the updated e2term pod, here are the logs I get:

      root@ip-10-0-0-113:/home/ubuntu/ric-plt-e2/RIC-E2-TERMINATION# kubectl logs -n ricplt deployment-ricplt-e2term-alpha-5dfccf8ccc-lv8j8
      1633526247604 81/RMR [INFO] ric message routing library on SI95 p=38000 mv=3 flg=00 id=a (4f3e234 4.7.4 built: Sep 1 2021)
      terminate called without an active exception
      1633526252444 81/RMR [INFO] sends: ts=1633526252 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-admission-ctrl-xapp-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
      1633526252444 81/RMR [INFO] sends: ts=1633526252 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-rsm-rmr.ricplt:4801 open=0 succ=0 fail=0 (hard=0 soft=0)
      1633526252444 81/RMR [INFO] sends: ts=1633526252 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-e2mgr-rmr.ricplt:3801 open=0 succ=0 fail=0 (hard=0 soft=0)
      1633526252444 81/RMR [INFO] sends: ts=1633526252 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricxapp-ueec-rmr.ricxapp:4560 open=0 succ=0 fail=0 (hard=0 soft=0)
      1633526252444 81/RMR [INFO] sends: ts=1633526252 src=service-ricplt-e2term-rmr-alpha.ricplt:38000 target=service-ricplt-a1mediator-rmr.ricplt:4562 open=0 succ=0 fail=0 (hard=0 soft=0) 


      But the pod itself is indicated as not ready:

      ricplt       deployment-ricplt-e2term-alpha-5dfccf8ccc-lv8j8       0/1       Running       9       18m

      I've read a few comments far above this post and changed the it-dep/ric-dep/helm/e2term/values.yaml with the following health part:

      health:
        liveness:
          command: "/opt/e2/rmr_probe -h 0.0.0.0:38000 -t 30"
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30
          enabled: true

        readiness:
          command: "/opt/e2/rmr_probe -h 0.0.0.0:38000 -t 30"
          failureThreshold: 3
          initialDelaySeconds: 20
          periodSeconds: 60
          successThreshold: 1
          timeoutSeconds: 30
          enabled: true


      My pod is still crashing after 10-15 minutes though.

      Any suggestions gentlemen? 

      Thanks.

    1. Hi Viktor, the sctpThread.cpp change is a workaround and not a fix at the moment; we still don't know why that strstr fails when compiled for ARM.

      Anyway, to troubleshoot why the probe fails:

      1 - Edit the deployment for e2term  kubectl edit deployment deployment-ricplt-e2term-alpha -n ricplt 

      2 - Remove the readiness and liveness probe

      3 - Once pod is running kubectl exec -ti podname -n ricplt /bin/bash

      4 - Run the probe manually: /opt/e2/rmr_probe -h 0.0.0.0:38000 -t 30; add -v for verbose output and adjust -t as needed until it doesn't fail anymore (see the sketch below)


      Once you perform several tests let me know the results.
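      The same steps as a copy-pasteable sketch (the pod name is the one from the output above; it will change after you edit the deployment, so pick it up again from kubectl get pods):

      kubectl edit deployment deployment-ricplt-e2term-alpha -n ricplt      # remove the liveness/readiness probe sections
      kubectl get pods -n ricplt | grep e2term
      kubectl exec -ti deployment-ricplt-e2term-alpha-5dfccf8ccc-lv8j8 -n ricplt -- /bin/bash
      /opt/e2/rmr_probe -h 0.0.0.0:38000 -t 30 -v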

  59. Ric-plt-o1 new cross-platform issue 

    I was building O1 on both amd64 and arm64 this morning and found the following issue. 
    In the Dockerfile we do sed on common.h.in :

    RUN \
            cd /opt/dev && \
            git clone https://github.com/sysrepo/sysrepo.git && \
            cd sysrepo && sed -i -e 's/2000/30000/g;s/5000/30000/g' src/common.h.in && \
            mkdir build && cd build && \
            cmake -DCMAKE_BUILD_TYPE:String="Release" -DENABLE_TESTS=OFF -DREPOSITORY_LOC:PATH=/etc/sysrepo .. && \
            make -j2 && \
            make install && make sr_clean && \
            ldconfig

    The issue is that there is no such file in sysrepo anymore.

    I've tried to work around it by removing the common.h.in-related command, but here is what I got during the testing phase:

    Step 32/57 : RUN /usr/local/bin/sysrepoctl -i /go/src/ws/agent/yang/o-ran-sc-ric-ueec-config-v1.yang
    ---> Running in 2e917349cfe1
    [ERR] Internal error (/opt/dev/sysrepo/src/lyd_mods.c:1239).
    [ERR] Failed to update data for the new context.
    sysrepoctl error: Failed to connect (Operation failed)
    For more details you may try to increase the verbosity up to "-v3".
    The command '/bin/sh -c /usr/local/bin/sysrepoctl -i /go/src/ws/agent/yang/o-ran-sc-ric-ueec-config-v1.yang' returned a non-zero code: 1




    1. In the Dockerfile, replace common.h.in with common.h; the sed simply adjusts some hardcoded timeouts.

      1. Here is the workaround for o1 issues:

        1) use sysrepo version 2.0.41:

        # sysrepo
        RUN \
        cd /opt/dev && \
        git clone https://github.com/sysrepo/sysrepo.git && \
        cd sysrepo && git checkout tags/v2.0.41 && \
        sed -i -e 's/2000/30000/g;s/5000/30000/g' src/common.h.in && \
        mkdir build && cd build && \
        cmake -DCMAKE_BUILD_TYPE:String="Release" -DENABLE_TESTS=OFF -DREPOSITORY_LOC:PATH=/etc/sysrepo .. && \
        make -j2 && \
        make install && make sr_clean && \
        ldconfig


        2) use netopeer2 version 2.0.28

        # netopeer2
        RUN \
        cd /opt/dev && \
        git clone https://github.com/CESNET/Netopeer2.git && \
        cd Netopeer2 && git checkout tags/v2.0.28 && \
        mkdir build && cd build && \
        cmake -DCMAKE_BUILD_TYPE:String="Release" -DNP2SRV_DATA_CHANGE_TIMEOUT=30000 -DNP2SRV_DATA_CHANGE_WAIT=ON .. && \
        make -j2 && \
        make install


  60. Hello Federico Rossi Scott Daniels,

    I'm having issues with pushing changes (via git review) to master. I'm trying to push up a few changes to different repos to add arm64-based builds.


    fatal: Upload denied for project 'it/dev'
    fatal: Could not read from remote repository.

    Please make sure you have the correct access rights
    and the repository exists.

    My ssh is added to the account. I have no issues cloning but can not push anything.

    Thanks.

  61. RIC-PLT-SUBMGR

    I'm building the latest version of submgr on an arm64-based instance.

    Faced this issue while building the docker image:

    # gerrit.o-ran-sc.org/r/ric-plt/xapp-frame/pkg/xapp
    /root/go/pkg/mod/gerrit.o-ran-sc.org/r/ric-plt/xapp-frame.git@v0.9.3/pkg/xapp/xapp.go:42:2: undefined: testing.Init


    Here is the code from xapp.go:

    // For testing purpose go version 1.13 ->

    var _ = func() bool {
    testing.Init()
    return true
    }()


    Can it be commented out (ignored), assuming I'm using go1.12?

    Thanks.


  62. Hello all,

    After the installation I have noticed that "deployment-ricplt-rtmgr-9d4847788-n7gb8" is crashing and restarting. Inside the log I see  the following error:

    {"ts":1636636525319,"crit":"ERROR","id":"rtmgr","mdc":{"time":"2021-11-11T13:15:25"},"msg":"Failed to initialize nbi due to: Get http://service-ricplt-appmgr-
    http:8080/ric/v1/xapps: net/http: request canceled (Client.Timeout exceeded while awaiting headers)"}

    Does anybody know how to fix this?
    Thank you in advance!

    BR,
    Atanas 

    1. Hi.


      This is probably due to the fact that the local Chart Repository has not been built.

      I was able to solve this problem by doing the following workaround:

      1. Create a manifest (e.g. `deployment-ricplt-xapp-onboarder-http.yaml`) to deploy `chartmuseum`.
      2. Create a `ClusterIP` manifest (e.g. `service-ricplt-xapp-onboarder-http.yaml`) that listens on `service-ricplt-appmgr-http:8080`.

      There is probably a better way to do this, but it was sufficient as a temporary fix.


      (this is translated with www.DeepL.com/Translator (free version))

  63. Hello!

    E2term crashes. I'm seeing these logs when using different versions of RMR (4.6.1 and 4.7.4).

    Starting i: 1021
    Starting i: 1022
    Starting i: 1023
    Starting block7
    Starting block8
    [INFO] listen port: 43830; sending 1 messages
    1638804201716 69/RMR [INFO] ric message routing library on SI95 p=43830 mv=3 flg=01 (71df2a2 4.6.1 built: Mar 11 2021)
    [INFO] RMR initialised
    [INFO] starting session with 0.0.0.0, starting to send
    [FAIL] unable to connect to 0.0.0.0

    Any advice?

  64. Hi Team,

    Do the above steps work for Ubuntu 20.04 too? I am getting an 'Unsupported Ubuntu release (20.04) detected.' error during step 3. Or do I need to use only Ubuntu 18.04?

    1. Anonymous

      Hi Rajesh,

      use Ubuntu 18.04 as specified in the document.

  65. Hi Team,

    Is there any way to skip step 3, as it requires a restart of the VM? Is it possible to install Kubernetes, Helm, Docker, etc. manually and directly execute step 4? Or is there any way to directly run docker containers for all the platform components? Thanks

    1. You can install Helm using the link below:

      https://phoenixnap.com/kb/install-helm
      Just follow the steps and change the Helm version you are using.
      Or go to

      cd dep/tools/k8s/etc
      nano infra.rc
      to see which Helm version will be installed. I'm using 2.17.0 for both SMO and near-RT RIC; for the Docker version leave "" empty, and use the exact Kubernetes version. You could try this.
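      For example, to quickly check which versions the install script will use (just a sketch; the exact variable names can differ between branches):

      cd dep/tools/k8s/etc
      grep -i version infra.rc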


  66. Hello all,

    I was trying to install Near Realtime RIC, and all worked until step 4, deploying RIC with recipe.

    The error was:

    Error: could not find a ready tiller pod


    The pods were:

    NAMESPACE NAME READY STATUS RESTARTS AGE

    kube-system coredns-5644d7b6d9-cv7vh 1/1 Running 3 2d8h

    kube-system coredns-5644d7b6d9-wf99b 1/1 Running 3 2d8h

    kube-system etcd-ubuntu 1/1 Running 3 2d8h

    kube-system kube-apiserver-ubuntu 1/1 Running 3 2d8h

    kube-system kube-controller-manager-ubuntu 1/1 Running 3 2d8h

    kube-system kube-flannel-ds-8pqbl 1/1 Running 3 2d8h

    kube-system kube-proxy-qnbxt 1/1 Running 3 2d8h

    kube-system kube-scheduler-ubuntu 1/1 Running 3 2d8h

    kube-system tiller-deploy-7bbd5fb7-9jzlx 0/1 ImagePullBackOff 0 106m

     I couldn't activate the tiller pod. Do you know why?

    Thank you



    1. Hi Tomas, I am also facing the same issue and have not been able to resolve it so far. Can someone in the community please help?

    2. Hi @Tomas J. Cantador, could you please tell me which branch you have cloned?
      If it is bronze, tiller will try to fetch images based on Helm v2.12.3, which are no longer available in the repo.

      I suggest you clone the master branch, whose new k8s script will install Helm v2.17.0; it worked for me.

      Hope this will help.
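      For reference, a sketch of cloning the master branch with its submodules (mirroring the clone step from the install guide; adjust the branch to what you need):

      git clone "https://gerrit.o-ran-sc.org/r/it/dep" -b master
      cd dep
      git submodule update --init --recursive --remote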

  67. THANK YOU VERY MUCH CHANDRA


    IT WORKED CHANGING HELM TO v2.17.0!!!!!

    THANKS

  68. Anonymous

    Hi Team,

    I've run steps 1-4 on the guide here. But, when I run kubectl get pods -n ricplt, I only have 13 pods running. Comparing the output with others posted in the comments, I notice that jaegeradapter and xapp-onboarder are not deployed. 

    deployment-ricplt-a1mediator-9fc67867-vmrfg 1/1 Running 0 58m
    deployment-ricplt-alarmmanager-7685b476c8-ndpt8 1/1 Running 0 57m
    deployment-ricplt-appmgr-6fd6664755-pvndg 1/1 Running 0 59m
    deployment-ricplt-e2mgr-66cdc4d6b6-5k55n 1/1 Running 0 58m
    deployment-ricplt-e2term-alpha-7cbc5f4687-sbnvj 1/1 Running 0 58m
    deployment-ricplt-o1mediator-d8b9fcdf-rsp9l 1/1 Running 0 57m
    deployment-ricplt-rtmgr-6d559897d8-hjvgz 1/1 Running 3 59m
    deployment-ricplt-submgr-65bcd95469-vcm8w 1/1 Running 0 58m
    deployment-ricplt-vespamgr-7458d9b5d-m9vmk 1/1 Running 0 57m
    r4-infrastructure-kong-646b68bd88-hk7qj 2/2 Running 2 60m
    r4-infrastructure-prometheus-alertmanager-75dff54776-brq29 2/2 Running 0 60m
    r4-infrastructure-prometheus-server-5fd7695-zlgdv 1/1 Running 0 60m
    statefulset-ricplt-dbaas-server-0 1/1 Running 0 59m

    Does anyone know how to deploy the remaining 2 pods? I think the missing xapp-onboarder is also the reason why I get {"message":"no Route matched with those values"} when I run the following command from Step 5: 

    curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download"  --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"

    1. Anonymous

      Hello,

      That also happened to me. Did you manage to fix it?

      Thank you.

      1. If you are using the D or E release, please use the dms_cli utility to onboard xApps. Search here in the wiki and you can find more about it.

        I can see that xapp-onboarder is part of the recipe, but it doesn't get deployed automatically for some reason.


        To manually deploy it you can do the following:


        cd ric-dep/helm/xapp-onboarder

        helm dependency build

        helm install xapp-onboarder . -n ricplt


        1. Anonymous

          This command is not working (helm install xapp-onboarder . -n ricplt)

          The answer is: 

          Error: This command needs 1 argument: chart name

          What should I do?

          1. Are you installing D or E release? Are you using helm 3?

            # helm version
            version.BuildInfo{Version:"v3.8.0", GitCommit:"d14138609b01886f544b2025f5000351c9eb092e", GitTreeState:"clean", GoVersion:"go1.17.5"}

            Make sure you run the install command in the correct directory

            # ls
            charts Chart.yaml requirements.lock requirements.yaml templates values.yaml

            # helm install xapp-onboarder . -n ricplt
            NAME: xapp-onboarder
            LAST DEPLOYED: Tue Feb 22 13:25:30 2022
            NAMESPACE: ricplt
            STATUS: deployed
            REVISION: 1
            TEST SUITE: None


            The name is xapp-onboarder for the chart, but you can also try with:


            # helm install --generate-name . -n ricplt

            NAME: chart-1645627236
            LAST DEPLOYED: Wed Feb 23 09:40:38 2022
            NAMESPACE: ricplt
            STATUS: deployed
            REVISION: 1
            TEST SUITE: None

            To uninstall then:

            #helm uninstall chart-1645627236 -n ricplt




            1. Hi Federico, 

              I changed to helm3 and followed your instructions, and it shows "ImagePullBackOff" for xapp-onboarder.

              One thing I want to mention: after I changed to helm3, I ran the command "helm repo add local http://127.0.0.1:8879/charts".

              Did I do a wrong step?

        2. Anonymous

          Hi Federico,


          I want to use dms_cli to onboard xApps, but I can't find the schema for the AD xApp and the QP xApp.
          Where can I find them?

          1. Here is a trick, use a default schema available on appmgr:


            git clone "https://gerrit.o-ran-sc.org/r/ric-plt/appmgr"

            cd appmgr/xapp_orchestrater/dev/docs/xapp_onboarder/guide/


            There is a file called embedded-schema.json; use that one for AD and QP.
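            Onboarding can then look roughly like this (the descriptor path is an example from the AD repo, and the flag spelling mirrors the dms_cli command used elsewhere on this page):

            dms_cli onboard --config_file_path=./ad/xapp-descriptor/config.json --shcema_file_path=./appmgr/xapp_orchestrater/dev/docs/xapp_onboarder/guide/embedded-schema.json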

  69. Hi all,
    currently I'm trying to connect the OSC near-RT RIC with the FlexRIC E2 agent.

    Can someone help me to get the correct IP address of the OSC near-RT RIC? (host address? E2mgr address? pod address?)
    I need to put the correct IP into the FlexRIC config to exchange E2 subscriptions between them.

    Both are running on the same machine, so the host address is 127.0.0.1 for both.

    1. You need to use the SCTP port of e2term pod:


      # kubectl get svc -n ricplt | grep sctp
      service-ricplt-e2term-sctp-alpha NodePort 172.30.231.94 <none> 36422:32222/SCTP 11d

      It's a NodePort, so depending on where the FlexRIC E2 agent runs you can either connect to the service IP or the node IP.

      If it's the service it will be: 172.30.231.94 port 36422


      For the node in my case:

      #kubectl get pods -n ricplt -o wide | grep e2mgr
      deployment-ricplt-e2mgr-64d8cccc4d-mn4kw 1/1 Running 0 11d 10.130.0.64 node9.mylab <none> <none>


      IP: 10.130.0.64  port: 32222


      Hope this helps.


  70. Hello everyone. When I try to develop a local e2sim to connect to e2term over SCTP, e2term gives an error log.

    {"ts":1646011735029,"crit":"ERROR","id":"E2Terminator","mdc":{"PID":"139868256446208"},"msg":"epoll error, events 8 on fd 28, RAN NAME : gnb_030_133_303030"}

    The e2sim runtime environment is Ubuntu 18.04.

    The RIC runtime environment is Ubuntu 18.04.

    Has anyone met this problem?

  71. When I run the kpm_sim, I can connect to e2term and get a correct setup response. But after about 10 seconds, e2term resets the connection.

    Following is the kpm_sim log:


    Starting KPM processor simJSON Test
    kpm0
    kpm0.9
    kpm2
    kpm3
    ret is 0
    kpm4
    kpm5
    kpm6
    <E2SM-KPM-RANfunction-Description>
    <ranFunction-Name>
    <ranFunction-ShortName>ORAN-E2SM-KPM</ranFunction-ShortName>
    <ranFunction-E2SM-OID>OID123</ranFunction-E2SM-OID>
    <ranFunction-Description>KPM monitor</ranFunction-Description>
    <ranFunction-Instance>1</ranFunction-Instance>
    </ranFunction-Name>
    <e2SM-KPM-RANfunction-Item>
    <ric-EventTriggerStyle-List>
    <RIC-EventTriggerStyle-List>
    <ric-EventTriggerStyle-Type>1</ric-EventTriggerStyle-Type>
    <ric-EventTriggerStyle-Name>Periodic report</ric-EventTriggerStyle-Name>
    <ric-EventTriggerFormat-Type>5</ric-EventTriggerFormat-Type>
    </RIC-EventTriggerStyle-List>
    </ric-EventTriggerStyle-List>
    <ric-ReportStyle-List>
    <RIC-ReportStyle-List>
    <ric-ReportStyle-Type>1</ric-ReportStyle-Type>
    <ric-ReportStyle-Name>O-DU Measurement Container for the 5GC connected deployment</ric-ReportStyle-Name>
    <ric-IndicationHeaderFormat-Type>1</ric-IndicationHeaderFormat-Type>
    <ric-IndicationMessageFormat-Type>1</ric-IndicationMessageFormat-Type>
    </RIC-ReportStyle-List>
    <RIC-ReportStyle-List>
    <ric-ReportStyle-Type>2</ric-ReportStyle-Type>
    <ric-ReportStyle-Name>O-DU Measurement Container for the EPC connected deployment</ric-ReportStyle-Name>
    <ric-IndicationHeaderFormat-Type>1</ric-IndicationHeaderFormat-Type>
    <ric-IndicationMessageFormat-Type>1</ric-IndicationMessageFormat-Type>
    </RIC-ReportStyle-List>
    <RIC-ReportStyle-List>
    <ric-ReportStyle-Type>3</ric-ReportStyle-Type>
    <ric-ReportStyle-Name>O-CU-CP Measurement Container for the 5GC connected deployment</ric-ReportStyle-Name>
    <ric-IndicationHeaderFormat-Type>1</ric-IndicationHeaderFormat-Type>
    <ric-IndicationMessageFormat-Type>1</ric-IndicationMessageFormat-Type>
    </RIC-ReportStyle-List>
    <RIC-ReportStyle-List>
    <ric-ReportStyle-Type>4</ric-ReportStyle-Type>
    <ric-ReportStyle-Name>O-CU-CP Measurement Container for the EPC connected deployment</ric-ReportStyle-Name>
    <ric-IndicationHeaderFormat-Type>1</ric-IndicationHeaderFormat-Type>
    <ric-IndicationMessageFormat-Type>1</ric-IndicationMessageFormat-Type>
    </RIC-ReportStyle-List>
    <RIC-ReportStyle-List>
    <ric-ReportStyle-Type>5</ric-ReportStyle-Type>
    <ric-ReportStyle-Name>O-CU-UP Measurement Container for the 5GC connected deployment</ric-ReportStyle-Name>
    <ric-IndicationHeaderFormat-Type>1</ric-IndicationHeaderFormat-Type>
    <ric-IndicationMessageFormat-Type>1</ric-IndicationMessageFormat-Type>
    </RIC-ReportStyle-List>
    <RIC-ReportStyle-List>
    <ric-ReportStyle-Type>6</ric-ReportStyle-Type>
    <ric-ReportStyle-Name>O-CU-UP Measurement Container for the EPC connected deployment</ric-ReportStyle-Name>
    <ric-IndicationHeaderFormat-Type>1</ric-IndicationHeaderFormat-Type>
    <ric-IndicationMessageFormat-Type>1</ric-IndicationMessageFormat-Type>
    </RIC-ReportStyle-List>
    </ric-ReportStyle-List>
    </e2SM-KPM-RANfunction-Item>
    </E2SM-KPM-RANfunction-Description>
    er encded is 489
    after encoding message
    here is encoded message ᅬRAN-E2SM-KPM
    this is the char array ᅬRAN-E2SM-KPM
    !!!lenth of ranfuncdesc is 15
    value of this index is 32
    value of this index is 192
    value of this index is 79
    value of this index is 82
    value of this index is 65
    value of this index is 78
    value of this index is 45
    value of this index is 77
    value of this index is 0
    value of this index is 32
    value of this index is 102
    %%about to register e2sm func desc for 0
    %%about to register callback for subscription for func_id 0
    Start E2 Agent (E2 Simulator
    After reading input options
    [SCTP] Binding client socket to source port 36422
    [SCTP] Connecting to server at 192.168.191.100:32222 ...
    [SCTP] Connection established
    After starting client
    client_fd value is 3
    looping through ran func
    about to call setup request encode
    After generating e2setup req
    <E2AP-PDU>
    <initiatingMessage>
    <procedureCode>1</procedureCode>
    <criticality><reject/></criticality>
    <value>
    <E2setupRequest>
    <protocolIEs>
    <E2setupRequestIEs>
    <id>3</id>
    <criticality><reject/></criticality>
    <value>
    <GlobalE2node-ID>
    <gNB>
    <global-gNB-ID>
    <plmn-id>37 34 37</plmn-id>
    <gnb-id>
    <gnb-ID>
    10110101110001100111011110001
    </gnb-ID>
    </gnb-id>
    </global-gNB-ID>
    </gNB>
    </GlobalE2node-ID>
    </value>
    </E2setupRequestIEs>
    <E2setupRequestIEs>
    <id>10</id>
    <criticality><reject/></criticality>
    <value>
    <RANfunctions-List>
    <ProtocolIE-SingleContainer>
    <id>8</id>
    <criticality><reject/></criticality>
    <value>
    <RANfunction-Item>
    <ranFunctionID>0</ranFunctionID>
    <ranFunctionDefinition>
    20 C0 4F 52 41 4E 2D 45 32 53 4D 2D 4B 50 4D 00
    00 05 4F 49 44 31 32 33 05 00 4B 50 4D 20 6D 6F
    6E 69 74 6F 72 08 D9 EC 71 16 0D 6B 6D 00 60 00
    01 01 07 00 50 65 72 69 6F 64 69 63 20 72 65 70
    6F 72 74 01 05 14 01 01 1D 00 4F 2D 44 55 20 4D
    65 61 73 75 72 65 6D 65 6E 74 20 43 6F 6E 74 61
    69 6E 65 72 20 66 6F 72 20 74 68 65 20 35 47 43
    20 63 6F 6E 6E 65 63 74 65 64 20 64 65 70 6C 6F
    79 6D 65 6E 74 01 01 01 01 00 01 02 1D 00 4F 2D
    44 55 20 4D 65 61 73 75 72 65 6D 65 6E 74 20 43
    6F 6E 74 61 69 6E 65 72 20 66 6F 72 20 74 68 65
    20 45 50 43 20 63 6F 6E 6E 65 63 74 65 64 20 64
    65 70 6C 6F 79 6D 65 6E 74 01 01 01 01 00 01 03
    1E 80 4F 2D 43 55 2D 43 50 20 4D 65 61 73 75 72
    65 6D 65 6E 74 20 43 6F 6E 74 61 69 6E 65 72 20
    66 6F 72 20 74 68 65 20 35 47 43 20 63 6F 6E 6E
    65 63 74 65 64 20 64 65 70 6C 6F 79 6D 65 6E 74
    01 01 01 01 00 01 04 1E 80 4F 2D 43 55 2D 43 50
    20 4D 65 61 73 75 72 65 6D 65 6E 74 20 43 6F 6E
    74 61 69 6E 65 72 20 66 6F 72 20 74 68 65 20 45
    50 43 20 63 6F 6E 6E 65 63 74 65 64 20 64 65 70
    6C 6F 79 6D 65 6E 74 01 01 01 01 00 01 05 1E 80
    4F 2D 43 55 2D 55 50 20 4D 65 61 73 75 72 65 6D
    65 6E 74 20 43 6F 6E 74 61 69 6E 65 72 20 66 6F
    72 20 74 68 65 20 35 47 43 20 63 6F 6E 6E 65 63
    74 65 64 20 64 65 70 6C 6F 79 6D 65 6E 74 01 01
    01 01 00 01 06 1E 80 4F 2D 43 55 2D 55 50 20 4D
    65 61 73 75 72 65 6D 65 6E 74 20 43 6F 6E 74 61
    69 6E 65 72 20 66 6F 72 20 74 68 65 20 45 50 43
    20 63 6F 6E 6E 65 63 74 65 64 20 64 65 70 6C 6F
    79 6D 65 6E 74 01 01 01 01
    </ranFunctionDefinition>
    <ranFunctionRevision>2</ranFunctionRevision>
    </RANfunction-Item>
    </value>
    </ProtocolIE-SingleContainer>
    </RANfunctions-List>
    </value>
    </E2setupRequestIEs>
    </protocolIEs>
    </E2setupRequest>
    </value>
    </initiatingMessage>
    </E2AP-PDU>
    After XER Encoding
    error length 0
    error buf
    er encded is 529
    in sctp send data func
    data.len is 529after getting sent_len
    [SCTP] Sent E2-SETUP-REQUEST
    [SCTP] Waiting for SCTP data
    receive data1
    receive data2
    receive data3
    [SCTP] Received new data of size 33
    in e2ap_handle_sctp_data()
    decoding...
    full buffer

    length of data 33
    result 0
    index is 2
    showing xer of data
    <E2AP-PDU>
    <successfulOutcome>
    <procedureCode>1</procedureCode>
    <criticality><reject/></criticality>
    <value>
    <E2setupResponse>
    <protocolIEs>
    <E2setupResponseIEs>
    <id>4</id>
    <criticality><reject/></criticality>
    <value>
    <GlobalRIC-ID>
    <pLMN-Identity>13 10 14</pLMN-Identity>
    <ric-ID>
    10101010110011001110
    </ric-ID>
    </GlobalRIC-ID>
    </value>
    </E2setupResponseIEs>
    <E2setupResponseIEs>
    <id>9</id>
    <criticality><reject/></criticality>
    <value>
    <RANfunctionsID-List>
    <ProtocolIE-SingleContainer>
    <id>6</id>
    <criticality><ignore/></criticality>
    <value>
    <RANfunctionID-Item>
    <ranFunctionID>0</ranFunctionID>
    <ranFunctionRevision>2</ranFunctionRevision>
    </RANfunctionID-Item>
    </value>
    </ProtocolIE-SingleContainer>
    </RANfunctionsID-List>
    </value>
    </E2setupResponseIEs>
    </protocolIEs>
    </E2setupResponse>
    </value>
    </successfulOutcome>
    </E2AP-PDU>
    [E2AP] Unpacked E2AP-PDU: index = 2, procedureCode = 1

    [E2AP] Received SETUP-RESPONSE-SUCCESS
    receive data1
    receive data2
    receive data3
    [SCTP] recv: Connection reset by peer


    And the e2term gives an error log.


    {"ts":1646011735029,"crit":"ERROR","id":"E2Terminator","mdc":{"PID":"139868256446208"},"msg":"epoll error, events 8 on fd 28, RAN NAME : gnb_030_133_303030"}



    1. This is a very important question! Did anyone meet this problem?

      Thank you for your attention!

      1. I don't know what the reason is, but I found another way to avoid the problem. I run this sim in a docker container, and then the problem disappears! Maybe it's a local network problem. I hope someone can figure it out.

        1. This is most probably happening because the SCTP connection between the E2 Termination (Near-RT RIC) and the E2 Agent (E2Sim) is not kept alive. You can check in the E2 manager logs if the RAN is getting disassociated right after the E2Response message is sent. Usually, the E2Termination maintains the SCTP connection by periodically sending Heartbeat (HB) messages to the E2 agent at E2Sim. But, if you are using Flannel as your overlay network, the HB messages are not routed properly if you haven't bound the IP address of the E2Sim along with the port to the socket. Flannel won't know where to send this HeartBeat packet, so it sends it to any of the docker containers within the cluster. But since they might not be configured to receive SCTP packets, an ABORT is issued which closes the SCTP connection at the E2 Termination. You may need to perform a 'tcpdump' at the virtual network interface being used by E2 Termination to verify if this is the case. 

          But if you deploy it as a docker container, Flannel is able to route the SCTP packets to the correct destination. So maybe that's why it's working.


          TLDR: Try binding the socket to both IP address and port or you could use Calico instead of Flannel since Calico works at Layer 3 as well and is more powerful. 
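          A sketch of the capture mentioned above (the interface name is an assumption; use whichever flannel/CNI interface the E2 Termination traffic actually goes through):

          sudo tcpdump -i flannel.1 -nn sctp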



          1. Thanks for your reply. I understand what you say. I don't know why the e2sim can send a correct setup response while the heartbeats can't be routed properly. Don't they use the same IP route?

  72. Hi, is it possible to detect an anomaly in a running pod by inserting measurements into InfluxDB? I found some anomaly logs in the AD xApp, but I just want to know how those anomalies are detected and how I can input measurements to generate another anomaly. Thanks

  73. Anonymous

    Hello,


    I am trying to set up the near-RT RIC installation and a test xApp, but I get stuck at step 5: I cannot onboard the test app. I do:

    curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download"  --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"

    and I cannot do anything more (verification of onboarded applications, and step 6), because the command does not finish; it just hangs.


    One question: what am I not doing, or what is wrong? Sorry for the question (it has to be something basic), and thank you. Best regards,


    Tomás

    1. I'm facing the same issue while onboarding the xApp.

  74. Hello everyone.

    I installed the RIC and did not receive any errors (16 pods were there). However, I have a problem: some of the pods show the status "Evicted".

    I am trying to get everything running again, but I do not know how, and I do not know why they are evicted.

    Do you have any idea?

    Thank you

  75. Hello everyone,

    Is there any available e2-sim for testing e2-term function's integrity?

    Thanks!

    1. You can follow the example of TS xApp:

      https://wiki.o-ran-sc.org/display/IAT/Traffic+Steering+Flows

      It includes an E2sim and a KPImon for testing the E2term function.

  76. Anonymous

    Hello everyboy,

    I have a problem. Everything was OK until deploying the RIC using the recipe (Step 4).

    I got all the pods running except the Kong image:



    root@ubuntu:~/dep/bin# kubectl get pods -n ricplt

    NAME READY STATUS RESTARTS AGE

    deployment-ricplt-a1mediator-64694d95f4-bfhtp 1/1 Running 0 6m27s

    deployment-ricplt-alarmadapter-6cb79996ff-68mtl 1/1 Running 0 5m17s

    deployment-ricplt-appmgr-65d4c44c85-fvwkd 1/1 Running 0 7m23s

    deployment-ricplt-e2mgr-7cdfcf984b-rqppc 1/1 Running 0 6m55s

    deployment-ricplt-e2term-alpha-846fcb64cd-vgqqr 1/1 Running 0 6m41s

    deployment-ricplt-jaegeradapter-54f7849876-k7k48 1/1 Running 0 5m45s

    deployment-ricplt-o1mediator-8b6684b7c-vzzkr 1/1 Running 0 5m31s

    deployment-ricplt-rtmgr-8c77886f6-fphj6 1/1 Running 2 7m9s

    deployment-ricplt-submgr-5f8df875f7-9wkgw 1/1 Running 0 6m13s

    deployment-ricplt-vespamgr-589bbb7f7b-k4jfs 1/1 Running 0 5m59s

    deployment-ricplt-xapp-onboarder-56bc4855b-ztppq 2/2 Running 0 7m37s

    r4-infrastructure-kong-86bfd9f7c5-ntn72 1/2 ImagePullBackOff 0 8m5s

    r4-infrastructure-prometheus-alertmanager-54c67c94fd-cxx2b 2/2 Running 0 8m5s

    r4-infrastructure-prometheus-server-6f5c784c4c-x7bw5 1/1 Running 0 8m5s

    statefulset-ricplt-dbaas-server-0 1/1 Running


    Do you know how I can solve this? Regards

  77. Anonymous

    Hi,

    I can onboard the xApps, but when I try to deploy, it always fails saying the upstream server is timing out:

    curl --location --request POST "http://localhost:32088/appmgr/ric/v1/xapps" --header 'Content-Type: application/json' --data-raw '{"xappName": "hwapp"}'
    The upstream server is timing out

    Do you know what I can do?

    Thank you a lot in advance.


  78. Hi,

    I successfully installed the near-RT RIC, but I'm getting an error onboarding an xApp:


    #dms_cli onboard --config_file_path=CONFIG_FILE_PATH --shcema_file_path=SCHEMA_FILE_PATH
    

    "error_message": "xApp mcxapp-1.0.2 distribution failed. (Caused by: Upload helm chart failed. Helm repo return status code: 500 {\"error\":\"mkdir /charts: permission denied\"})",
    "error_source": "xApp_builder",
    "status": "xApp onboarding failed"

    Kindly help how to debug the above error!

  79. Anonymous

    Shankar,

    I used the page https://blog.csdn.net/jeffyko/article/details/107426626 to onboard the applications, and it worked, but I fail to deploy them. I hope that this solves your problem.

    Regards,


    Tomas

    1. Thanks Tomas for sharing! I will try out these steps.


  80. I tried to follow the install procedure here: https://docs.o-ran-sc.org/projects/o-ran-sc-ric-plt-ric-dep/en/latest/installation-guides.html
    Which boils down to:

    Install Ubuntu 20.04

    git clone "https://gerrit.o-ran-sc.org/r/ric-plt/ric-dep"

    cd ric-dep/bin
    ./install_k8s_and_helm.sh

    I tried this on Ubuntu instances on bare metal, AWS and GCP, but in all cases I can't bring the k8s cluster up. Basically, the coredns pods stay in Pending state:


    $ sudo kubectl get pods -A
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5644d7b6d9-7m22l 0/1 Pending 0 4m45s
    kube-system coredns-5644d7b6d9-p4lnh 0/1 Pending 0 4m45s
    kube-system etcd-mytestgw 1/1 Running 0 3m48s
    kube-system kube-apiserver-mytestgw 1/1 Running 0 3m55s
    kube-system kube-controller-manager-mytestgw 1/1 Running 0 3m57s
    kube-system kube-proxy-778m8 1/1 Running 0 4m45s
    kube-system kube-scheduler-mytestgw 1/1 Running 0 4m6s

    The reason for this is that the pods can't get scheduled because of taints on the node.


    $ sudo kubectl describe pod coredns-5644d7b6d9-7m22l -n kube-system
    Name: coredns-5644d7b6d9-7m22l
    ...
    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Warning FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
    Warning FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.


    Looking into the node, it seems there is an issue with the CNI config. As I started on AWS, I followed some similar online issues and fixes, but nothing worked, so I tried the same on bare metal and GCP as well. Not sure where to go from here... See below for the node condition:


    KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady \

    message:docker: network plugin is not ready: cni config uninitialized

    kubectl describe node -n kube-system
    Name:               mytestgw
    Roles:              master
    Labels:             beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        kubernetes.io/arch=amd64
                        kubernetes.io/hostname=mygw
                        kubernetes.io/os=linux
                        node-role.kubernetes.io/master=
    Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                        node.alpha.kubernetes.io/ttl: 0
                        volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp:  Mon, 18 Jul 2022 15:28:38 -0400
    Taints:             node-role.kubernetes.io/master:NoSchedule
                        node.kubernetes.io/not-ready:NoSchedule
    Unschedulable:      false
    Conditions:
      Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
      ----             ------  -----------------                 ------------------                ------                       -------
      MemoryPressure   False   Mon, 18 Jul 2022 15:41:41 -0400   Mon, 18 Jul 2022 15:28:35 -0400   KubeletHasSufficientMemory   kubelet has sufficient memory available
      DiskPressure     False   Mon, 18 Jul 2022 15:41:41 -0400   Mon, 18 Jul 2022 15:28:35 -0400   KubeletHasNoDiskPressure     kubelet has no disk pressure
      PIDPressure      False   Mon, 18 Jul 2022 15:41:41 -0400   Mon, 18 Jul 2022 15:28:35 -0400   KubeletHasSufficientPID      kubelet has sufficient PID available
      Ready            False   Mon, 18 Jul 2022 15:41:41 -0400   Mon, 18 Jul 2022 15:28:35 -0400   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
    Addresses:
      InternalIP:  192.168.2.70
      Hostname:    mytestgw
    Capacity:
     cpu:                4
     ephemeral-storage:  56907632Ki
     hugepages-1Gi:      0
     hugepages-2Mi:      0
     memory:             3905540Ki
     pods:               110
    Allocatable:
     cpu:                4
     ephemeral-storage:  52446073565
     hugepages-1Gi:      0
     hugepages-2Mi:      0
     memory:             3803140Ki
     pods:               110
    System Info:
     Machine ID:                 53f1d95d4b5d4e1f94d2cd8b3a590079
     System UUID:                1273ee80-e297-957f-0000-000000000000
     Boot ID:                    65e9d7dd-7d2f-44d9-a0be-b99b84e10b9f
     Kernel Version:             5.4.0-100-generic
     OS Image:                   Ubuntu 20.04.3 LTS
     Operating System:           linux
     Architecture:               amd64
     Container Runtime Version:  docker://20.10.12
     Kubelet Version:            v1.16.0
     Kube-Proxy Version:         v1.16.0
    PodCIDR:                     10.244.0.0/24
    PodCIDRs:                    10.244.0.0/24
    Non-terminated Pods:         (5 in total)
      Namespace                  Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
      ---------                  ----                                ------------  ----------  ---------------  -------------  ---
      kube-system                etcd-mytestgw                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
      kube-system                kube-apiserver-mytestgw             250m (6%)     0 (0%)      0 (0%)           0 (0%)         12m
      kube-system                kube-controller-manager-mytestgw    200m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
      kube-system                kube-proxy-778m8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
      kube-system                kube-scheduler-mytestgw             100m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
    Allocated resources:
      (Total limits may be over 100 percent, i.e., overcommitted.)
      Resource           Requests    Limits
      --------           --------    ------
      cpu                550m (13%)  0 (0%)
      memory             0 (0%)      0 (0%)
      ephemeral-storage  0 (0%)      0 (0%)
    Events:
      Type    Reason                   Age                From                  Message
      ----    ------                   ----               ----                  -------
      Normal  NodeHasSufficientMemory  13m (x7 over 13m)  kubelet, mytestgw     Node mytestgw status is now: NodeHasSufficientMemory
      Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet, mytestgw     Node mytestgw status is now: NodeHasNoDiskPressure
      Normal  NodeHasSufficientPID     13m (x8 over 13m)  kubelet, mytestgw     Node mytestgw status is now: NodeHasSufficientPID
      Normal  NodeAllocatableEnforced  13m                kubelet, mytestgw     Updated Node Allocatable limit across pods
      Normal  Starting                 13m                kube-proxy, mytestgw  Starting kube-proxy.


    1. I was able to get around this by changing the Kubernetes version to 1.18.0 ("install_k8s_and_helm.sh -k 1.18.0") and changing one line in the file install_k8s_and_helm.sh, because apparently there is one fewer system pod in the new version:

           wait_for_pods_running 7 kube-system  (instead of     wait_for_pods_running 8 kube-system)

      After that, the helm charts etc. install with one issue on the tiller pod, which has a workaround described in earlier comments:

      kubectl edit deploy deployment-tiller-ricxapp -n ricinfra

      change from:
      image: gcr.io/kubernetes-helm/tiller:v2.12.3
      to:
      image: ghcr.io/helm/tiller:v2.17.0
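
      An equivalent non-interactive alternative (a sketch; the '*' wildcard updates all containers in the deployment) would be:

      kubectl -n ricinfra set image deployment/deployment-tiller-ricxapp '*=ghcr.io/helm/tiller:v2.17.0'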


      To deploy xApps, one has to use dms_cli (not the methods described here). The following worked for me:

      sudo apt install python3-pip
      git clone https://github.com/o-ran-sc/ric-app-hw
      git clone https://github.com/o-ran-sc/ric-app-qp-driver
      git clone "https://gerrit.o-ran-sc.org/r/ric-plt/appmgr"
      cd appmgr/xapp_orchestrater/dev/xapp_onboarder
      sudo pip3 install ./
      
      sudo -i
      #### ALL below as root
      docker run --rm -u 0 -it -d -p 8090:8080 -e DEBUG=1 -e STORAGE=local -e STORAGE_LOCAL_ROOTDIR=/charts -v $(pwd)/charts:/charts chartmuseum/chartmuseum:latest
      export CHART_REPO_URL=http://0.0.0.0:8090
      
      dms_cli onboard /home/ubuntu/ric-app-hw/init/config-file.json /home/ubuntu/ric-app-hw/init/schema.json
      
      curl -X GET http://localhost:8090/api/charts | jq .
      dms_cli install hwxapp 1.0.0 ricxapp
      kubectl get pods -n ricxapp


      Hope this helps.

      1. Hi Jatin Desai,
        everything you mentioned working fine. Thanks for that

        curl -X GET http://localhost:8090/api/charts | jq .
        % Total % Received % Xferd Average Speed Time Time Time Current
        Dload Upload Total Spent Left Speed
        100 817 100 817 0 0 159k 0 --:--:-- --:--:-- --:--:-- 159k
        {
        "hwxapp": [
        {
        "name": "hwxapp",
        "version": "1.0.0",
        "description": "Standard xApp Helm Chart",
        "apiVersion": "v1",
        "appVersion": "1.0",
        "urls": [
        "charts/hwxapp-1.0.0.tgz"
        ],
        "created": "2022-07-25T03:29:17.078942482Z",
        "digest": "b4603e72132db2c01fef71ab82da80a6f3a01df0c532184ad87222155087cffa"
        }
        ],
        "ueec": [
        {
        "name": "ueec",
        "version": "2.0.0",
        "description": "Standard xApp Helm Chart",
        "apiVersion": "v1",
        "appVersion": "1.0",
        "urls": [
        "charts/ueec-2.0.0.tgz"
        ],
        "created": "2020-11-18T05:42:18.708541492Z",
        "digest": "cea7ed0a26dea4e24f44e8f281c5e4ad22029ba6ff891eededc8bba1d4cd2ee6"
        },
        {
        "name": "ueec",
        "version": "1.0.0",
        "description": "Standard xApp Helm Chart",
        "apiVersion": "v1",
        "appVersion": "1.0",
        "urls": [
        "charts/ueec-1.0.0.tgz"
        ],
        "created": "2020-11-18T05:31:16.586130707Z",
        "digest": "f7be73dfcd8962eaeda5a042f484fae7d9ddf164d6fa129679c18006105888df"
        }
        ]
        }

        But while installing hwxapp we are getting an error like the one below:

        dms_cli install hwxapp 1.0.0 ricxapp
        Error: This command needs 1 argument: chart name

        status: NOT_OK

        If you have any suggestions, please let us know.

        Thanks a lot

        1. I am facing the same issue. 

          Any resolution for this issue?

  81. Hi,

    I have installed RIC and all the pods are up and running. I also built e2sim docker as per https://wiki.o-ran-sc.org/display/SIM/E2+Simulator#E2Simulator-Buildingdockerimageandrunningsimulatorinstance .

    But when I run the e2sim docker, e2sim sends a request to e2term, but e2term fails to decode the message with the following log:

    {"ts":1659596076910,"crit":"ERROR","id":"E2Terminator","mdc":{"PID":"140350429570816","POD_NAME":"deployment-ricplt-e2term-alpha-7f64b9cc44-k7k6v","CONTAINER_NAME":"container-ricplt-e2term","SERVICE_NAME":"RIC_E2_TERM","HOST_NAME":"oran","SYSTEM_NAME":"SEP"},"msg":"Error 2 Decoding (unpack) E2AP PDU from RAN : "}

    The version of e2term is : ric-plt-e2:5.5.0 and e2 simulator is built with latest code.

    I think this may be because of a version mismatch of E2AP (v1 vs. v2). Can someone help me with this?

    Thanks,
    Himanshu
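
    One way to confirm which E2term build (and hence which E2AP version) is actually running is to read the image tag off the deployment; a sketch, assuming the deployment name shown elsewhere on this page:

    $ kubectl get deployment -n ricplt deployment-ricplt-e2term-alpha -o jsonpath='{.spec.template.spec.containers[*].image}'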

    1. Hi,


      I am currently running into this same issue. Have you gotten past this stage? I would really appreciate it if you shared your solution.


      Thanks,

      Santeri

  82. Anonymous

    Hi..

    ./install -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe_latest_stable.yaml
    ****************************************************************************************************************
    ERROR
    ****************************************************************************************************************
    Can't locate the ric-common helm package in the local repo. Please make sure that it is properly installed.
    ****************************************************************************************************************

    I was getting the above-mentioned error while deploying the RIC, and I tried what Travis Machacek suggested but was unable to fix this issue. Can anyone help with this?

    1. Maybe a good start is to check the F release demo video and try repeating the steps: 2022-05-24 Release F

  83. Anonymous

    Hello,

    After running this command I am facing an error:

    ./install_common_templates_to_helm.sh

    Error: plugin already exists
    /root/.cache/helm/repository
    nohup: appending output to 'nohup.out'
    servcm up and running
    /root/.cache/helm/repository
    Successfully packaged chart and saved it to: /tmp/ric-common-3.3.2.tgz
    Error: no repo named "local" found
    Error: looks like "http://127.0.0.1:8879/charts" is not a valid chart repository or cannot be reached: failed to fetch http://127.0.0.1:8879/charts/index.yaml : 404 Not Found
    checking that ric-common templates were added
    No results found

    Any suggestion on this?

    1. try uninstalling already-installed plugins before running the script, e.g. using "helm plugin uninstall servecm". Also better to kill existing servecm processes as well:

      ps -ef|grep servecm

      <kill all process that come up from the ps command using kill -9>

      helm plugin ls

      <should return empty list before running the "install_common_templates_to_helm.sh" script>
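
      Put together, a minimal cleanup sketch (assuming the servecm plugin was installed by the script) is:

      helm plugin uninstall servecm
      pkill -9 -f servecm      # kills any servecm processes that "ps -ef | grep servecm" still shows
      helm plugin ls           # should print an empty list before re-running install_common_templates_to_helm.sh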

      1. Thanks @Thoralf Czichy for the steps; they fixed the first error.

        I can still see the below-mentioned errors:

        Error: no repo named "local" found
        Error: looks like "http://127.0.0.1:8879/charts" is not a valid chart repository or cannot be reached: failed to fetch http://127.0.0.1:8879/charts/index.yaml : 404 Not Found
        checking that ric-common templates were added
        No results found


        1. is there any chartmuseum instance listening? You should see that via netstat:
          apt install net-tools

          netstat -nptl |grep chartmuseum

          Also check the file "nohup.out".

          Try executing the steps from the script one by one.

          1. Anonymous

            I have gotten past these errors by running the commands from the script manually and also running the other scripts, e.g. "setup-ric-common-template" & "verify-ric-charts". But now the error has moved on to not being able to fetch the images, e.g.:

            root@osc-ric:~/ric-dep# docker pull nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1:2.5.2
            Error response from daemon: Get "https://nexus3.o-ran-sc.org:10002/v2/": context deadline exceeded


            This may be a firewall issue at our end. Can you suggest a way to bypass this? Many thanks!

            1. Probably a docker HTTP proxy is needed. Try:

              $ vi /etc/sysconfig/docker
              with content
              DOCKER_CERT_PATH=/etc/docker
              HTTP_PROXY=http://<ip>:<port>
              HTTPS_PROXY=http://<ip>:<port>
              NO_PROXY= <just-local-IP>

              $ vi /lib/systemd/system/docker.service

              in [service] section add EnvironmentFile and replace ExecStart line as below.

              EnvironmentFile=-/etc/sysconfig/docker
              ExecStart=/usr/bin/dockerd -H fd:// \
              --registry-mirror=https:<mirror> \
              --containerd=/run/containerd/containerd.sock \
              --insecure-registry="<local-ip>:<port>"

              $ systemctl daemon-reload

              $ service docker restart

              Try the proxy setting first. The registry-mirror you would only use if you have, e.g., a mirror for docker.io.
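
              After restarting docker, a quick sanity check (a sketch) is to confirm the proxy settings were picked up and then retry the pull:

              $ docker info | grep -i proxy
              $ docker pull nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1:2.5.2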

  84. Anonymous

    Hi,

    I'm able to deploy RIC and connect the docker with the simulator instance. However, when installing the HW xApp I get the following error:

    Failed to connect to helm chart repo http://0.0.0.0:8080 after 3 retries and 1.8044500350952148 seconds. (Caused by: ConnectionError)

    <curl -X GET http://localhost:8090/api/charts | jq> shows the following status:

    % Total % Received % Xferd Average Speed Time Time Time Current
    Dload Upload Total Spent Left Speed
    100 277 100 277 0 0 270k 0 --:--:-- --:--:-- --:--:-- 270k
    {
    "hw-go": [
    {
    "name": "hw-go",
    "version": "1.0.0",
    "description": "Standard xApp Helm Chart",
    "apiVersion": "v1",
    "appVersion": "1.0",
    "urls": [
    "charts/hw-go-1.0.0.tgz"
    ],
    "created": "2022-09-09T12:36:36.835817143Z",
    "digest": "6c4d658b7ff5d2bfd10d97e426d12a32d4aa4e6d8b380964577689d1bba2bea2"
    }
    ]
    }

    Is this something with the port?

    1. Is there a mismatch in the URLs (ports 8090 and 8080 are both mentioned)? Make sure the environment variable CHART_REPO_URL is set correctly, e.g.,

      "export CHART_REPO_URL=http://0.0.0.0:8090"

      1. Anonymous

        No, there is no mismatch. The environment variable is configured like this: "export CHART_REPO_URL=http://0.0.0.0:8090". But the API shows status on port 8080. Should both of them be working on port 8080? Also, config-file.json has port number 8080 mentioned.


        1. I have the exact same problem. Did you find any solution?

          1. I also have the same issue. Any solution here?

  85. Hello,

    ./install -f ../RECIPE_EXAMPLE/example_recipe_latest_stable.yaml
    When I tried executing the above command I faced an issue, which is mentioned below:

    Error: release r4-appmgr failed: namespaces "ricplt" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "namespaces" in API group "" in the namespace "ricplt"

    Can anyone please help with this?

  86. Hi,

    In the script example_recipe_latest_stable.yaml there is a line "eval $(helm env | grep HELM_REPOSITORY_CACHE)", and when I tried running it I encountered the error mentioned below:

    Error: unknown command "env" for "helm"

    Any suggestions on this, please?

  87. Hi Guys,

    I successfully installed the RIC platform, but I found that the influxDB pod is not installed. Do I need to install it separately or am I missing something? Please suggest. Thanks

    1. Hi Rajesh,


      You have to explicitly request its installation.

      For this, you need to add the influxdb component to the "COMPONENTS" line in the file "dep/ric-dep/bin/install". Then install the RIC platform.


      After RIC installation, you have to create a persistent volume:

      1. Create a file pv.yaml (e.g. with "nano pv.yaml")
      2. Add this into the file:    
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: ricplt-influxdb-meta-0
        labels:
          name: local
      spec:
        capacity:
          storage: 8Gi
        accessModes:
          - ReadWriteOnce
        hostPath:
          path: "/mnt/data"
      

            3. Apply the persistent volume

      kubectl apply -f pv.yaml
      kubectl get pv
      




      BR,

      Fran


      1. The e-release demo ( 2021-12-21 Release E ) has this topic in the beginning starting at minute 1 or so. Also note that there's an .../bin/install -c argument for passing the list of optional components. The it/dev guide (which generally might be outdated) still contains the instructions for creating a storage class that is used by the influxdb PV during install. https://docs.o-ran-sc.org/projects/o-ran-sc-it-dep/en/latest/installation-guides.html#ric-platform → see section "onetime setup for influxdb". But Fran's solution should also work.
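
        A minimal sketch of that -c variant, assuming -c accepts the full space-separated component list (the default list plus influxdb) and the recipe path used earlier on this page:

        cd ric-dep/bin
        ./install -f ../RECIPE_EXAMPLE/example_recipe_latest_stable.yaml -c "infrastructure dbaas appmgr rtmgr e2mgr e2term a1mediator submgr vespamgr o1mediator alarmmanager influxdb"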

        Also note that there are currently two reviews (Youhwan Seol) for updating InfluxDB to a newer version in the JIRA case RIC-919. If you have time to spare you could try these as well and give feedback on whether they work.

      2. Thanks Fran, the solution helped me to install influxDB.

  88. Hello everybody,

    We are trying to integrate the OSC near-RT RIC into our testbed. Everything in the installation of the RIC went smoothly, but when we try to connect our RAN (based on OpenAirInterface, which has already been tested and works with ONF's RIC), it establishes an SCTP connection with the RIC, but in the logs of E2mgr we see the following error:

    {"crit":"ERROR","ts":1679237580640,"id":"E2Manager","msg":"#RanDisconnectionManager.DisconnectRan - RAN name: enB_macro_310_411_000e00 - Failed fetching RAN from rNib. Error: #rNibReader.getByKeyAndUnmarshal - entity of type *entities.NodebInfo not found. Key: RAN:enB_macro_310_411_000e00","mdc":{"time":"2023-03-19 14:53:00.640"}}".

    I was looking at the documentation of the OSC near-RT RIC, but there is nothing mentioned about this. Has anybody faced this problem?

  89. The error seems to indicate that there was a problem already during E2 Setup. Do you have the complete logs of E2term and E2 manager? Do they indicate where the original problem might have been? It might also be useful to look at the pcap to see why a disconnect happens.

  90. Greetings ALL!

    I have followed the F-release demo video (link: 2022-05-24 Release F) and successfully deployed both the RIC and the E2 simulator. Currently I am trying to integrate the KPMMON and QP xApps with the deployed modules. Can anyone suggest where I can find the KPMMON xApp and QP xApp deployment guidelines?

    Thoralf Czichy can you please suggest?




  91. Hi,

    There seem to be some changes in the Kubernetes package repositories. https://github.com/kubernetes/release/issues/3485

    The new package repositories have packages for Kubernetes versions starting with v1.24.0, but the near-RT RIC install script requires v1.16.0. Is there any workaround?

    1. Hi,

      I am currently reporting that the Kubernetes version in the Shell script is outdated.

      As a bug reporter, I participated in the meeting where it was confirmed that this report would be reviewed, and an updated demo would be uploaded.

      RIC-1049 - Getting issue details... STATUS


  92. Hi,

    I am following the installation guide for the F release to install the RIC and E2 with the xApps. I made some changes in the script file to stop it downgrading my Kubernetes and was able to initialize Kubernetes with all the pods running. After that I was able to pull all the nexus images for ricplt; here are all the images that I pulled:

    REPOSITORY                                                TAG               IMAGE ID       CREATED         SIZE
    flannel/flannel                                           v0.24.3           f6f0ee58f497   8 days ago      78.6MB
    sonatype/nexus3                                           latest            2b77083fcf8c   9 days ago      569MB
    registry.k8s.io/kube-apiserver                            v1.28.7           eeb80ea66576   4 weeks ago     125MB
    registry.k8s.io/kube-scheduler                            v1.28.7           309c26d00629   4 weeks ago     59.1MB
    registry.k8s.io/kube-controller-manager                   v1.28.7           4d9d9de55f19   4 weeks ago     121MB
    registry.k8s.io/kube-proxy                                v1.28.7           123aa721f941   4 weeks ago     81.1MB
    flannel/flannel-cni-plugin                                v1.4.0-flannel1   77c1250c26d9   8 weeks ago     9.87MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1             3.1.1             3c3dd821a5fb   9 months ago    142MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-alarmmanager   0.5.15            46970a5c4cd3   9 months ago    109MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-dbaas          0.6.3             0f42d068b4a7   9 months ago    45.4MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-o1             0.6.2             08dbead79931   9 months ago    718MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-submgr         0.9.6             db800be1a186   9 months ago    159MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2             6.0.3             b97873ef89bb   9 months ago    261MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-rtmgr          0.9.5             56c6944edb85   9 months ago    156MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2mgr          6.0.2             08000c658dd5   9 months ago    157MB
    registry.k8s.io/etcd                                      3.5.9-0           73deb9a3f702   10 months ago   294MB
    hello-world                                               latest            d2c94e258dcb   10 months ago   13.3kB
    registry.k8s.io/coredns/coredns                           v1.10.1           ead0a4a53df8   13 months ago   53.6MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-appmgr         0.5.7             8109f32ad423   15 months ago   137MB
    registry.k8s.io/pause                                     3.9               e6f181688397   17 months ago   744kB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2             6.0.0             2c96c50b7944   21 months ago   254MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-e2mgr          6.0.0             5b867573256e   21 months ago   150MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1             2.5.2             57200c74cbd6   21 months ago   664MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-appmgr         0.5.6             12189533c487   21 months ago   133MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-o1             0.6.0             47c3d7b6c104   21 months ago   683MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-alarmmanager   0.5.13            b6654e03ae3f   21 months ago   104MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-submgr         0.9.3             400900150293   21 months ago   152MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-rtmgr          0.9.1             1c521c99935c   21 months ago   149MB
    rancher/mirrored-flannelcni-flannel                       v0.18.1           e237e8506509   21 months ago   62.3MB
    rancher/mirrored-flannelcni-flannel-cni-plugin            v1.1.0            fcecffc7ad4a   22 months ago   8.09MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-dbaas          0.6.1             a7ba67ed75ea   23 months ago   44.3MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-vespamgr       0.7.5             acb296049bab   2 years ago     144MB
    prom/prometheus                                           v2.18.1           de242295e225   3 years ago     140MB
    kong                                                      1.4               e4ded3c6d1ba   3 years ago     129MB
    kong/kubernetes-ingress-controller                        0.7.0             9724592c45f4   4 years ago     69.5MB
    prom/alertmanager                                         v0.20.0           0881eb8f169f   4 years ago     52.1MB
    nexus3.o-ran-sc.org:10002/o-ran-sc/it-dep-init            0.0.1             5ede324f1cda   4 years ago     97.7MB

    After modifying the recipe file by replacing the IP addresses with my VM's IP address, and then running

    ./install -f ../RECIPE_EXAMPLE/example_recipe_oran_f_release.yaml

    I was getting these errors while it was executing:

    namespace/ricplt created
    namespace/ricinfra created
    namespace/ricxapp created
    Deploying RIC infra components [infrastructure dbaas appmgr rtmgr e2mgr e2term a1mediator submgr vespamgr o1mediator alarmmanager ]
    Note that the following optional components are NOT being deployed: {influxdb jaegeradapter}. To deploy them add them with -c to the default component list of the install command
    configmap/ricplt-recipe created
    Add cluster roles
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 7 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: [resource mapping not found for name: "kongconsumers.configuration.konghq.com" namespace: "" from "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "kongcredentials.configuration.konghq.com" namespace: "" from "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "kongplugins.configuration.konghq.com" namespace: "" from "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "kongingresses.configuration.konghq.com" namespace: "" from "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "r4-infrastructure-kong" namespace: "" from "": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "r4-infrastructure-prometheus-alertmanager" namespace: "" from "": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "r4-infrastructure-prometheus-server" namespace: "" from "": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "r4-infrastructure-kong" namespace: "" from "": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "r4-infrastructure-prometheus-alertmanager" namespace: "" from "": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "r4-infrastructure-prometheus-server" namespace: "" from "": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "r4-infrastructure-kong" namespace: "" from "": no matches for kind "Role" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "ricxapp-tiller-base" namespace: "ricxapp" from "": no matches for kind "Role" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "ricxapp-tiller-operation" namespace: "ricinfra" from "": no matches for kind "Role" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "ricxapp-tiller-deployer" namespace: "ricxapp" from "": no matches for kind "Role" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "tiller-secret-creator-jhxilx-secret-create" namespace: "ricinfra" from "": no matches for kind "Role" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "r4-infrastructure-kong" namespace: "ricplt" from "": no matches for kind "RoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "svcacct-tiller-ricxapp-ricxapp-tiller-base" namespace: "ricxapp" from "": no matches for kind "RoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "svcacct-tiller-ricxapp-ricxapp-tiller-operation" namespace: "ricinfra" from "": no matches for kind "RoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "svcacct-tiller-ricxapp-ricxapp-tiller-deployer" namespace: "ricxapp" from "": no matches for kind "RoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "tiller-secret-creator-jhxilx-secret-create" namespace: "ricinfra" from "": no matches for kind "RoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first]
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    NAME: r4-dbaas
    LAST DEPLOYED: Fri Mar 15 10:03:53 2024
    NAMESPACE: ricplt
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: [resource mapping not found for name: "svcacct-ricplt-appmgr-ricxapp-access" namespace: "" from "": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "svcacct-ricplt-appmgr-ricxapp-getappconfig" namespace: "" from "": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "svcacct-ricplt-appmgr-ricxapp-access" namespace: "ricplt" from "": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "svcacct-ricplt-appmgr-ricxapp-getappconfig" namespace: "ricxapp" from "": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
    ensure CRDs are installed first, resource mapping not found for name: "ingress-ricplt-appmgr" namespace: "" from "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
    ensure CRDs are installed first]
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    NAME: r4-rtmgr
    LAST DEPLOYED: Fri Mar 15 10:04:11 2024
    NAMESPACE: ricplt
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: resource mapping not found for name: "ingress-ricplt-e2mgr" namespace: "" from "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
    ensure CRDs are installed first
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    NAME: r4-e2term
    LAST DEPLOYED: Fri Mar 15 10:04:29 2024
    NAMESPACE: ricplt
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: resource mapping not found for name: "ingress-ricplt-a1mediator" namespace: "" from "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
    ensure CRDs are installed first
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    NAME: r4-submgr
    LAST DEPLOYED: Fri Mar 15 10:04:47 2024
    NAMESPACE: ricplt
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    NAME: r4-vespamgr
    LAST DEPLOYED: Fri Mar 15 10:04:56 2024
    NAMESPACE: ricplt
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    NAME: r4-o1mediator
    LAST DEPLOYED: Fri Mar 15 10:05:05 2024
    NAMESPACE: ricplt
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    NAME: r4-alarmmanager
    LAST DEPLOYED: Fri Mar 15 10:05:14 2024
    NAMESPACE: ricplt
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None

    And in the end, when I check on the installed pods, this is what I get:

    oran2@oran2:~/ric-dep/bin$ sudo kubectl get pods -n ricplt
    NAME                                             READY   STATUS    RESTARTS   AGE
    deployment-ricplt-alarmmanager-b79b8b677-9zkbv   0/1     Pending   0          4m22s
    deployment-ricplt-e2term-alpha-dc8c6c7fd-c6dml   0/1     Pending   0          5m7s
    deployment-ricplt-o1mediator-859d697d7-pnqcw     0/1     Pending   0          4m31s
    deployment-ricplt-rtmgr-74bf48ff49-qshmr         0/1     Pending   0          5m26s
    deployment-ricplt-submgr-56974f76f6-wkkh7        0/1     Pending   0          4m49s
    deployment-ricplt-vespamgr-786666549b-mzgpw      0/1     Pending   0          4m40s
    statefulset-ricplt-dbaas-server-0                0/1     Pending   0          5m44s

    I even tried restarting it, but the pods stay the same. Where else do I need to make changes?

  93. I have changed "apiextensions.k8s.io/v1beta1" to v1 for all the rbac.config, apiextensions and extensions files, and I was able to push all the pods into Kubernetes, but there were a lot of errors during the install recipe process.

    Here are the logs for the installation after making the above change manually for all the config files:

    oran2@oran2:~/ric-dep/bin$ sudo ./install -f ../RECIPE_EXAMPLE/example_recipe_oran_f_release.yaml
    namespace/ricplt created
    namespace/ricinfra created
    namespace/ricxapp created
    Deploying RIC infra components [infrastructure dbaas appmgr rtmgr e2mgr e2term a1mediator submgr vespamgr o1mediator alarmmanager ]
    Note that the following optional components are NOT being deployed: {influxdb jaegeradapter}. To deploy them add them with -c to the default component list of the install command
    configmap/ricplt-recipe created
    Add cluster roles
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 7 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    W0316 05:21:32.004392 3196847 warnings.go:70] unknown field "spec.additionalPrinterColumns"
    W0316 05:21:32.004490 3196847 warnings.go:70] unknown field "spec.validation"
    W0316 05:21:32.004511 3196847 warnings.go:70] unknown field "spec.version"
    W0316 05:21:32.005941 3196847 warnings.go:70] unknown field "spec.additionalPrinterColumns"
    W0316 05:21:32.005980 3196847 warnings.go:70] unknown field "spec.validation"
    W0316 05:21:32.005993 3196847 warnings.go:70] unknown field "spec.version"
    W0316 05:21:32.005980 3196847 warnings.go:70] unknown field "spec.validation"
    W0316 05:21:32.006200 3196847 warnings.go:70] unknown field "spec.version"
    W0316 05:21:32.006124 3196847 warnings.go:70] unknown field "spec.additionalPrinterColumns"
    W0316 05:21:32.006333 3196847 warnings.go:70] unknown field "spec.validation"
    W0316 05:21:32.006345 3196847 warnings.go:70] unknown field "spec.version"
    Error: INSTALLATION FAILED: 4 errors occurred:
            * CustomResourceDefinition.apiextensions.k8s.io "kongcredentials.configuration.konghq.com" is invalid: [spec.versions: Invalid value: []apiextensions.CustomResourceDefinitionVersion(nil): must have exactly one version marked as storage version, status.storedVersions: Invalid value: []string(nil): must have at least one stored version]
            * CustomResourceDefinition.apiextensions.k8s.io "kongplugins.configuration.konghq.com" is invalid: [spec.versions: Invalid value: []apiextensions.CustomResourceDefinitionVersion(nil): must have exactly one version marked as storage version, status.storedVersions: Invalid value: []string(nil): must have at least one stored version]
            * CustomResourceDefinition.apiextensions.k8s.io "kongingresses.configuration.konghq.com" is invalid: [spec.versions: Invalid value: []apiextensions.CustomResourceDefinitionVersion(nil): must have exactly one version marked as storage version, status.storedVersions: Invalid value: []string(nil): must have at least one stored version]
            * CustomResourceDefinition.apiextensions.k8s.io "kongconsumers.configuration.konghq.com" is invalid: [spec.versions: Invalid value: []apiextensions.CustomResourceDefinitionVersion(nil): must have exactly one version marked as storage version, status.storedVersions: Invalid value: []string(nil): must have at least one stored version]


    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    NAME: r4-dbaas
    LAST DEPLOYED: Sat Mar 16 05:21:40 2024
    NAMESPACE: ricplt
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    W0316 05:21:50.558054 3197247 warnings.go:70] unknown field "spec.rules[0].http.paths[0].backend.serviceName"
    W0316 05:21:50.558133 3197247 warnings.go:70] unknown field "spec.rules[0].http.paths[0].backend.servicePort"
    Error: INSTALLATION FAILED: 1 error occurred:
            * Ingress.networking.k8s.io "ingress-ricplt-appmgr" is invalid: spec.rules[0].http.paths[0].pathType: Required value: pathType must be specified


    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    NAME: r4-rtmgr
    LAST DEPLOYED: Sat Mar 16 05:21:58 2024
    NAMESPACE: ricplt
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    W0316 05:22:08.669731 3197593 warnings.go:70] unknown field "spec.rules[0].http.paths[0].backend.serviceName"
    W0316 05:22:08.669789 3197593 warnings.go:70] unknown field "spec.rules[0].http.paths[0].backend.servicePort"
    Error: INSTALLATION FAILED: 1 error occurred:
            * Ingress.networking.k8s.io "ingress-ricplt-e2mgr" is invalid: spec.rules[0].http.paths[0].pathType: Required value: pathType must be specified


    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    NAME: r4-e2term
    LAST DEPLOYED: Sat Mar 16 05:22:17 2024
    NAMESPACE: ricplt
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    W0316 05:22:27.047023 3197935 warnings.go:70] unknown field "spec.rules[0].http.paths[0].backend.serviceName"
    W0316 05:22:27.047317 3197935 warnings.go:70] unknown field "spec.rules[0].http.paths[0].backend.servicePort"
    Error: INSTALLATION FAILED: 1 error occurred:
            * Ingress.networking.k8s.io "ingress-ricplt-a1mediator" is invalid: spec.rules[0].http.paths[0].pathType: Required value: pathType must be specified


    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    NAME: r4-submgr
    LAST DEPLOYED: Sat Mar 16 05:22:35 2024
    NAMESPACE: ricplt
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    NAME: r4-vespamgr
    LAST DEPLOYED: Sat Mar 16 05:22:44 2024
    NAMESPACE: ricplt
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    NAME: r4-o1mediator
    LAST DEPLOYED: Sat Mar 16 05:22:53 2024
    NAMESPACE: ricplt
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈
    Saving 1 charts
    Downloading ric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    NAME: r4-alarmmanager
    LAST DEPLOYED: Sat Mar 16 05:23:02 2024
    NAMESPACE: ricplt
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None

    And when I checked my pods, all 13 pods were there, but all of them were Pending:

    oran2@oran2:~/ric-dep/bin$ sudo kubectl get pods -n ricplt
    NAME                                                         READY   STATUS    RESTARTS   AGE
    deployment-ricplt-a1mediator-55cb5df77d-nr4dp                0/1     Pending   0          56s
    deployment-ricplt-alarmmanager-b79b8b677-rgrgg               0/1     Pending   0          20s
    deployment-ricplt-appmgr-795f4457f7-67lqh                    0/1     Pending   0          93s
    deployment-ricplt-e2mgr-85dcd5cc46-d7djj                     0/1     Pending   0          75s
    deployment-ricplt-e2term-alpha-dc8c6c7fd-glqdj               0/1     Pending   0          65s
    deployment-ricplt-o1mediator-859d697d7-scp48                 0/1     Pending   0          30s
    deployment-ricplt-rtmgr-74bf48ff49-f4hsv                     0/1     Pending   0          84s
    deployment-ricplt-submgr-56974f76f6-pdmhr                    0/1     Pending   0          48s
    deployment-ricplt-vespamgr-786666549b-kk6md                  0/1     Pending   0          39s
    r4-infrastructure-kong-7d766b49bb-t5k58                      0/2     Pending   0          111s
    r4-infrastructure-prometheus-alertmanager-64f9876d6d-4hsb5   0/2     Pending   0          111s
    r4-infrastructure-prometheus-server-bcc8cc897-hntwt          0/1     Pending   0          111s
    statefulset-ricplt-dbaas-server-0                            0/1     Pending   0          102s

    What else can be done to get rid of the errors?

    NOTE:

    -> I am using cri-docker to pull the images.

    -> I have tried restarting (rebooting) the VM; it is still the same.
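
    For what it's worth, the remaining "pathType must be specified" / "unknown field serviceName" errors above usually mean the Ingress templates in the charts still follow the removed networking.k8s.io/v1beta1 layout. A rough sketch of the equivalent networking.k8s.io/v1 form (the path, service name and port below are placeholders, not values taken from the actual charts):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ingress-ricplt-appmgr
    spec:
      rules:
        - http:
            paths:
              - path: /appmgr                          # placeholder path
                pathType: Prefix
                backend:
                  service:
                    name: service-ricplt-appmgr-http   # placeholder service name
                    port:
                      number: 8080                     # placeholder port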

  94. Greetings Thoralf Czichy SUNIL SINGH and OSC community!

    Previously i have deployed the OSC F-release and successfully integrated the Near-RT RIC and E2 simulator.

    Unfortunately, when I replicate the same, it gets stuck while installing both k8s and helm.

    I understand that Kubernetes 1.16.0 is no longer supported by the apt repository, so I changed to 1.18.0, but I was unable to deploy it successfully.

    Please kindly suggest any ideas you may have.

    BR,

    Venkatesh G

  95. Hello Venkatesh, we have installed k8s 1.28 and on top of it we can install the RIC.


    Please find the ticket below for your reference:


    [RIC-1052] Not able to install k8s and helm with Existing Ric installation shell script in document - ORAN Jira (o-ran-sc.org)

    1. Thank you Anusha Nalluri for your timely help; it is working after some small changes.

      We are also incorporating a few more instructions into the shared document based on the issues we encountered during our deployment.

      BR

      Venkatesh G

  96. No problem, please share the issues you encountered during your deployment.

    1. Hi, I'm facing a problem following your instructions. I get the following error:

      sudo kubectl get pods -n kube-system
      E0328 16:05:16.572868   23283 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
      E0328 16:05:16.573067   23283 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
      E0328 16:05:16.574274   23283 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
      E0328 16:05:16.575594   23283 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
      E0328 16:05:16.576829   23283 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
      The connection to the server localhost:8080 was refused - did you specify the right host or port?


      The Kubernetes API server seems to not be running, and I guess it has something to do with the kubeconfig environment variable. If someone has experienced this issue, could you suggest some solutions? Thanks!

  97. I hope you have run the commands below:


    $ mkdir -p $HOME/.kube

    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config

    1. Yes, I did them. 

  98. Did you export it?

    export KUBECONFIG=/etc/kubernetes/admin.conf
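
    To make this persist across new shells, one option is to append it to your shell profile:

    echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc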

    1. Thanks, it worked for me. But while installing the RIC components from the yaml file for the F release I face some errors that make my pods go into Pending state:

      Downloading ric-common from repo http://127.0.0.1:8879/charts
      Deleting outdated charts
      W0402 11:12:49.778225   25732 warnings.go:70] unknown field "spec.rules[0].http.paths[0].backend.serviceName"
      W0402 11:12:49.778245   25732 warnings.go:70] unknown field "spec.rules[0].http.paths[0].backend.servicePort"
      Error: INSTALLATION FAILED: 1 error occurred:
          * Ingress.networking.k8s.io "ingress-ricplt-appmgr" is invalid: spec.rules[0].http.paths[0].pathType: Required value: pathType must be specified

      W0402 11:13:06.257110   25880 warnings.go:70] unknown field "spec.rules[0].http.paths[0].backend.serviceName"
      W0402 11:13:06.257126   25880 warnings.go:70] unknown field "spec.rules[0].http.paths[0].backend.servicePort"
      Error: INSTALLATION FAILED: 1 error occurred:
          * Ingress.networking.k8s.io "ingress-ricplt-e2mgr" is invalid: spec.rules[0].http.paths[0].pathType: Required value: pathType must be specified

      W0402 11:13:06.257110   25880 warnings.go:70] unknown field "spec.rules[0].http.paths[0].backend.serviceName"
      W0402 11:13:06.257126   25880 warnings.go:70] unknown field "spec.rules[0].http.paths[0].backend.servicePort"
      Error: INSTALLATION FAILED: 1 error occurred:
          * Ingress.networking.k8s.io "ingress-ricplt-e2mgr" is invalid: spec.rules[0].http.paths[0].pathType: Required value: pathType must be specified