VM Minimum Requirements for RIC 22

NOTE: sudo access is required for installation


Step 1: Obtaining the Deployment Scripts and Charts

Run ...

$ sudo -i

$ git clone http://gerrit.o-ran-sc.org/r/it/dep -b bronze

$ cd dep
$ git submodule update --init --recursive --remote
Step 2: Generation of cloud-init Script 

Run ...

$ cd tools/k8s/bin
$ ./gen-cloud-init.sh   # generates the stack install script for what the RIC needs

Note: The generated script will be used to prepare the Kubernetes cluster for RIC deployment (k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh)
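
To confirm the script was generated (the version tags in the filename may differ from the example above), list the directory:

$ ls k8s-1node-cloud-init-*.sh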

Step 3: Installation of Kubernetes, Helm, Docker, etc.

Run ...

$ ./k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh

NOTE: Be patient, as this takes some time to complete. Upon completion of the script, the VM will be rebooted.  You will then need to log in to the VM and become root once again.

$ sudo -i

$ kubectl get pods --all-namespaces  # There should be 9 pods running in the kube-system namespace.
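
For reference, the nine kube-system pods on a single-node install look roughly like the following (pod-name suffixes, hostnames, and ages are illustrative; the set of pods mirrors a sample deployment shown in the comments below):

$ kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-5644d7b6d9-xxxxx               1/1     Running   0          15m
coredns-5644d7b6d9-yyyyy               1/1     Running   0          15m
etcd-<hostname>                        1/1     Running   0          15m
kube-apiserver-<hostname>              1/1     Running   0          15m
kube-controller-manager-<hostname>     1/1     Running   0          15m
kube-flannel-ds-amd64-xxxxx            1/1     Running   0          15m
kube-proxy-xxxxx                       1/1     Running   0          15m
kube-scheduler-<hostname>              1/1     Running   0          15m
tiller-deploy-68bf6dff8f-xxxxx         1/1     Running   0          15m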

Step 4:  Deploy RIC using Recipe

Run ...

$ cd dep/bin
$ ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml
$ kubectl get pods -n ricplt   # There should be ~16 pods running in the ricplt namespace.  
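
For reference, a healthy ricplt namespace contains roughly the following workloads, one pod each (taken from a sample deployment in the comments below; the exact count can vary slightly between releases):

deployment-ricplt-a1mediator
deployment-ricplt-alarmadapter
deployment-ricplt-appmgr
deployment-ricplt-e2mgr
deployment-ricplt-e2term-alpha
deployment-ricplt-jaegeradapter
deployment-ricplt-o1mediator
deployment-ricplt-rtmgr
deployment-ricplt-submgr
deployment-ricplt-vespamgr
deployment-ricplt-xapp-onboarder
r4-infrastructure-kong
r4-infrastructure-prometheus-alertmanager
r4-infrastructure-prometheus-server
statefulset-ricplt-dbaas-server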


Step 5:  Onboarding a Test xAPP (HelloWorld xApp)

NOTE: If using an Ubuntu version older than 18.04, this section will fail!
Run...

$ cd dep

# Create the file that will contain the URL used to start the on-boarding process...
$ echo '{ "config-file.json_url": "https://gerrit.o-ran-sc.org/r/gitweb?p=ric-app/hw.git;a=blob_plain;f=init/config-file.json;hb=HEAD" }' > onboard.hw.url

# Start on-boarding process...

$ curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download"  --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"


# Verify list of all on-boarded xApps...
$ curl --location --request GET "http://$(hostname):32080/onboard/api/v1/charts"
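
NOTE: If the onboarding POST above hangs or returns a connection error, port 32080 may already be occupied by kube-proxy on some installations (see the comments below). A common workaround is to port-forward the Kong proxy pod and send the requests to the forwarded local port instead. The pod name below is illustrative; use the name reported by your cluster:

$ kubectl get pods -n ricplt | grep kong   # find the Kong pod name
$ kubectl port-forward -n ricplt r4-infrastructure-kong-xxxxxxxxxx-xxxxx 32088:32080   # keep this running in a separate terminal
$ curl --location --request POST "http://localhost:32088/onboard/api/v1/onboard/download"  --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"
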
Step 6:  Deploy Test xApp (HelloWorld xApp)

Run ...

#  Verify the xApp is not yet running...  (this may take a minute, so re-run the command below as needed)

$ kubectl get pods -n ricxapp


# Call xApp Manager to deploy HelloWorld xApp...

$ curl --location --request POST "http://$(hostname):32080/appmgr/ric/v1/xapps"  --header 'Content-Type: application/json'  --data-raw '{"xappName": "hwxapp"}'


#  Verify xApp is running...

$ kubectl get pods -n ricxapp


#  View logs...

$ kubectl logs -n ricxapp <name of POD retrieved from statement above>
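
For example (the pod name below is illustrative and follows the pattern shown in the comments below; use the name returned by the previous command):

$ kubectl get pods -n ricxapp | grep hwxapp
$ kubectl logs -n ricxapp ricxapp-hwxapp-684d8d675b-xz99d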


Helpful Hints

Kubectl commands:

kubectl get pods -n <namespace> - gets a list of Pods running in that namespace

kubectl logs -n <namespace> <name_of_running_pod> - shows the logs of a running Pod
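
A few other standard kubectl commands that are often useful when troubleshooting (generic Kubernetes usage, not specific to this guide):

kubectl describe pod -n <namespace> <pod_name> - shows events and reasons for pods stuck in ImagePullBackOff or ContainerCreating

kubectl get services -n <namespace> - lists services and their ports

kubectl logs -n <namespace> <pod_name> --previous - shows logs from the previous container instance after a restart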







32 Comments

  1. Thanks for the guidelines. It works well.

  2. Hi experts,

    After deployment, I have only 15 pods in the ricplt namespace. Is this normal?

    The "Step 4" says: "kubectl get pods -n ricplt   # There should be ~16 pods running in the ricplt namespace.".

    (18:11 dabs@ricpltbronze bin) > sudo kubectl get pods --all-namespaces
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5644d7b6d9-clvtf 1/1 Running 5 32h
    kube-system coredns-5644d7b6d9-jwxnm 1/1 Running 5 32h
    kube-system etcd-ricpltbronze 1/1 Running 11 32h
    kube-system kube-apiserver-ricpltbronze 1/1 Running 28 32h
    kube-system kube-controller-manager-ricpltbronze 1/1 Running 9 32h
    kube-system kube-flannel-ds-amd64-mrwn2 1/1 Running 16 32h
    kube-system kube-proxy-zrtl8 1/1 Running 6 32h
    kube-system kube-scheduler-ricpltbronze 1/1 Running 8 32h
    kube-system tiller-deploy-68bf6dff8f-wbmwl 1/1 Running 4 32h
    ricinfra deployment-tiller-ricxapp-d4f98ff65-6h4n4 1/1 Running 0 3h13m
    ricinfra tiller-secret-generator-tgkzf 0/1 Completed 0 132m
    ricinfra tiller-secret-generator-zcx72 0/1 Error 0 3h13m
    ricplt deployment-ricplt-a1mediator-66fcf76c66-h6rp2 1/1 Running 0 40m
    ricplt deployment-ricplt-alarmadapter-64d559f769-glb5z 1/1 Running 0 30m
    ricplt deployment-ricplt-appmgr-6fd6664755-2mxjb 1/1 Running 0 42m
    ricplt deployment-ricplt-e2mgr-8479fb5ff8-9zqbp 1/1 Running 0 41m
    ricplt deployment-ricplt-e2term-alpha-bcb457df4-4dz62 1/1 Running 0 40m
    ricplt deployment-ricplt-jaegeradapter-84558d855b-tmqqb 1/1 Running 0 39m
    ricplt deployment-ricplt-o1mediator-d8b9fcdf-f4sgm 1/1 Running 0 34m
    ricplt deployment-ricplt-rtmgr-9d4847788-kf6r4 1/1 Running 10 41m
    ricplt deployment-ricplt-submgr-65dc9f4995-gt5kb 1/1 Running 0 40m
    ricplt deployment-ricplt-vespamgr-7458d9b5d-klh9l 1/1 Running 0 39m
    ricplt deployment-ricplt-xapp-onboarder-546b86b5c4-xkcpt 2/2 Running 0 42m
    ricplt r4-infrastructure-kong-6c7f6db759-7xjqm 2/2 Running 21 3h13m
    ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-jfkdg 2/2 Running 2 3h13m
    ricplt r4-infrastructure-prometheus-server-5fd7695-pprg2 1/1 Running 2 3h13m
    ricplt statefulset-ricplt-dbaas-server-0 1/1 Running 0 43m

    1. Anonymous

      After deployment I'm also getting only 15 pods running. Is it normal or do I need to worry about that?

      1. it's normal, and you are good to go. just try policy management.

  3. Hi,

    Even though deployment of RIC plt did not give any errors, most of the RIC plt and infra pods remain in error state. 

    kubectl get pods --all-namespaces
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5644d7b6d9-dz774 1/1 Running 1 10h
    kube-system coredns-5644d7b6d9-fp586 1/1 Running 1 10h
    kube-system etcd-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system kube-apiserver-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system kube-controller-manager-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system kube-flannel-ds-amd64-b4l97 1/1 Running 1 10h
    kube-system kube-proxy-fxfrk 1/1 Running 1 10h
    kube-system kube-scheduler-ggnlabvm-bng35 1/1 Running 1 10h
    kube-system tiller-deploy-68bf6dff8f-jvtk7 1/1 Running 1 10h
    ricinfra deployment-tiller-ricxapp-d4f98ff65-hcnqf 0/1 ContainerCreating 0 9h
    ricinfra tiller-secret-generator-d7kmk 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-a1mediator-66fcf76c66-l2z7m 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-alarmadapter-64d559f769-7d8fq 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-appmgr-6fd6664755-7lp8q 0/1 Init:ImagePullBackOff 0 9h
    ricplt deployment-ricplt-e2mgr-8479fb5ff8-ggnx8 0/1 ErrImagePull 0 9h
    ricplt deployment-ricplt-e2term-alpha-bcb457df4-dkdbc 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-jaegeradapter-84558d855b-bpzcv 1/1 Running 0 9h
    ricplt deployment-ricplt-o1mediator-d8b9fcdf-5ptcs 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-rtmgr-9d4847788-rvnrx 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-submgr-65dc9f4995-cbhvc 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-vespamgr-7458d9b5d-bkzpg 0/1 ImagePullBackOff 0 9h
    ricplt deployment-ricplt-xapp-onboarder-546b86b5c4-g2dnt 1/2 ImagePullBackOff 0 9h
    ricplt r4-infrastructure-kong-6c7f6db759-4czbj 2/2 Running 2 9h
    ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-gwxzz 2/2 Running 0 9h
    ricplt r4-infrastructure-prometheus-server-5fd7695-phcqs 1/1 Running 0 9h
    ricplt statefulset-ricplt-dbaas-server-0 0/1 ImagePullBackOff 0 9h


    any clue?

    1. Anonymous

      The ImagePullBackOff and ErrImagePull errors are all for container images built from O-RAN SC code.  It appears that there is a problem with the docker engine in your installation fetching images from the O-RAN SC docker registry.  Oftentimes this is due to a local firewall blocking such connections.

      You may want to try:  docker pull nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1:2.1.9 to see if your docker engine can retrieve this image. 

      1. Thanks for the information.

        I think you are right; even docker pull nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1:2.1.9 gives a connection refused error.

        Although I am checking about disabling the corporate firewall, the strange thing is that docker pull hello-world and other such pulls work fine.

        1. A "Connection refused" error does suggest a network connection problem.

          These O-RAN SC docker registries use ports 10001 ~ 10004, in particular 10002 for all released docker images.  They are not on the default docker registry port of 5000.  It is possible that your local firewall has a rule allowing outgoing connections to port 5000, but not to these ports.
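
          For example, a quick way to check from the VM whether the registry port itself is reachable (standard tools; the port can be substituted with any of those above):

          $ curl -v https://nexus3.o-ran-sc.org:10002/v2/
          $ nc -zv nexus3.o-ran-sc.org 10002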

          1. Anonymous

            Thanks for the explanation. You are right, these ports were blocked by the firewall; it is working fine now.
            Currently I am stuck with opening the O1 dashboard, trying it on the SMO machine.


  4. Anonymous

    Hi all,

    After executing the below POST command, I am not getting any response.

    # Start on-boarding process...

    curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download"  --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"


    any clues/suggestions?

    1. It's probably because port 32080 is already occupied by kube-proxy.

      You can either try using the direct IP address, or use port-forwarding as a workaround. Please refer to my post: https://blog.csdn.net/jeffyko/article/details/107426626

      1. Anonymous

        Thanks a lot, it resolved the issue and we are able to proceed further,

        After executing the below command we are seeing the below logs. Is this the correct behavior?? In case this is the error, how to resolve this??

        $ kubectl logs -n ricxapp <name of POD retrieved from statement above>

        Error from server (BadRequest): container "hwxapp" in pod "ricxapp-hwxapp-684d8d675b-xz99d" is waiting to start: ContainerCreating

        1. Make sure the hwxapp Pod is running. It needs to pull the 'hwxapp' docker image, which may take some time.

          kubectl get pods -n ricxapp | grep hwxapp

      2. Anonymous

        Hi Zhengwei, I have tried the solution in your blog, but I still receive the same error: AttributeError:  'ConnectionError' object has no attribute 'message'. Do you have any clues? Thanks.

      3. EDIT: Never mind, silly me.  For anyone wondering, the port-forwarding command needs to keep running for the forwarding to stay active.  So I just run it in a screen tab to keep it running there, and run the curl commands in a different tab.

        Hi, I've tried this workaround but the port forwarding command just hangs and never completes.  Anyone experiencing the same issue? Kube cluster and pods seem healthy:

        root@ubuntu1804:~# kubectl -v=4 port-forward r4-infrastructure-kong-6c7f6db759-kkjtt 32088:32080 -n ricplt

        Forwarding from 127.0.0.1:32088 -> 32080

        (hangs after the previous message)



  5. Anonymous

    In my case, the script only works with Helm 2.x. How about others?

  6. THANK You! This is great!

  7. Anonymous

    Hi all,

    I used the source code to build the hw_unit_test container, and when I execute hw_unit_test it stops and returns "SHAREDDATALAYER_ABORT, file src/redis/asynchirediscommanddispatcher.cpp, line 206: Required Redis module extension commands not available.
    Aborted (core dumped)".

    Thus, I can't test my hwxapp.

    any clues/suggestions? Thanks!

      

  8. Anonymous

    Hi all,

    There is a pod crash every time. When I run the following command, I get one crashed pod. Has anyone met the same case? Thanks so much for any suggestions.

    root@chuanhao-HP-EliteDesk-800-G4-SFF:~/dep/bin# kubectl get pods -n ricplt

    NAME READY STATUS RESTARTS AGE

    deployment-ricplt-a1mediator-66fcf76c66-f6kbh 1/1 Running 1 2m16s

    deployment-ricplt-alarmadapter-64d559f769-twfk7 1/1 Running 0 46s

    deployment-ricplt-appmgr-6fd6664755-7rs4g 1/1 Running 0 3m49s

    deployment-ricplt-e2mgr-8479fb5ff8-j9nzf 1/1 Running 0 3m

    deployment-ricplt-e2term-alpha-bcb457df4-r22nb 1/1 Running 0 2m39s

    deployment-ricplt-jaegeradapter-84558d855b-xfgd5 1/1 Running 0 78s

    deployment-ricplt-o1mediator-d8b9fcdf-tpz7v 1/1 Running 0 64s

    deployment-ricplt-rtmgr-9d4847788-scrxf 1/1 Running 1 3m26s

    deployment-ricplt-submgr-65dc9f4995-knzjd 1/1 Running 0 113s

    deployment-ricplt-vespamgr-7458d9b5d-mdmjx 1/1 Running 0 96s

    deployment-ricplt-xapp-onboarder-546b86b5c4-z2qd6 2/2 Running 0 4m16s

    r4-infrastructure-kong-6c7f6db759-44wdx 1/2 CrashLoopBackOff 6 4m52s

    r4-infrastructure-prometheus-alertmanager-75dff54776-qlp4g 2/2 Running 0 4m52s

    r4-infrastructure-prometheus-server-5fd7695-lr6z7 1/1 Running 0 4m52s

    statefulset-ricplt-dbaas-server-0 1/1 Running 0 4m33s


  9. Hi All,

    I am trying to deploy the RIC, and at Step 4, while running the command "./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml", I am getting the below error:

    root@ubuntu:~/dep/bin# ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml
    Error: unknown command "home" for "helm"
    Run 'helm --help' for usage.
    Successfully packaged chart and saved it to: /tmp/ric-common-3.3.2.tgz
    cp: cannot create regular file '/repository/local/': No such file or directory
    Successfully packaged chart and saved it to: /tmp/aux-common-3.0.0.tgz
    cp: cannot create regular file '/repository/local/': No such file or directory
    Successfully packaged chart and saved it to: /tmp/nonrtric-common-2.0.0.tgz
    cp: cannot create regular file '/repository/local/': No such file or directory
    Error: open /repository/local/index.yaml822716896: no such file or directory
    Error: no repositories configured
    Error: looks like "http://127.0.0.1:8879/charts" is not a valid chart repository or cannot be reached: Get "http://127.0.0.1:8879/charts/index.yaml": dial tcp 127.0.0.1:8879: connect: connection refused.

    I am unable to resolve this issue. Could anyone please help to resolve this?

    Thanks,

    1. I had a different error, but maybe this helps: I updated the Helm version to 2.17.0 in the example_recipe.yaml file.

      I had this error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached:

    2. Try this, it might help:

      helm init --client-only --skip-refresh
      helm repo rm stable
      helm repo add stable https://charts.helm.sh/
      1. Anonymous

        Problem still exists

  10. Hey all,

    I am trying to deploy the RIC in its Cherry release version.

    But I am facing some issues:

    a)

    The r4-alarmadapter isn't getting released

    End of deploy-ric-platform script
    Error: release r4-alarmadapter failed: configmaps "configmap-ricplt-alarmadapter-appconfig" already exists
    root@max-near-rt-ric-cherry:/home/ubuntu/dep/bin# helm ls --all r4-alarmadapter
    + helm ls --all r4-alarmadapter
    NAME   REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
    r4-alarmadapter  1 Mon Jan 11 09:14:08 2021 FAILED alarmadapter-3.0.0 1.0 ricplt

    b)

    The pods aren't getting to their Running state. This might be connected to issue a) but I am not sure.

    kubectl get pods --all-namespaces:

    NAMESPACE     NAME                                                         READY   STATUS              RESTARTS   AGE
    kube-system   coredns-5644d7b6d9-794qq                                     1/1     Running             1          29m
    kube-system   coredns-5644d7b6d9-ph6tt                                     1/1     Running             1          29m
    kube-system   etcd-max-near-rt-ric-cherry                                  1/1     Running             1          28m
    kube-system   kube-apiserver-max-near-rt-ric-cherry                        1/1     Running             1          28m
    kube-system   kube-controller-manager-max-near-rt-ric-cherry               1/1     Running             1          28m
    kube-system   kube-flannel-ds-ljz7w                                        1/1     Running             1          29m
    kube-system   kube-proxy-cdkf4                                             1/1     Running             1          29m
    kube-system   kube-scheduler-max-near-rt-ric-cherry                        1/1     Running             1          28m
    kube-system   tiller-deploy-68bf6dff8f-xfkwd                               1/1     Running             1          27m
    ricinfra      deployment-tiller-ricxapp-d4f98ff65-wwbhx                    0/1     ContainerCreating   0          25m
    ricinfra      tiller-secret-generator-2nsb2                                0/1     ContainerCreating   0          25m
    ricplt        deployment-ricplt-a1mediator-66fcf76c66-njphv                0/1     ContainerCreating   0          23m
    ricplt        deployment-ricplt-appmgr-6fd6664755-r577d                    0/1     Init:0/1            0          24m
    ricplt        deployment-ricplt-e2mgr-6dfb6c4988-tb26k                     0/1     ContainerCreating   0          24m
    ricplt        deployment-ricplt-e2term-alpha-64965c46c6-5d59x              0/1     ContainerCreating   0          23m
    ricplt        deployment-ricplt-jaegeradapter-76ddbf9c9-fw4sh              0/1     ContainerCreating   0          22m
    ricplt        deployment-ricplt-o1mediator-d8b9fcdf-86qgg                  0/1     ContainerCreating   0          22m
    ricplt        deployment-ricplt-rtmgr-6d559897d8-jvbsb                     0/1     ContainerCreating   0          24m
    ricplt        deployment-ricplt-submgr-65bcd95469-nc8pq                    0/1     ContainerCreating   0          23m
    ricplt        deployment-ricplt-vespamgr-7458d9b5d-xlt8m                   0/1     ContainerCreating   0          22m
    ricplt        deployment-ricplt-xapp-onboarder-5958856fc8-kw9jx            0/2     ContainerCreating   0          25m
    ricplt        r4-infrastructure-kong-6c7f6db759-q5psw                      0/2     ContainerCreating   0          25m
    ricplt        r4-infrastructure-prometheus-alertmanager-75dff54776-mb8hn   0/2     ContainerCreating   0          25m
    ricplt        r4-infrastructure-prometheus-server-5fd7695-bvk74            1/1     Running             0          25m
    ricplt        statefulset-ricplt-dbaas-server-0                            0/1     ContainerCreating   0          25m

    Thanks in advance!
  11. Hi All,

    I am trying to install rmr_nng library as a requirement of ric-app-kpimon  xapp.

    https://github.com/o-ran-sc/ric-plt-lib-rmr

    https://github.com/nanomsg/nng

    but getting below error(snippet):

    -- Installing: /root/ric-plt-lib-rmr/.build/lib/cmake/nng/nng-config-version.cmake
    -- Installing: /root/ric-plt-lib-rmr/.build/bin/nngcat
    -- Set runtime path of "/root/ric-plt-lib-rmr/.build/bin/nngcat" to "/root/ric-plt-lib-rmr/.build/lib"
    [ 40%] No test step for 'ext_nng'
    [ 41%] Completed 'ext_nng'
    [ 41%] Built target ext_nng
    Scanning dependencies of target nng_objects
    [ 43%] Building C object src/rmr/nng/CMakeFiles/nng_objects.dir/src/rmr_nng.c.o
    In file included from /root/ric-plt-lib-rmr/src/rmr/nng/src/rmr_nng.c:70:0:
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c: In function ‘roll_tables’:
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c:406:28: error: ‘uta_ctx_t {aka struct uta_ctx}’ has no member named ‘rtgate’; did you mean ‘rtable’?
    pthread_mutex_lock( ctx->rtgate ); // must hold lock to move to active
    ^~~~~~
    rtable
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c:409:30: error: ‘uta_ctx_t {aka struct uta_ctx}’ has no member named ‘rtgate’; did you mean ‘rtable’?
    pthread_mutex_unlock( ctx->rtgate );
    ^~~~~~
    rtable
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c: In function ‘parse_rt_rec’:
    /root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c:858:12: error: ‘uta_ctx_t {aka struct uta_ctx}’ has no member named ‘rtable_ready’; did you mean ‘rmr_ready’?
    ctx->rtable_ready = 1; // route based sends can now happen
    ^~~~~~~~~~~~
    rmr_ready

    ...

    ...

    I am unable to resolve this issue. Could anyone please help to resolve this?

    Thanks,


  12. Hi All, 

    I am trying to get this up and running (first time ever).  

    For anyone interested, just wanted to highlight that if Step 4 fails with the following:

    root@ubuntu1804:~/dep/bin# ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml
    Successfully packaged chart and saved it to: /tmp/ric-common-3.3.2.tgz
    Successfully packaged chart and saved it to: /tmp/aux-common-3.0.0.tgz
    Successfully packaged chart and saved it to: /tmp/nonrtric-common-2.0.0.tgz
    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
    ****************************************************************************************************************
                                                         ERROR                                                      
    ****************************************************************************************************************
    Can't locate the ric-common helm package in the local repo. Please make sure that it is properly installed.
    ****************************************************************************************************************

    You just need to initialize the Helm repositories with the following (disregard the suggestion in the above error output, as it has been deprecated since Nov 2020):

    helm init --stable-repo-url=https://charts.helm.sh/stable --client-only 

    and then run Step 4.



  13. Anonymous

    Hi,

    We are trying to compile the pods on Arm and test them on an Arm-based platform. We are failing for xapp-onboarder, ric-plt-e, ric-plt-alarmadapter. Has anyone tried running the O-RAN RIC on Arm? Could someone point us to the recipe to build the images from source?

  14. Hi,

    I'm in the process of porting the near-realtime RIC to ARM. The process has been straightforward up until now. Does anyone have instructions on how all the individual components are built, or where they can be sourced outside of the prebuilt images? I have been able to build many of the items needed to complete the build, but am still having issues.

    images built:
    ric-plt-rtmgr:0.6.3
    it-dep-init:0.0.1
    ric-plt-rtmgr:0.6.3
    ric-plt-appmgr:0.4.3
    bldr-alpine3-go:2.0.0
    bldr-ubuntu18-c-go:1.9.0
    bldr-ubuntu18-c-go:9-u18.04
    ric-plt-a1:2.1.9
    ric-plt-dbaas:0.2.2
    ric-plt-e2mgr:5.0.8
    ric-plt-submgr:0.4.3
    ric-plt-vespamgr:0.4.0
    ric-plt-o1:0.4.4
    bldr-alpine3:12-a3.11
    bldr-alpine3-mdclog:0.0.4
    bldr-alpine3-rmr:4.0.5
    
    
    images needed:
    xapp-onboarder:1.0.7
    ric-plt-e2:5.0.8
    ric-plt-alarmadapter:0.4.5
    
    

    Please keep in mind I'm not sure if I built these all correctly, since I have not found any instructions. The latest output of my effort can be seen here:

    NAME                                                         READY   STATUS                       RESTARTS   AGE
    deployment-ricplt-a1mediator-cb47dc85d-7cr9b                 0/1     CreateContainerConfigError   0          76s
    deployment-ricplt-e2term-alpha-8665fc56f6-m94ql              0/1     ImagePullBackOff             0          92s
    deployment-ricplt-jaegeradapter-76ddbf9c9-hg5pr              0/1     Error                        2          26s
    deployment-ricplt-o1mediator-86587dd94f-gbtvj                0/1     CreateContainerConfigError   0          9s
    deployment-ricplt-rtmgr-67c9bdccf6-g4nd6                     0/1     CreateContainerConfigError   0          63m
    deployment-ricplt-submgr-6ffd499fd5-6txqb                    0/1     CreateContainerConfigError   0          59s
    deployment-ricplt-vespamgr-68b68b78db-5m579                  1/1     Running                      0          43s
    deployment-ricplt-xapp-onboarder-579967799d-bp74x            0/2     ImagePullBackOff             17         63m
    r4-infrastructure-kong-6c7f6db759-4vlms                      0/2     CrashLoopBackOff             17         64m
    r4-infrastructure-prometheus-alertmanager-75dff54776-cq7fl   2/2     Running                      0          64m
    r4-infrastructure-prometheus-server-5fd7695-5jrwf            1/1     Running                      0          64m

    Any help would be great. My end goal is to push these changes upstream and to provide CI/CD for ARM images. 

    1. NAMESPACE     NAME                                                         READY   STATUS      RESTARTS   AGE
      kube-system   coredns-5644d7b6d9-6nwr8                                     1/1     Running     14         20d
      kube-system   coredns-5644d7b6d9-8lc5c                                     1/1     Running     14         20d
      kube-system   etcd-ip-10-0-0-199                                           1/1     Running     14         20d
      kube-system   kube-apiserver-ip-10-0-0-199                                 1/1     Running     14         20d
      kube-system   kube-controller-manager-ip-10-0-0-199                        1/1     Running     14         20d
      kube-system   kube-flannel-ds-4w4jh                                        1/1     Running     14         20d
      kube-system   kube-proxy-955gz                                             1/1     Running     14         20d
      kube-system   kube-scheduler-ip-10-0-0-199                                 1/1     Running     14         20d
      kube-system   tiller-deploy-7d5568dd96-ttbg5                               1/1     Running     1          20h
      ricinfra      deployment-tiller-ricxapp-6b6b4c787-w2nnw                    1/1     Running     0          2m40s
      ricinfra      tiller-secret-generator-c7z86                                0/1     Completed   0          2m40s
      ricplt        deployment-ricplt-e2term-alpha-8665fc56f6-q8cgq              0/1     Running     1          72s
      ricplt        deployment-ricplt-jaegeradapter-85cbdfbfbc-xlxt2             1/1     Running     0          12s
      ricplt        deployment-ricplt-rtmgr-67c9bdccf6-c9fck                     1/1     Running     0          101s
      ricplt        deployment-ricplt-submgr-6ffd499fd5-pmx4m                    1/1     Running     0          43s
      ricplt        deployment-ricplt-vespamgr-68b68b78db-4skbw                  1/1     Running     0          28s
      ricplt        deployment-ricplt-xapp-onboarder-57f78cfdf-xrtss             2/2     Running     0          2m10s
      ricplt        r4-infrastructure-kong-84cd44455-6pgxh                       2/2     Running     1          2m40s
      ricplt        r4-infrastructure-prometheus-alertmanager-75dff54776-fl4xh   2/2     Running     0          2m40s
      ricplt        r4-infrastructure-prometheus-server-5fd7695-jv8s4            1/1     Running     0          2m40s

      I have been able to get all services up except for e2term-alpha; I am getting an error:

      environments service name is SERVICE_RICPLT_E2TERM_RMR_ALPHA_SERVICE_HOST
      service ip is 10.X.X.95
      nano=38000
      loglevel=error
      volume=log
      #The key name of the environment holds the local ip address
      #ip address of the E2T in the RMR
      local-ip=10.X.X.95
      #prometheus mode can be pull or push
      prometheusMode=pull
      #timeout can be from 5 seconds to 300 seconds default is 10
      prometheusPushTimeOut=10
      prometheusPushAddr=127.0.0.1:7676
      prometheusPort=8088
      #trace is start, stop
      trace=stop
      external-fqdn=10.X.X.95
      #put pointer to the key that point to pod name
      pod_name=E2TERM_POD_NAME
      sctp-port=36422
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = nano=38000 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = nano value = 38000"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = loglevel=error "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = loglevel value = error"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = volume=log "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = volume value = log"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = local-ip=10.107.0.95 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = local-ip value = 10.107.0.95"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusMode=pull "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusMode value = pull"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPushTimeOut=10 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPushTimeOut value = 10"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPushAddr=127.0.0.1:7676 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPushAddr value = 127.0.0.1:7676"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = prometheusPort=8088 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = prometheusPort value = 8088"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = trace=stop "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = trace value = stop"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = external-fqdn=10.107.0.95 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = external-fqdn value = 10.107.0.95"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = pod_name=E2TERM_POD_NAME "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = pod_name value = E2TERM_POD_NAME"}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"line = sctp-port=36422 "}
      {"ts":1613156066064,"crit":"INFO","id":"E2Terminator","mdc":{},"msg":"entry = sctp-port value = 36422"}
      1613156066 22/RMR [INFO] ric message routing library on SI95/g mv=3 flg=00 (84423e6 4.4.6 built: Feb 11 2021)
      e2: malloc.c:2401: sysmalloc: Assertion `(old_top == initial_top (av) && old_size == 0) || ((unsigned long) (old_size) >= MINSIZE && prev_inuse (old_top) && ((unsigned long) old_end & (pagesize - 1)) == 0)' failed.


      I'm not sure what to do next.

  15. Hi all,

    I deployed the RIC and the SMO in two different VMs and I cannot manage to establish a connection between them to validate the A1 workflows. I suspect the reason is that the RIC VM is not exposing its services to the outside world. Could anyone help me address the issue? For example, I would appreciate it if anyone with a setup like mine could share the "example_recipe.yaml" used to deploy the RIC platform.

    I paste here the output of the services running in the ricplt namespace, where all services show EXTERNAL-IP = <none>. Please let me know if this is the expected behavior.

    # kubectl get services -n ricplt
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    aux-entry ClusterIP 10.99.120.87 <none> 80/TCP,443/TCP 3d6h
    r4-infrastructure-kong-proxy NodePort 10.100.225.53 <none> 32080:32080/TCP,32443:32443/TCP 3d6h
    r4-infrastructure-prometheus-alertmanager ClusterIP 10.105.181.117 <none> 80/TCP 3d6h
    r4-infrastructure-prometheus-server ClusterIP 10.108.80.71 <none> 80/TCP 3d6h
    service-ricplt-a1mediator-http ClusterIP 10.111.176.180 <none> 10000/TCP 3d6h
    service-ricplt-a1mediator-rmr ClusterIP 10.105.1.73 <none> 4561/TCP,4562/TCP 3d6h
    service-ricplt-alarmadapter-http ClusterIP 10.104.93.39 <none> 8080/TCP 3d6h
    service-ricplt-alarmadapter-rmr ClusterIP 10.99.249.253 <none> 4560/TCP,4561/TCP 3d6h
    service-ricplt-appmgr-http ClusterIP 10.101.144.152 <none> 8080/TCP 3d6h
    service-ricplt-appmgr-rmr ClusterIP 10.110.71.188 <none> 4561/TCP,4560/TCP 3d6h
    service-ricplt-dbaas-tcp ClusterIP None <none> 6379/TCP 3d6h
    service-ricplt-e2mgr-http ClusterIP 10.103.232.14 <none> 3800/TCP 3d6h
    service-ricplt-e2mgr-rmr ClusterIP 10.105.41.119 <none> 4561/TCP,3801/TCP 3d6h
    service-ricplt-e2term-rmr-alpha ClusterIP 10.110.201.91 <none> 4561/TCP,38000/TCP 3d6h
    service-ricplt-e2term-sctp-alpha NodePort 10.106.243.219 <none> 36422:32222/SCTP 3d6h
    service-ricplt-jaegeradapter-agent ClusterIP 10.102.5.160 <none> 5775/UDP,6831/UDP,6832/UDP 3d6h
    service-ricplt-jaegeradapter-collector ClusterIP 10.99.78.42 <none> 14267/TCP,14268/TCP,9411/TCP 3d6h
    service-ricplt-jaegeradapter-query ClusterIP 10.100.166.185 <none> 16686/TCP 3d6h
    service-ricplt-o1mediator-http ClusterIP 10.101.145.78 <none> 9001/TCP,8080/TCP,3000/TCP 3d6h
    service-ricplt-o1mediator-tcp-netconf NodePort 10.110.230.66 <none> 830:30830/TCP 3d6h
    service-ricplt-rtmgr-http ClusterIP 10.104.100.69 <none> 3800/TCP 3d6h
    service-ricplt-rtmgr-rmr ClusterIP 10.98.40.180 <none> 4561/TCP,4560/TCP 3d6h
    service-ricplt-submgr-http ClusterIP None <none> 3800/TCP 3d6h
    service-ricplt-submgr-rmr ClusterIP None <none> 4560/TCP,4561/TCP 3d6h
    service-ricplt-vespamgr-http ClusterIP 10.102.12.213 <none> 8080/TCP,9095/TCP 3d6h
    service-ricplt-xapp-onboarder-http ClusterIP 10.107.167.33 <none> 8888/TCP,8080/TCP 3d6h


    This is the detail of the kong service in the RIC VM:

    # kubectl describe service r4-infrastructure-kong-proxy -n ricplt
    Name: r4-infrastructure-kong-proxy
    Namespace: ricplt
    Labels: app.kubernetes.io/instance=r4-infrastructure
    app.kubernetes.io/managed-by=Tiller
    app.kubernetes.io/name=kong
    app.kubernetes.io/version=1.4
    helm.sh/chart=kong-0.36.6
    Annotations: <none>
    Selector: app.kubernetes.io/component=app,app.kubernetes.io/instance=r4-infrastructure,app.kubernetes.io/name=kong
    Type: NodePort
    IP: 10.100.225.53
    Port: kong-proxy 32080/TCP
    TargetPort: 32080/TCP
    NodePort: kong-proxy 32080/TCP
    Endpoints: 10.244.0.26:32080
    Port: kong-proxy-tls 32443/TCP
    TargetPort: 32443/TCP
    NodePort: kong-proxy-tls 32443/TCP
    Endpoints: 10.244.0.26:32443
    Session Affinity: None
    External Traffic Policy: Cluster
    Events: <none>


    BR


    Daniel

  16. Deploy RIC using Recipe failed:

    When running './deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml' it failed with the following message:

    Successfully packaged chart and saved it to: /tmp/ric-common-3.3.2.tgz

    Successfully packaged chart and saved it to: /tmp/aux-common-3.0.0.tgz

    Successfully packaged chart and saved it to: /tmp/nonrtric-common-2.0.0.tgz

    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).

    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)

    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).

    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)

    Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).

    You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)

    ****************************************************************************************************************

                                                         ERROR                                                      

    ****************************************************************************************************************

    Can't locate the ric-common helm package in the local repo. Please make sure that it is properly installed.

    ****************************************************************************************************************

    Running `helm init` as suggested doesn't help. Any idea?
