VM Minimum Requirements for RIC


NOTE: sudo access is required for installation

Getting Started PDF

Step 1: Obtaining the Deployment Scripts and Charts

NOTE: cd to the directory where the installation will reside, e.g. /home/user

Run ...

$ sudo -i
$ git clone http://gerrit.o-ran-sc.org/r/it/dep -b bronze
$ cd dep
$ git submodule update --init --recursive --remote
Step 2: Generation of cloud-init Script 

This step will generate a script that will set up a one-node Kubernetes cluster for installing the SMO components.  The resultant script can run as the cloud-init script when launching the VM, or run at a root shell after the VM is launched.

Note: because the O-RAN SC Bronze SMO consists of components from the ONAP Frankfurt release, the infrastructure software stack (Docker, Kubernetes, and Helm) versions are those required by the ONAP deployment.  This is a different combination from what the Near RT RIC requires.

Run ...

$ cd tools/k8s/etc
# edit the infra.rc file to specify the versions of docker, kubernetes, and helm.
# For SMO, the versions are as required by the ONAP Frankfurt release.
# Comment out the lines below the heading of "RIC tested"; and un-comment
# the lines following the heading of "ONAP Frankfurt"

$ cd ../bin
$ ./gen-cloud-init.sh   # generates the infrastructure stack install script
# The generated script will be used to prepare the K8s cluster for the deployment.
# The output file is "k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh"
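
For reference, a minimal sketch of what the edited infra.rc might look like for the SMO stack is shown below. The exact version strings are assumptions inferred from the generated script name (k_1_15, h_2_16, d_18_09) and from combinations reported in the comments below; verify them against the "ONAP Frankfurt" lines in your own copy of infra.rc.

# example infra.rc values (illustrative only - confirm against the file's own "ONAP Frankfurt" section)
INFRA_DOCKER_VERSION="18.09.7"
INFRA_K8S_VERSION="1.15.9"
INFRA_CNI_VERSION="0.7.5"
INFRA_HELM_VERSION="2.16.6"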

Step 3: Installation of Kubernetes, Helm, Docker, etc.

Run ...

$ ./k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh

NOTE: Be patient as this takes some time to complete. Upon completion of this script, the VM will be rebooted.  You will then need to log in once again.

$ sudo -i
$ kubectl get pods --all-namespaces  # There should be 9 pods running in the kube-system namespace.
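
As a rough illustration (drawn from working deployments reported in the comments below), the kube-system pods on a healthy single-node cluster are roughly the following; exact names and suffixes will differ:

$ kubectl get pods -n kube-system
# Expect roughly: coredns (x2), etcd, kube-apiserver, kube-controller-manager,
# kube-flannel-ds, kube-proxy, kube-scheduler, and tiller-deploy - 9 pods in total,
# all in Running state.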
Step 4:  Deploy SMO

O-RAN Bronze SMO consists of components from three separate groups: ONAP, Non RT RIC, and RIC-AUX.  It is deployed by running a script under smo/bin.

Run ...

$ cd smo/bin
$ ./install

# If this is the first time SMO is installed on the SMO VM, it is suggested to run "./install initlocalrepo" instead.
# The extra argument also initializes the local Helm repo.

# Also, if this is the first time SMO is installed on the SMO VM, because it involves the preparation of the ONAP
# helm charts, the installation may take hours to finish.  It is suggested to run the install as a nohup command
# in the background, in case the connection to the VM is interrupted.

# Upon successful deployment, the "kubectl get pods --all-namespaces" command should show 8 pods in the nonrtric
# namespace, 27 pods/jobs in the onap namespace, and 2 pods in the ricaux namespace, all in Running or Completed state.
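
A minimal sketch of the background-run suggestion above (the log file name is illustrative):

$ cd smo/bin
$ nohup ./install initlocalrepo > smo-install.log 2>&1 &
$ tail -f smo-install.log    # watch progress; safe to disconnect from the VM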
Helpful Hints

Kubectl commands:

kubectl get pods -n namespace - lists the Pods running in that namespace

kubectl logs -n namespace name_of_running_pod - shows the logs of a running Pod
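
For example, to inspect the nonrtric namespace (the pod name below is a placeholder):

kubectl get pods -n nonrtric
kubectl logs -n nonrtric <name_of_running_pod>
kubectl describe pod -n nonrtric <name_of_running_pod>    # helpful for image pull errors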







159 Comments

  1. hello experts,

    after onap deployment, i have a pod not ready but completed, is it normal? thanks

    dev-sdnc-sdnrdb-init-job-6v525 0/1 Completed 0 18m 10.244.0.159 smobronze <none> <none>

    and is it possible to set the pullPolicy to IfNotPresent for all the charts of onap?

    1. That is okay.  You can see that it is actually a job, not a regular pod.  You may view its status by:

      kubectl get jobs -n onap

      or even: 

      kubectl get jobs -n onap dev-sdnc-sdnrdb-init-job -o json
      1. Anonymous

        Hi Lusheng,

        thanks for quick reply. Now I have smo fully deployed. thanks

        (10:57 dabs@smobronze bin) > sudo kubectl get jobs --all-namespaces
        NAMESPACE NAME COMPLETIONS DURATION AGE
        onap dev-sdnc-sdnrdb-init-job 1/1 34m 74m

  2. Anonymous

    Hi. I am new to O-RAN and trying to deploy SMO as well as RIC. I was successfully able to install SMO with the given steps but while deploying I keep on getting below messages:-

    ===> Deploying OAM (ONAP Lite)
    ======> Deploying ONAP-lite
    Error: unknown command "deploy" for "helm"
    Run 'helm --help' for usage.
    ======> Waiting for ONAP-lite to reach operatoinal state
    0/7 SDNC-SDNR pods and 0/7 Message Router pods running
    0/7 SDNC-SDNR pods and 0/7 Message Router pods running
    0/7 SDNC-SDNR pods and 0/7 Message Router pods running


    None of these pods are getting up and running for like hours, infact none of them are even present. Did I miss any step or do i need to install any component of ONAP before this?

    1. I had once encountered this problem:

      make sure deploy/undeploy plugins exist in /root/.helm:

      dabs@RICBronze:~/oran/dep/smo/bin/smo-deploy/smo-oom$sudo cp -R ./kubernetes/helm/plugins/ /root/.helm

      1. Thanks for prompt response.

        Even after copying the plugins, I am getting same messages. But I am getting initial error message as :-

        ===> Deploying OAM (ONAP Lite)
        ======> Deploying ONAP-lite
        fetching local/onap
        Error: chart "onap" matching version "" not found in local index. (try 'helm repo update'). no chart name found
        mv: cannot stat '/root/.helm/plugins/deploy/cache/onap/charts/*': No such file or directory
        mv: cannot stat '/root/.helm/plugins/deploy/cache/onap-subcharts/common': No such file or directory
        rm: cannot remove '/root/.helm/plugins/deploy/cache/onap/requirements.lock': No such file or directory
        mv: cannot stat '/root/.helm/plugins/deploy/cache/onap/requirements.yaml': No such file or directory
        Error: could not find a ready tiller pod
        release "dev" deployed
        Error: could not find a ready tiller pod
        Error: could not find a ready tiller pod
        ======> Waiting for ONAP-lite to reach operatoinal state
        0/7 SDNC-SDNR pods and 0/7 Message Router pods running
        0/7 SDNC-SDNR pods and 0/7 Message Router pods running


        Seems that the onap charts are not present. I even tried "helm repo update" but no luck. Can you share how to get them?

        1. Chirag,

          Did steps 1 through 3 complete without error for your installation?  It appears that your k8s cluster is not running correctly (i.e. the error msg complaining about the tiller pod not being in ready state).

          Lusheng


        2. Your tiller pod is not ready. 

          1st, make sure the tiller pod is in running state

          2nd, make sure helm service is in running state:

          sudo helm serve

          3rd, make sure 'helm search onap' has 35 charts listed. If not, you can manually issue the 'make all' command:

          dabs@smobronze:~/oran/dep/smo/bin/smo-deploy/smo-oom/kubernetes$ sudo make all


          1. Anonymous

            Zhengwei,

            I executed the instruction, sudo make all, but I got the following error.

            Downloading common from repo http://127.0.0.1:8879/charts
            Save error occurred: could not download http://127.0.0.1:8879/common-6.0.0.tgz: Get http://127.0.0.1:8879/common-6.0.0.tgz: dial tcp 127.0.0.1:8879: connect: connection refused
            Deleting newly downloaded charts, restoring pre-update state
            Error: could not download http://127.0.0.1:8879/common-6.0.0.tgz: Get http://127.0.0.1:8879/common-6.0.0.tgz: dial tcp 127.0.0.1:8879: connect: connection refused
            Makefile:61: recipe for target 'dep-robot' failed
            make[1]: *** [dep-robot] Error 1
            make[1]: Leaving directory '/root/dep/smo/bin/smo-deploy/smo-oom/kubernetes'
            Makefile:46: recipe for target 'robot' failed
            make: *** [robot] Error 2

            What can I do? Thank you.

            Tim

            1. Anonymous

              Have you performed the 2nd step?

              2nd, make sure helm service is in running state:

              sudo helm serve

              1. Anonymous

                Yes. But what response should I expect?

                When I executed the 2nd step, the terminal got stuck and showed this message.

                Regenerating index. This may take a moment.

                Now serving you on 127.0.0.1:8879

                Should I keep waiting?

                Thanks for help.

                1. Anonymous

                  Yes. You can just put 'sudo helm serve' in one terminal, and it's the helm SERVER; and go on with the 'sudo make all' in another terminal.

                  1. Anonymous

                    Wow, Thank you for your help

                    I think my problem has been resolved .

          2. Anonymous

            I did all the 3 steps: still, I am getting the same error


            root@xxxx:~/dep/smo/bin# sudo ./install
            ===> Starting at Tue Sep 8 07:42:45 UTC 2020

            ===> Cleaning up any previous deployment
            ======> Deleting all Helm deployments
            Error: command 'delete' requires a release name
            ======> Clearing out all ONAP deployment resources
            Error from server (NotFound): namespaces "onap" not found
            No resources found.
            error: resource(s) were provided, but no name, label selector, or --all flag specified
            error: resource(s) were provided, but no name, label selector, or --all flag specified
            ======> Clearing out all RICAUX deployment resources
            Error from server (NotFound): namespaces "ricaux" not found
            Error from server (NotFound): namespaces "ricinfra" not found
            ======> Clearing out all NONRTRIC deployment resources
            Error from server (NotFound): namespaces "nonrtric" not found
            ======> Preparing for redeployment
            node/ebtic07 not labeled
            node/ebtic07 not labeled
            node/ebtic07 not labeled
            ======> Preparing working directory

            ===> Deploying OAM (ONAP Lite)
            ======> Deploying ONAP-lite
            Error: unknown command "deploy" for "helm"
            Run 'helm --help' for usage.
            ======> Waiting for ONAP-lite to reach operatoinal state
            0/7 SDNC-SDNR pods and 0/7 Message Router pods running
            0/7 SDNC-SDNR pods and 0/7 Message Router pods running
            0/7 SDNC-SDNR pods and 0/7 Message Router pods running

            1. Anonymous

              Can anyone please help me here. The tiller pods is also up and running


              root@xxx07:~/.helm/plugins/deploy# kubectl get pods --all-namespaces
              NAMESPACE NAME READY STATUS RESTARTS AGE
              kube-system coredns-5d4dd4b4db-972pr 1/1 Running 1 26m
              kube-system coredns-5d4dd4b4db-hrhkn 1/1 Running 1 26m
              kube-system etcd-ebtic07 1/1 Running 1 25m
              kube-system kube-apiserver-ebtic07 1/1 Running 1 25m
              kube-system kube-controller-manager-ebtic07 1/1 Running 1 26m
              kube-system kube-flannel-ds-2z2vx 1/1 Running 1 26m
              kube-system kube-proxy-5shnx 1/1 Running 1 26m
              kube-system kube-scheduler-ebtic07 1/1 Running 1 25m
              kube-system tiller-deploy-666f9c57f4-rl2g9 1/1 Running 1 25m

            2. I had a similar error when I tried to re-install SMO after first installation (~/dep/smo/bin/uninstall && ~/dep/smo/bin/install).

              The only solution I found was to delete the whole dep/ folder and restart the installation process from scratch. It worked, but since the ONAP helm charts had to be generated again it took hours.

              1. How did it work?  The ~/dep/smo/bin/install script has an invalid helm command on line 185, from what I can tell:

                helm deploy dev local/onap --namespace onap -f ./override-oam.yaml

                'helm deploy' isn't a supported command.

                1. Indeed, deploy is not a vanilla helm command, it is a plugin developed by ONAP (https://wiki.onap.org/display/DW/OOM+Helm+%28un%29Deploy+plugins).

                  Also, it seems that the re-installation problem may come from the fact that install operations are performed directly after uninstall ones, while some modules may not be fully uninstalled yet.

                  You may try to add a sleep 20 just before the helm deploy command in the install script.
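
                  A minimal sketch of that change in dep/smo/bin/install (surrounding context assumed from the comments above):

                  # dep/smo/bin/install - give the uninstall a moment to clear old releases first
                  sleep 20
                  helm deploy dev local/onap --namespace onap -f ./override-oam.yaml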

            1. tiller pod is running:

            kube-system coredns-5d4dd4b4db-pvq4m 1/1 Running 1 6d21h
            kube-system coredns-5d4dd4b4db-wwdsb 1/1 Running 1 6d21h
            kube-system etcd-smo 1/1 Running 1 6d21h
            kube-system kube-apiserver-smo 1/1 Running 2 6d21h
            kube-system kube-controller-manager-smo 1/1 Running 14 6d21h
            kube-system kube-flannel-ds-vrgwh 1/1 Running 1 6d21h
            kube-system kube-proxy-cjhdn 1/1 Running 1 6d21h
            kube-system kube-scheduler-smo 1/1 Running 15 6d21h
            kube-system tiller-deploy-6658594489-h6zwt 1/1 Running 1 6d21h


            2. helm server is running:

            tcp 0 0 127.0.0.1:8879 0.0.0.0:* LISTEN 32431/helm


            3."helm search onap" has 35 charts listed:

            root@smo:/home/hongqun/oran/dep/smo/bin/smo-deploy/smo-oom# helm search onap | wc -l
            35


            4. when I run install script, I got the folloing errors:

            ======> Deploying ONAP-lite
            fetching local/onap
            Error: chart "onap" matching version "" not found in local index. (try 'helm repo update'). no chart name found
            mv: cannot stat '/root/.helm/plugins/deploy/cache/onap/charts/*': No such file or directory
            mv: cannot stat '/root/.helm/plugins/deploy/cache/onap-subcharts/common': No such file or directory
            rm: cannot remove '/root/.helm/plugins/deploy/cache/onap/requirements.lock': No such file or directory
            mv: cannot stat '/root/.helm/plugins/deploy/cache/onap/requirements.yaml': No such file or directory
            Error: no Chart.yaml exists in directory "/root/.helm/plugins/deploy/cache/onap"
            release "dev" deployed


            Could you give me some advices? Thank you.

            1. Hi Alex,

              Can you try the "helm deploy" command? You might get a "deploy plugin not found" error. If so, you need to do "helm plugin install ~/.helm/plugins/deploy". If the deploy plugin is not found there, you can download the plugins to ~/.helm/plugins/ from (

              https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/helm/plugins;hb=refs/heads/master

              ) and then try the install command again.


              Thanks,

              Naga chetan

              1. Hi Naga chetan K M

                Thank you. But the deploy and undeploy plugins are found; it is the charts that are not found in the cache folder.


                root@smo:~/.helm/plugins# tree .
                .
                ├── deploy
                │   ├── cache
                │   │   ├── onap
                │   │   │   ├── computed-overrides.yaml
                │   │   │   └── logs
                │   │   │   ├── dev.log
                │   │   │   └── dev.log.log
                │   │   └── onap-subcharts
                │   ├── deploy.sh
                │   └── plugin.yaml
                └── undeploy
                ├── plugin.yaml
                └── undeploy.sh

                6 directories, 7 files

                Thanks.

                1. Hi Lao,

                  If you are not seeing charts then you can try the below procedure, this is already listed in comments,

                  make sure 'helm search onap' has 35 charts listed. If not, you can manually issue the 'make all' command:

                  dabs@smobronze:~/oran/dep/smo/bin/smo-deploy/smo-oom/kubernetes$ sudo make all

                  Thanks,

                  Naga chetan

            2. Hi, I think SMO didn't deploy correctly the first time, so the corresponding folders were not copied to the .helm folder. Try deleting the smo-deploy folder and installing it again; that way it will start the installation from scratch. It can take several hours.

        3. What I found in my case is that the version of helm caused the issue - in step 2, I needed to set the version as 2.16.6 in infra.rc and things then worked fine. This is evident from the name of the script "

          k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh" (note the h_2_16 in it)
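
          In infra.rc that change would look roughly like this (value taken from the comment above):

          INFRA_HELM_VERSION="2.16.6"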
          1. Step 3 was going wrong. Helm was not getting initialized due to proxy server. Installed certificates and got it working.

            Thanks for the help (smile)

        4. Anonymous

          I also ran into this; how did you solve it?

  3. Anonymous

    Hello experts ,

    I followed it step by step but I got stuck at step 3.

    When I run the script, k8s-1node-cloud-init-k ... .sh, I received the following message.

    The connection to the server localhost:8080 was refused - did you specify the right host or port?

    I don't know if I did something wrong or missed something I need to pay attention to.

    How can I fix it? Thank you.

    Tim.



    1. Anonymous

      I met exactly the same problem.  I'd appreciate your help!

      1. Anonymous

        can you please share the full debug output when running the script?

        1. Anonymous

          The whole situation as you can see below.

          ++ eval 'kubectl get pods -n kube-system | grep "Running" | wc -l'

          +++ wc -l

          +++ grep Running

          +++ kubectl get pods -n kube-system

          The connection to the server localhost:8080 was refused - did you specify the right host or port?

          + NUMPODS=0

          + echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'

          > waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]

          + '[' 0 -lt 8 ']'

          + sleep 5


          Tim.

          1. Anonymous

            Hi,

            do you have the following info in the outputs? The 'localhost:8080 refused' problem generally relates to the KUBECONFIG configuration.

            + cd /root

            + rm -rf .kube

            + mkdir -p .kube

            + cp -i /etc/kubernetes/admin.conf /root/.kube/config

            + chown root:root /root/.kube/config

            + export KUBECONFIG=/root/.kube/config

            + KUBECONFIG=/root/.kube/config

            + echo KUBECONFIG=/root/.kube/config

            + kubectl get pods --all-namespaces

            No resources found.

            + kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

            podsecuritypolicy.policy/psp.flannel.unprivileged created

            clusterrole.rbac.authorization.k8s.io/flannel created

            clusterrolebinding.rbac.authorization.k8s.io/flannel created

            serviceaccount/flannel created

            configmap/kube-flannel-cfg created

            daemonset.apps/kube-flannel-ds-amd64 created

            daemonset.apps/kube-flannel-ds-arm64 created

            daemonset.apps/kube-flannel-ds-arm created

            daemonset.apps/kube-flannel-ds-ppc64le created

            daemonset.apps/kube-flannel-ds-s390x created

            1. Anonymous

              But there is no admin.conf under /etc/kubernetes/

              1. Anonymous

                I suggest re-running the script and making sure 'kubeadm init' is successful:

                sudo ./k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh

                1. Anonymous

                  'kubeadm init' has the following error:


                  I0818 01:09:09.191921 32088 version.go:248] remote version is much newer: v1.18.8; falling back to: stable-1.15
                  [init] Using Kubernetes version: v1.15.12
                  [preflight] Running pre-flight checks
                  error execution phase preflight: [preflight] Some fatal errors occurred:
                  [ERROR Swap]: running with swap on is not supported. Please disable swap
                  [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

                  1. Anonymous

                    [ERROR Swap]: running with swap on is not supported. Please disable swap

                    disable swap and re-run the script.

                    sudo swapoff -a
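
                    Note that swapoff only lasts until the next reboot; since the install script reboots the VM, you may also want to comment out the swap entry in /etc/fstab, for example:

                    sudo swapoff -a
                    sudo sed -i.bak '/swap/ s/^/#/' /etc/fstab    # comments out swap entries (backup kept as /etc/fstab.bak)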

                      1. Thank you for the kind help!  But after disabling swap and re-running the script, 'kubeadm init' reports a new error:


                      I0819 12:54:05.234698 10134 version.go:251] remote version is much newer: v1.18.8; falling back to: stable-1.16
                      [init] Using Kubernetes version: v1.16.14
                      [preflight] Running pre-flight checks
                      [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.6. Latest validated version: 18.09
                      error execution phase preflight: [preflight] Some fatal errors occurred:
                      [ERROR Port-6443]: Port 6443 is in use
                      [ERROR Port-10251]: Port 10251 is in use
                      [ERROR Port-10252]: Port 10252 is in use
                      [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
                      [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
                      [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
                      [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
                      [ERROR Port-10250]: Port 10250 is in use
                      [ERROR Port-2379]: Port 2379 is in use
                      [ERROR Port-2380]: Port 2380 is in use
                      [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
                      [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
                      To see the stack trace of this error execute with --v=5 or higher

                      1. Anonymous

                        it seems that you already have an incomplete k8s cluster; try:

                        turn off swap and re-run the script, 'sudo ./k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh', which will perform 'kubeadm reset' to clear any existing k8s cluster.

                      2. Anonymous

                        Hi Jensen,

                        I am facing the exact same issue. I know that this is after a very long time, but did you figure out the solution for it? your help would be much appreciated.

                        1. Check for errors in the script output before the localhost error; in my case it was a docker version error, and I had to change the version in the k8s-1node-cloud-init-k_1_15-h_2_17-d_19_03.sh file.

                          I have these versions in the file:

                          echo "19.03.6" > /opt/config/docker_version.txt
                          echo "1.15.9" > /opt/config/k8s_version.txt
                          echo "0.7.5" > /opt/config/k8s_cni_version.txt
                          echo "2.17.0" > /opt/config/helm_version.txt


                          Also, for this docker version you have to change the Ubuntu version to "18.04.02" in the file; if not, it won't find the package in the repository:

                          elif [[ ${UBUNTU_RELEASE} == 18.* ]]; then
                          echo "Installing on Ubuntu $UBUNTU_RELEASE (Bionic Beaver)"
                          if [ ! -z "${DOCKERV}" ]; then
                          DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3" #THIS IS THE LINE I MODIFIED
                          1. Hi,

                            If anyone still has this problem after using Javi G's combinations, try

                            $ sudo apt-get upgrade

                            $ sudo apt-get update

                            before step 3.

                            1. Anonymous

                              This still doesn't work. Do you have updated scripts for the latest versions of docker and helm?

                        2. Anonymous

                          Hi Jensen,


                          Please, have you solved this issue yet?


                          I'm having this exact same issue.

  4. Anonymous

    Hello there,

    In step-2, when I execute the script, "gen-cloud-init.sh" after editing the 'infra.rc' file, it throws "cannot execute binary file" output as follows:


    ./gen-cloud-init.sh: line 39: source: /home/airvana/dep/tools/k8s/bin/../etc/.infra.rc.swo: cannot execute binary file

    ./gen-cloud-init.sh: line 39: source: /home/airvana/dep/tools/k8s/bin/../etc/.infra.rc.swl: cannot execute binary file

    ./gen-cloud-init.sh: line 39: source: /home/airvana/dep/tools/k8s/bin/../etc/.infra.rc.swp: cannot execute binary file

    ./gen-cloud-init.sh: line 39: source: /home/airvana/dep/tools/k8s/bin/../etc/.infra.rc.swn: cannot execute binary file

    ./gen-cloud-init.sh: line 39: source: /home/airvana/dep/tools/k8s/bin/../etc/.infra.rc.swm: cannot execute binary file


    If I assume that is normal and move on to step-3 to execute "k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh",  It throws the following errors:


    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    + kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    unable to recognize "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused


    And then following, 


    +++ kubectl get pods -n kube-system
    +++ grep Running
    +++ wc -l
    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    + NUMPODS=0
    + echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
    > waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
    + '[' 0 -lt 8 ']'


    I am a novice in these technologies, your help would be much appreciated.

    Thank you.

    1. Anonymous

      Hi! I have a similar problem,

      error execution phase preflight: docker is required for container runtime: exec: "docker": executable file not found in $PATH

      + cd /root

      + rm -rf .kube

      + mkdir -p .kube

      + cp -i /etc/kubernetes/admin.conf /root/.kube/config

      cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory

      + chown root:root /root/.kube/config

      chown: cannot access '/root/.kube/config': No such file or directory

      + export KUBECONFIG=/root/.kube/config

      + KUBECONFIG=/root/.kube/config

      + echo KUBECONFIG=/root/.kube/config

      + kubectl get pods --all-namespaces

      The connection to the server localhost:8080 was refused - did you specify the right host or port?

      if someone can help me, it would be incredible!

      thanks!

      1. Seems like docker is not getting installed.. Recently, I am facing an issue installing docker.io 18.09.7 over ubuntu 18.04.4... try changing docker version to empty ("") instead of 18.09 in infra.rc file.. It should run fine but not sure if you will face any other issues later on, as it is not tested
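
        In infra.rc that would look roughly like this (an empty value lets the script install whatever docker.io version apt currently provides):

        INFRA_DOCKER_VERSION=""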

        1. Anonymous

          This workaround solves the problem, thank you very much

      2. Anonymous

        Please make sure your VM has more than 2 CPUs, otherwise docker.io will not be installed at all. I have tried ubuntu 18.04.1 to 18.04.4; the ubuntu version is not the problem. Going with the latest docker can help you get the 35 charts, but then you will find that ONAP-lite cannot reach operational state. I feel that we need to wait for an update...

    2. Use helm version 2.17.0, it doesn't give this error anymore. 

      You can change it in the k8s... file in this line: echo "2.17.0" > /opt/config/helm_version.txt

      or in the "infra.rc" file in step 2 and repeat the steps to generate the new k8s file. 

      1. Javi G,

        Does using helm version 2.17.0 solve the following problem?

        "The connection to the server localhost:8080 was refused - did you specify the right host or port?"

        Thank you.

  5. Hi,

    After deploying SMO, how can I open O1 dashboard? I tried opening http://localhost:8181/odlux/index.html but receiving "connection refused". Can anyone please help?

  6. Dear all,

    I am getting the following error:

    Creating /root/.helm/repository/repositories.yaml
    Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
    Error: error initializing: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden

    It seems some URLs may not be valid anymore?

    Any help would be appreciated.

    BR

    Daniel



    1. This error can be fixed by using helm 2.17.0 as indicated by Javi G below.

      BR

      Daniel

  7. Anonymous

    Hi 

    Can any of us as normal users (outside this PL team) really make this up and running?!???????

    Full of errors, and even more errors after checking and applying this community's "answers".


    Very disappointed. Sorry, but this is true !!!

    1. Yeah a lot of errors but finally got it working... Which errors do you have?

      1. Anonymous

        (sad)

        Its: " waiting for 0/8 pods running in namespaces.....

        ... Sleep 5"


        I think it has slept for weeks, not 5 secs

      2. Anonymous

        Good Updates: After >10 times, I finally came up with this combination, and it worked for me:

        • My Ubuntu Desktop: 18.04.2
        • I followed the initial instruction above
        • And I followed Javi G recommendations to go to "k8s..." file to change the below:
          • Use: "19.03.6" for docker: echo "19.03.6" > /opt/config/docker_version.txt

          • Use: "2.17.0" for heml
          • DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3 ### (Note that: I entered: 18.04.3 even though my  Ubuntu Desktop is: 18.04.2)

        Thank you Javi G.

        I hope the newcomers can make this work, or try to use the above combination to make it work


        1. mhz

          Hi

          I am facing the same problem:

          The connection to the server localhost:8080 was refused - did you specify the right host or port?

          + NUMPODS=0

          + echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'

          > waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]

          + '[' 0 -lt 8 ']'

          + sleep 5


          I have implemented all suggested solutions in the previous comments as well as the approach used in https://blog.csdn.net/jeffyko/article/details/107030704?ops_request_misc=%257B%2522request%255Fid%2522%253A%2522162932197216780261968421%2522%252C%2522scm%2522%253A%252220140713.130102334..%2522%257D&request_id=162932197216780261968421&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~all~sobaiduend~default-1-107030704.first_rank_v2_pc_rank_v29&utm_term=O-RAN+notes%282%29&spm=1018.2226.3001.4187.


          But still have the same issue. Could you please help?

          Thanks


  8. Hi,

    I am following deployment steps for SMO. Upon running the k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh script, I come across the following error. Kindly share the solution if anyone has found it.


    E: Version '18.09.7-0ubuntu1~18.04.4' for 'docker.io' was not found
    + cat
    ./k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh: line 156: /etc/docker/daemon.json: No such file or directory
    + mkdir -p /etc/systemd/system/docker.service.d
    + systemctl enable docker.service
    Failed to enable unit: Unit file docker.service does not exist.

    1. Change this  in the k8s... file: 

      echo "19.03.6" > /opt/config/docker_version.txt


      Also, for this docker version you have to change the Ubuntu version to "18.04.02" in the file; if not, it won't find the package in the repository:

      elif [[ ${UBUNTU_RELEASE} == 18.* ]]; then
      echo "Installing on Ubuntu $UBUNTU_RELEASE (Bionic Beaver)"
      if [ ! -z "${DOCKERV}" ]; then
      DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3" #THIS IS THE LINE I MODIFIED

      Not every docker version can be found in the repository, so I found that version 19.03.6 for ubuntu 18.04.3 is available and works well. The 18.04.2 worked before but throws another error now.

      1. Many thanks that worked.

  9. Hello,

    I have come across the following error while installing SMO.  if anyone has a solution for it, kindly share the same.

    Creating /root/.helm/repository/repositories.yaml
    Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
    Error: error initializing: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden
    + helm init -c
    Creating /root/.helm/repository/repositories.yaml
    Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
    Error: error initializing: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden


    Regards,

    Pavan

    1. I ran into this issue as well and this command solved it:

      helm init --stable-repo-url=https://charts.helm.sh/stable --client-only 

      Source: helm init failed is not a valid chart repository or cannot be reached: Failed to fetch 403 Forbidden - Stack Overflow
      1. Hi Travis, I have tried your command 'helm init --stable-repo-url=https://charts.helm.sh/stable --client-only' but face the same issue. Is it required to rerun everything from the start?

        1. I believe so, yes. If that command still does not work, try another of the commands mentioned in the link I posted. I'm assuming this happened after running the k8s script?

          1. Yes, this happened after running the k8s script. I am trying the second command mentioned in the link, will update.


          2. Hi Travis, did you run this command separately or was the k8s script modified? If so, kindly share the change made in the k8s file.

              1. Using helm version 2.17.0 worked. Thank you for your help.

    2. Try using helm version 2.17.0. 

      You can change it in the k8s... file in this line: echo "2.17.0" > /opt/config/helm_version.txt

      or in the "infra.rc" file in step 2 and repeat the steps to generate the new k8s file. 

      1. Thank you Javi, that worked.

  10. Dear all,

    After following the indicated steps (using helm 2.17.0 to avoid an error), the deployment of dev-sdnc fails and I get stuck here:


    ======> Deploying ONAP-lite
    fetching local/onap
    release "dev" deployed
    release "dev-consul" deployed
    release "dev-dmaap" deployed
    release "dev-mariadb-galera" deployed
    release "dev-msb" deployed
    release "dev-sdnc" deployed
    dev-sdnc 1 Sat Jan 9 12:18:06 2021 FAILED sdnc-6.0.0 onap
    ======> Waiting for ONAP-lite to reach operatoinal state
    0/7 SDNC-SDNR pods and 0/7 Message Router pods running
    0/7 SDNC-SDNR pods and 0/7 Message Router pods running
    0/7 SDNC-SDNR pods and 0/7 Message Router pods running

    ...


    Any help would be appreciated.

    BR


    Daniel

    1. Hi Daniel,

      Please can you check whether the docker image is getting pulled or not (kubectl describe po <pod name> -n onap)? I faced the same issue and saw the docker image was not getting pulled, because only 100 pulls are allowed every 6 hours in docker ce and k8s sometimes exceeds that many tries.


      Thanks,

      Naga chetan

        Hi Naga,


        There seems to be an error when pulling the image. In fact for many of the onap services.

        I tried restarting the installation process from scratch and it failed again at the same place. How did you manage to solve it?


        This is what I get when I list the pods.

        onap dev-consul-68d576d55c-t4cp9 1/1 Running 0 124m
        onap dev-consul-server-0 1/1 Running 0 124m
        onap dev-consul-server-1 1/1 Running 0 123m
        onap dev-consul-server-2 1/1 Running 0 122m
        onap dev-kube2msb-9fc58c48-qlw5q 0/1 Init:ErrImagePull 0 124m
        onap dev-mariadb-galera-0 0/1 Init:ImagePullBackOff 0 124m
        onap dev-message-router-0 0/1 Init:ImagePullBackOff 0 124m
        onap dev-message-router-kafka-0 0/1 Init:ErrImagePull 0 124m
        onap dev-message-router-kafka-1 0/1 Init:ImagePullBackOff 0 124m
        onap dev-message-router-kafka-2 0/1 Init:ImagePullBackOff 0 124m
        onap dev-message-router-zookeeper-0 0/1 Init:ImagePullBackOff 0 124m
        onap dev-message-router-zookeeper-1 0/1 ImagePullBackOff 0 124m
        onap dev-message-router-zookeeper-2 0/1 ErrImagePull 0 124m
        onap dev-msb-consul-65b9697c8b-ldpqp 1/1 Running 0 124m
        onap dev-msb-discovery-54b76c4898-2z7vp 0/2 ErrImagePull 0 124m
        onap dev-msb-eag-76d4b9b9d7-l5m75 0/2 Init:ErrImagePull 0 124m
        onap dev-msb-iag-65c59cb86b-lsjkl 0/2 Init:ErrImagePull 0 124m
        onap dev-sdnc-0 0/2 Init:ErrImagePull 0 124m
        onap dev-sdnc-db-0 0/1 Init:ImagePullBackOff 0 124m
        onap dev-sdnc-dmaap-listener-5c77848759-r2fnp 0/1 Init:ImagePullBackOff 0 124m
        onap dev-sdnc-sdnrdb-init-job-dmq6r 0/1 Init:Error 0 124m
        onap dev-sdnc-sdnrdb-init-job-wlvqp 0/1 Init:ImagePullBackOff 0 102m
        onap dev-sdnrdb-coordinating-only-9b9956fc-fzwbh 0/2 Init:ImagePullBackOff 0 124m
        onap dev-sdnrdb-master-0 0/1 Init:ErrImagePull 0 124m


        Output of kubectl describe po dev-sdnc-0 -n onap:

        ...

        Events:
        Type Reason Age From Message
        ---- ------ ---- ---- -------
        Normal BackOff 23m (x59 over 104m) kubelet, vagrant Back-off pulling image "oomk8s/readiness-check:2.2 .1"
        Warning Failed 15m (x12 over 104m) kubelet, vagrant Error: ErrImagePull
        Warning Failed 13m (x78 over 104m) kubelet, vagrant Error: ImagePullBackOff
        Warning Failed 4m56s (x13 over 104m) kubelet, vagrant Failed to pull image "oomk8s/readiness-check:2.2.1 ": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io: Temporary failure in name resolution

        BR


        Daniel



    2. I had the exact same logs. Then I found out those 'release "xxx" deployed' were not deployed at all. You might want to check the deploy logs, which are located in ~/.helm/plugin/deploy/cache/*/logs . In my case, it complained about 'no matches for kind "StatefulSet" in version'. It was solved after I switched the K8S version to 1.15.9.

      This is infra.rc works for me:
      INFRA_DOCKER_VERSION=""
      INFRA_HELM_VERSION="2.17.0"
      INFRA_K8S_VERSION="1.15.9"
      INFRA_CNI_VERSION="0.7.5"

      I left docker version blank so I don't have to edit the script as mentioned above. 19.03.6 for ubuntu 18.04.3 is installed at this time.

      1. Thanks Chris. I tried a fresh installation with your infra.rc configuration but still got the same error.

        BR

        Daniel



        1. Hi Daniel,

          The last error says there is a name resolution failure; you can fix this by adding "nameserver 8.8.8.8" to the /etc/resolv.conf file.
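
          For example (8.8.8.8 is just a public resolver; use whichever DNS server you prefer):

          echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolv.conf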


          Thanks,

          Naga chetan

  11. Hi All,


    Is there a slack group for discussions ?


    Thanks,

    Naga chetan

  12. Hello,

    SMO installation was successful. I have run it on AWS instance. Any idea how one can check out SMO functioning? I mean any curl command or web page that can be tried?

    Pavan

    1. Hi, I'm not sure about that. What I did was to deploy a RIC in another VM and configure it to work with the nonrtric; that way you can know that both the nonrtric and the ric are working...

    2. Hi Pavan,

         What do you want to check in SMO? Normally we use DMAAP and VES in SMO. Those can be curled and checked.

  13. Anonymous

    Good Updates: After >10 times, I finally came up with this combination, and it worked for me:

    • My Ubuntu Desktop: 18.04.2
    • I followed the initial instruction above
    • And I followed Javi G recommendations to go to "k8s..." file to change the below:
      • Use: "19.03.6" for docker: echo "19.03.6" > /opt/config/docker_version.txt

      • Use: "2.17.0" for heml
      • DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3 ### (Note that: I entered: 18.04.3 even though my  Ubuntu Desktop is: 18.04.2)

    Thank you Javi G.

    I hope the newcomers can make this work, or try to use the above combination to make it work

  14. Anonymous

    After Step 1 and before Step #2, change the following files:

    dep/tools/k8s/etc/infra.rc
    # RIC tested
    INFRA_DOCKER_VERSION="19.03.6"
    INFRA_HELM_VERSION="2.17.0"
    INFRA_K8S_VERSION="1.15.9"
    INFRA_CNI_VERSION="0.7.5"


    dep/tools/k8s/heat/scripts/k8s_vm_install.sh

    .

    .

    .

    .

    .

    .

    .

    elif [[ ${UBUNTU_RELEASE} == 18.* ]]; then
      echo "Installing on Ubuntu $UBUNTU_RELEASE (Bionic Beaver)"
      if [ ! -z "${DOCKERV}" ]; then
        DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3"
      fi

    .

    .



    dep/smo/bin/install

    .

    .

    if [ "$1" == "initlocalrepo" ]; then
      echo && echo "===> Initialize local Helm repo"
      rm -rf ~/.helm #&& helm init -c  # without this step helm serve may not work.
      helm init --stable-repo-url=https://charts.helm.sh/stable --client-only
      helm serve &
      helm repo add local http://127.0.0.1:8879
    fi

    .

    .


    Then run Step #2, Step #3 and Step #4 normally.

    1. Anonymous

      OS was 18.04.5 LTS (Bionic Beaver)

      1. Anonymous

        Can anyone merge these changes to github

  15. Anonymous

    Hi,

       I need help running dcaemod using SMO Lite. Can I know which other ONAP components should be enabled for running dcaemod? Is there any dependency on other modules of ONAP?

  16. Anonymous

    I've started to deploy SMO with Ubuntu 20.04 desktop, but an error occurs when trying to install SMO.

    Is this latest Ubuntu version not allowed?

    1. I have tried with Ubuntu 18.04 and it worked. Try switching to this version or share the error it reports.

    2. Hi,

      I don't think the bronze version supports ubuntu 20.04.

      switching to 18.04 is a good idea.

  17. Anonymous

    hi every,

    I executed this instruction, kubectl get pods --all-namespaces,

    but I got one strange entry, which I have tagged in red.

    Could anyone help me to solve this problem?

    NAMESPACE NAME READY STATUS RESTARTS AGE

    kube-system coredns-5d4dd4b4db-82gnh 1/1   Running   2 128m

    kube-system coredns-5d4dd4b4db-8mrqh 1/1   Running   2 128m

    kube-system etcd-smo-virtualbox 1/1   Running   2 127m

    kube-system kube-apiserver-smo-virtualbox 1/1   Running   2 127m

    kube-system kube-controller-manager-smo-virtualbox 1/1   Running   2 127m

    kube-system kube-flannel-ds-9xr8x 1/1   Running   2 128m

    kube-system kube-proxy-lwmnh 1/1 Running 2 128m

    kube-system kube-scheduler-smo-virtualbox 1/1 Running 2 127m

    kube-system tiller-deploy-7c54c6988b-jgrqg 1/1 Running 2 127m

    nonrtric a1-sim-osc-0 1/1 Running 0 16m

    nonrtric a1-sim-osc-1 1/1 Running 0 4m47s

    nonrtric a1-sim-std-0 1/1 Running 0 16m

    nonrtric a1-sim-std-1 1/1 Running 0 5m6s

    nonrtric a1-sim-std2-0 1/1 Running 0 16m

    nonrtric a1-sim-std2-1 1/1 Running 0 15m

    nonrtric a1controller-64c4f59fb5-tdllp 1/1 Running 0 16m

    nonrtric controlpanel-78f957844b-nb5qc 1/1 Running 0 16m

    nonrtric db-549ff9b4d5-58rmc 1/1 Running 0 16m

    nonrtric enrichmentservice-7559b45fd-2wb7p 1/1 Running 0 16m

    nonrtric nonrtricgateway-6478f59b66-wjjkz 1/1 Running 0 16m

    nonrtric policymanagementservice-79f8f76d8f-ch46v 1/1 Running 0 16m

    nonrtric rappcatalogueservice-5945c4d84b-7w7cc 1/1 Running 0 16m

    onap dev-consul-68d576d55c-l2n2q 1/1 Running 0 53m

    onap dev-consul-server-0 1/1 Running 0 53m

    onap dev-consul-server-1 1/1 Running 0 52m

    onap dev-consul-server-2 1/1 Running 0 52m

    onap dev-kube2msb-9fc58c48-s5g98 1/1 Running 0 52m

    onap dev-mariadb-galera-0 1/1 Running 0 53m

    onap dev-mariadb-galera-1 1/1 Running 0 46m

    onap dev-mariadb-galera-2 1/1 Running 0 39m

    onap dev-message-router-0 1/1 Running 0 53m

    onap dev-message-router-kafka-0 1/1 Running 0 53m

    onap dev-message-router-kafka-1 1/1 Running 0 53m

    onap dev-message-router-kafka-2 1/1 Running 0 53m

    onap dev-message-router-zookeeper-0 1/1 Running 0 53m

    onap dev-message-router-zookeeper-1 1/1 Running 0 53m

    onap dev-message-router-zookeeper-2 1/1 Running 0 53m

    onap dev-msb-consul-65b9697c8b-6hhck 1/1 Running 0 52m

    onap dev-msb-discovery-54b76c4898-xz7rq 2/2 Running 0 52m

    onap dev-msb-eag-76d4b9b9d7-jfms9 2/2 Running 0 52m

    onap dev-msb-iag-65c59cb86b-864jr 2/2 Running 0 52m

    onap dev-sdnc-0 2/2 Running 0 52m

    onap dev-sdnc-db-0 1/1 Running 0 52m

    onap dev-sdnc-dmaap-listener-5c77848759-svmzg 1/1 Running 0 52m

    onap dev-sdnc-sdnrdb-init-job-d6rjm 0/1 Completed 0 41m

    onap dev-sdnc-sdnrdb-init-job-qkdl8 0/1 Init:Error 0 52m

    onap dev-sdnrdb-coordinating-only-9b9956fc-zjqkg 2/2 Running 0 52m

    onap dev-sdnrdb-master-0 1/1 Running 0 52m

    onap dev-sdnrdb-master-1 1/1 Running 0 40m

    onap dev-sdnrdb-master-2 1/1 Running 0 38m

    ricaux bronze-infra-kong-68657d8dfd-d5bb4 2/2 Running 1 4m57s

    ricaux deployment-ricaux-ves-65db844758-td2zc 1/1 Running 0 4m47s


    BR,

    Tim

  18. Anonymous

    There seems to be a mismatch in the VM minimum requirements on this page and the PDF linked in that section. The RAM required is mentioned as 32 GB and the CPU cores as 8; however, the "Getting Started PDF" mentions the required RAM as 16 GB and CPU cores as 4. So how much minimum RAM and CPU cores does the SMO VM actually require? Also, if these are the minimum requirements, what are the standard or recommended VM requirements? I suspect that the Getting Started PDF for Near Realtime RIC installation has been mistakenly linked here. Could someone confirm this?


    Thanks,

    Vikas Krishnan

  19. Hello guys,

    In step 3, I have this problem:


    ++ eval 'kubectl get pods -n kube-system | grep "Running" | wc -l'

    +++ wc -l

    +++ grep Running

    +++ kubectl get pods -n kube-system

    The connection to the server localhost:8080 was refused - did you specify the right host or port?

    + NUMPODS=0

    + echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'

    > waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]

    + '[' 0 -lt 8 ']'

    + sleep 5


    I tried to solve this problem by reading the comments related to this subject but this issue still remains...
    My Ubuntu version is 18.04.1 LTS and I followed the previous steps.

    Can someone help please?!

    Rafael

    1. I just fiddled around with infra.rc and it eventually worked for me. This is what I had:


      #INFRA_DOCKER_VERSION=""

      #INFRA_K8S_VERSION="1.15.9"

      #INFRA_CNI_VERSION="0.7.5"

      #INFRA_HELM_VERSION="2.17.0"


  20. Hi All

    I am trying to install cherry release. However compared to bronze I don't see "dep/smo/bin" folder anymore. So I can't run the install command. Can anyone suggest alternative? All steps till step-3 is successful, only step-4 is not valid for cherry release.

    1. Hello Anjan

      How can you install the cherry release? 

      In step 1 you just made this change (cherry) in the following command?

      $ git clone http://gerrit.o-ran-sc.org/r/it/dep -b cherry


      SMO installation was successful with the bronze release... so I can't answer your question, I am sorry.

      Rafael

      1. Yes, that's exactly what I did; I also changed the infra.rc file. I am also able to run the RIC related services on a separate machine, but it is the SMO that is not getting installed. What are the steps to upgrade from the bronze to the cherry release?

        1. Here is how i resolved the issue.

          unlike Bronze, Cherry doesn't have a single script (dep/smo/bin/install) which installs onap, nonrtric & ric-aux. Instead they need to be installed individually.

          1. How did you resolve the issue? Could you elaborate what you did to install the onap, nonrtric & rix-aux individually?

            1. The Cherry installation process differs from bronze in many ways, as below; however, it still follows many of the same steps as bronze


              1. Use below clone command
              git clone http://gerrit.o-ran-sc.org/r/it/dep -b cherry

              2. Update “dep/tools/k8s/etc/infra.rc”
              incorporating below versions.
              INFRA_DOCKER_VERSION=""
              INFRA_K8S_VERSION="1.15.9"
              INFRA_CNI_VERSION="0.7.5"
              INFRA_HELM_VERSION="2.17.0"
              3. Update Ubuntu Version in “dep/tools/k8s/bin/k8s-1node-cloud-init-k_1_15-h_2_17-d_cur.sh” as
              elif [[ ${UBUNTU_RELEASE} == 18.* ]]; then
              echo "Installing on Ubuntu $UBUNTU_RELEASE (Bionic Beaver)"
              if [ ! -z "${DOCKERV}" ]; then
              DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3"
              fi

              Note: Unlike Bronze, in Cherry there is no comprehensive installation file “dep/smo/bin/install” which install all SMO components i.e. ONAP , NONRTRIC, RIC-AUX & RIC-INFRA; instead they need to be manually installed in the order of
              ONAP -> NONRTRIC -> RIC-AUX.

              4. Install ONAP first, this would take a very long time, up to 6-7 hours, don’t give up!
              Uncomment all commented sections in file "dep/tools/onap/install"
              cd dep/tools/onap/
              ./install initlocalrepo

              It's quite possible that the execution will get stuck at the line below:
              4/7 SDNC-SDNR pods and 7/7 Message Router pods running


              In such a situation, stop the execution and proceed to manually perform the remaining steps of the script, which are basically the steps below. Otherwise please ignore steps 5-6, which are automatically part of the install script.


              5. Install non-RT-RIC
              a. Update “baseUrl” in “dep/nonrtric/RECIPE_EXAMPLE/example_recipe.yaml”
              b. cd dep/bin
              c. ./deploy-nonrtric -f ~/dep/nonrtric/RECIPE_EXAMPLE/example_recipe.yaml
              d. Wait until “kubectl get pods -n nonrtric” shows all pods running

              6. Install ric-aux & ric-infra

              kubectl create ns ricinfra
              cd ~/dep/ric-aux/helm/infrastructure
              helm dep update
              cd ../
              helm install -f ~/dep/RECIPE_EXAMPLE/AUX/example_recipe.yaml --name cherry-infra --namespace ricaux ./infrastructure
              cd ves/
              helm dep update
              cd ..
              helm install -f ~/dep/RECIPE_EXAMPLE/AUX/example_recipe.yaml --name cherry-ves --namespace ricaux ./ves
              Wait until the ric-aux & ric-infra pods are up & running; verify using “kubectl get pods --all-namespaces”

              1. Thanks a lot for the guidance! The installation was a breeze following the steps you mentioned.

              2. Anonymous

                Hi, Is it possible to use another orchestrator instead of ONAP? (In step 4 in this instruction.)

                thank you

              3. Oh Anjan, This was smooth for me. Did the simulation on a couple of VMs. It took 6 hours but worked well.

                Information on some URLs that we can use to test, or some services that we can expose to see if all is well, will complete this set of instructions.

                Thank You.

                Vineeth

              4. Hi Team,

                While using the above cherry installation, I am facing some issues when checking with the below

                "kubectl get pods --all-namespaces"

                kube-system coredns-5d4dd4b4db-fbj8d 1/1 Running 1 3d
                kube-system coredns-5d4dd4b4db-mwrld 1/1 Running 1 3d
                kube-system etcd-ubuntu 1/1 Running 3 3d
                kube-system kube-apiserver-ubuntu 1/1 Running 2 3d
                kube-system kube-controller-manager-ubuntu 1/1 Running 1 3d
                kube-system kube-flannel-ds-rpqjt 1/1 Running 1 3d
                kube-system kube-proxy-dnkbw 1/1 Running 1 3d
                kube-system kube-scheduler-ubuntu 1/1 Running 1 3d
                kube-system tiller-deploy-7c54c6988b-x4tkm 1/1 Running 1 3d
                nonrtric a1-sim-osc-0 0/1 ImagePullBackOff 0 2d20h
                nonrtric a1-sim-std-0 0/1 ErrImagePull 0 2d20h
                nonrtric a1controller-64c4f59fb5-dbt4q 0/1 ImagePullBackOff 0 2d20h
                nonrtric controlpanel-5ff4849bf-fp5bm 0/1 ImagePullBackOff 0 2d20h
                nonrtric db-549ff9b4d5-rqbjp 1/1 Running 0 2d20h
                nonrtric enrichmentservice-8578855744-hqxww 0/1 ImagePullBackOff 0 2d20h
                nonrtric policymanagementservice-77c59f6cfb-lwrzw 0/1 ImagePullBackOff 0 2d20h
                nonrtric rappcatalogueservice-5945c4d84b-2z875 0/1 ImagePullBackOff 0 2d20h
                onap dev-consul-68d576d55c-wdnz6 1/1 Running 0 2d21h
                onap dev-consul-server-0 1/1 Running 0 2d21h
                onap dev-consul-server-1 1/1 Running 0 2d19h
                onap dev-consul-server-2 1/1 Running 0 2d19h
                onap dev-kube2msb-9fc58c48-vrckm 0/1 Init:0/1 405 2d21h
                onap dev-mariadb-galera-0 0/1 Init:ImagePullBackOff 0 2d21h
                onap dev-message-router-0 0/1 Init:0/1 406 2d21h
                onap dev-message-router-kafka-0 0/1 Init:1/4 405 2d21h
                onap dev-message-router-kafka-1 0/1 Init:1/4 405 2d21h
                onap dev-message-router-kafka-2 0/1 Init:1/4 405 2d21h
                onap dev-message-router-zookeeper-0 0/1 ImagePullBackOff 0 2d21h
                onap dev-message-router-zookeeper-1 0/1 ImagePullBackOff 0 2d21h
                onap dev-message-router-zookeeper-2 0/1 ImagePullBackOff 0 2d21h
                onap dev-msb-consul-65b9697c8b-zdktr 1/1 Running 0 2d21h
                onap dev-msb-discovery-54b76c4898-mrl2j 1/2 ImagePullBackOff 0 2d21h
                onap dev-msb-eag-76d4b9b9d7-fqcff 0/2 Init:0/1 406 2d21h
                onap dev-msb-iag-65c59cb86b-5ql7w 0/2 Init:0/1 405 2d21h
                onap dev-sdnc-0 0/2 Init:1/3 405 2d21h
                onap dev-sdnc-db-0 0/1 Init:ImagePullBackOff 0 2d21h
                onap dev-sdnc-dmaap-listener-5c77848759-qfd8f 0/1 Init:1/2 405 2d21h
                onap dev-sdnc-sdnrdb-init-job-5xjfv 0/1 Init:Error 0 2d21h
                onap dev-sdnc-sdnrdb-init-job-drp5m 0/1 ImagePullBackOff 0 2d21h
                onap dev-sdnrdb-coordinating-only-9b9956fc-dglbh 2/2 Running 0 2d21h
                onap dev-sdnrdb-master-0 1/1 Running 0 2d21h
                onap dev-sdnrdb-master-1 1/1 Running 0 2d21h
                onap dev-sdnrdb-master-2 1/1 Running 0 2d21h
                ricaux cherry-infra-kong-7d6555954f-b56v2 2/2 Running 19 2d20h
                ricaux deployment-ricaux-ves-6c8db5db65-kfltw 0/1 ImagePullBackOff 0 2d20h

              5. Hi Team,

                While using the above cherry installation, I am facing some issues when checking with the below.

                Some pods are getting stuck in ImagePullBackOff. Our VM has internet access as well.

                "kubectl get pods --all-namespaces"

                kube-system coredns-5d4dd4b4db-fbj8d 1/1 Running 1 3d
                kube-system coredns-5d4dd4b4db-mwrld 1/1 Running 1 3d
                kube-system etcd-ubuntu 1/1 Running 3 3d
                kube-system kube-apiserver-ubuntu 1/1 Running 2 3d
                kube-system kube-controller-manager-ubuntu 1/1 Running 1 3d
                kube-system kube-flannel-ds-rpqjt 1/1 Running 1 3d
                kube-system kube-proxy-dnkbw 1/1 Running 1 3d
                kube-system kube-scheduler-ubuntu 1/1 Running 1 3d
                kube-system tiller-deploy-7c54c6988b-x4tkm 1/1 Running 1 3d
                nonrtric a1-sim-osc-0 0/1 ImagePullBackOff 0 2d20h
                nonrtric a1-sim-std-0 0/1 ErrImagePull 0 2d20h
                nonrtric a1controller-64c4f59fb5-dbt4q 0/1 ImagePullBackOff 0 2d20h
                nonrtric controlpanel-5ff4849bf-fp5bm 0/1 ImagePullBackOff 0 2d20h
                nonrtric db-549ff9b4d5-rqbjp 1/1 Running 0 2d20h
                nonrtric enrichmentservice-8578855744-hqxww 0/1 ImagePullBackOff 0 2d20h
                nonrtric policymanagementservice-77c59f6cfb-lwrzw 0/1 ImagePullBackOff 0 2d20h
                nonrtric rappcatalogueservice-5945c4d84b-2z875 0/1 ImagePullBackOff 0 2d20h
                onap dev-consul-68d576d55c-wdnz6 1/1 Running 0 2d21h
                onap dev-consul-server-0 1/1 Running 0 2d21h
                onap dev-consul-server-1 1/1 Running 0 2d19h
                onap dev-consul-server-2 1/1 Running 0 2d19h
                onap dev-kube2msb-9fc58c48-vrckm 0/1 Init:0/1 405 2d21h
                onap dev-mariadb-galera-0 0/1 Init:ImagePullBackOff 0 2d21h
                onap dev-message-router-0 0/1 Init:0/1 406 2d21h
                onap dev-message-router-kafka-0 0/1 Init:1/4 405 2d21h
                onap dev-message-router-kafka-1 0/1 Init:1/4 405 2d21h
                onap dev-message-router-kafka-2 0/1 Init:1/4 405 2d21h
                onap dev-message-router-zookeeper-0 0/1 ImagePullBackOff 0 2d21h
                onap dev-message-router-zookeeper-1 0/1 ImagePullBackOff 0 2d21h
                onap dev-message-router-zookeeper-2 0/1 ImagePullBackOff 0 2d21h
                onap dev-msb-consul-65b9697c8b-zdktr 1/1 Running 0 2d21h
                onap dev-msb-discovery-54b76c4898-mrl2j 1/2 ImagePullBackOff 0 2d21h
                onap dev-msb-eag-76d4b9b9d7-fqcff 0/2 Init:0/1 406 2d21h
                onap dev-msb-iag-65c59cb86b-5ql7w 0/2 Init:0/1 405 2d21h
                onap dev-sdnc-0 0/2 Init:1/3 405 2d21h
                onap dev-sdnc-db-0 0/1 Init:ImagePullBackOff 0 2d21h
                onap dev-sdnc-dmaap-listener-5c77848759-qfd8f 0/1 Init:1/2 405 2d21h
                onap dev-sdnc-sdnrdb-init-job-5xjfv 0/1 Init:Error 0 2d21h
                onap dev-sdnc-sdnrdb-init-job-drp5m 0/1 ImagePullBackOff 0 2d21h
                onap dev-sdnrdb-coordinating-only-9b9956fc-dglbh 2/2 Running 0 2d21h
                onap dev-sdnrdb-master-0 1/1 Running 0 2d21h
                onap dev-sdnrdb-master-1 1/1 Running 0 2d21h
                onap dev-sdnrdb-master-2 1/1 Running 0 2d21h
                ricaux cherry-infra-kong-7d6555954f-b56v2 2/2 Running 19 2d20h
                ricaux deployment-ricaux-ves-6c8db5db65-kfltw 0/1 ImagePullBackOff 0 2d20h

              6. Anonymous

                Hi Anjan,

                in step 5 a. Update “baseUrl” in “dep/nonrtric/RECIPE_EXAMPLE/example_recipe.yaml”

                What should I update in example_recipe.yaml?

                I am using the Cherry release, and example_recipe.yaml for nonrtric seems to be a little different.


              7. Hi Anjan Goswami,

                I am getting an error in step 4, i.e. ./install initlocalrepo. I ran this command after un-commenting sections in dep/tools/onap/install.

                The errors are a repetition of the following lines:


                0/1 A1 controller pods, 0/1 ControlPanel pods,

                0/1 DB pods, 0/1 PolicyManagementService pods,

                and 0/4 A1 sim pods running

                No resources found.

                No resources found.

                No resources found.

                No resources found.

                No resources found.


                Does anyone know how to fix this issue? Please help me out here.

                My Ubuntu version is 18.04.6, with 4 CPUs.

              8. Anonymous

                Hi 

                Could you kindly explain what changes to baseUrl you mean in example_recipe.yaml?

  21. Seemingly, the mismatch between the k8s, Docker, and Helm versions is causing many issues. Could anyone post the latest working combination? I'm using Ubuntu Server 18.04.05 and am not able to complete the steps successfully.

  22. Can the SMO be deployed on AWS EKS or on Google Kubernetes Engine? Has someone already tried this?

  23. Hi Experts,


    While running the install script, I'm getting the error below:

    error: resource(s) were provided, but no name, label selector, or --all flag specified
    error: resource(s) were provided, but no name, label selector, or --all flag specified
    ======> Clearing out all RICAUX deployment resources
    Error from server (NotFound): namespaces "ricaux" not found
    Error from server (NotFound): namespaces "ricinfra" not found
    ======> Clearing out all NONRTRIC deployment resources
    Error from server (NotFound): namespaces "nonrtric" not found
    ======> Preparing for redeployment


    1) Also, the tiller pod is running:

    kube-system tiller-deploy-7d7bc87bb-fvr6v 1/1 Running 1 23h


    2) 'helm search onap | wc -l' lists 35 charts after manually running the 'make all' command.


    Please suggest the next steps to solve the problem.


    Br

    Deepak




  24. Hello,


    I am having issues with the nonrtric pods. This is the error output from the install script:


    ===> Deploy NONRTRIIC

    ...

    Error: open /root/.helm/repository/local/index.yaml: no such file or directory

    + cp /tmp/nonrtric-common-2.0.0.tgz /root/.helm/repository/local/

    + helm repo index /root/.helm/repository/local/

    + helm repo remove local

    Error: no repo named "local" found
    + helm repo add local http://127.0.0.1:8879/charts
    "local" has been added to your repositories

    ...

    Downloading nonrtric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    Successfully packaged chart and saved it to: /tmp/nonrtricgateway-1.0.0.tgz
    No requirements found in /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/nonrtric/helm/nonrtric/charts.
    Error: unknown flag: --force-update
    Finished Packaging NONRTRIC components [controlpanel a1controller a1simulator policymanagementservice enrichmentservice rappcatalogueservice nonrtricgateway]

    ...

    Downloading nonrtric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    Successfully packaged chart and saved it to: /tmp/nonrtricgateway-1.0.0.tgz
    No requirements found in /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/nonrtric/helm/nonrtric/charts.
    Error: unknown flag: --force-update
    Finished Packaging NONRTRIC components [controlpanel a1controller a1simulator policymanagementservice enrichmentservice rappcatalogueservice nonrtricgateway]
    namespace/nonrtric created
    configmap/nonrtric-recipe created
    Deploying NONRTRIC
    helm install -f /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml --namespace nonrtric --name r2-dev-nonrtric /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/bin/../nonrtric/helm/nonrtric
    Error: release r2-dev-nonrtric failed: no objects visited
    ======> Waiting for NONRTRIC to reach operatoinal state


    When trying to run the last line (with example_recipe.yaml) manually, this is what I get:


    yev@yev:/media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC$ sudo helm install -f /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml --namespace nonrtric --name r2-dev-nonrtric /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/bin/../nonrtric/helm/nonrtric
    Error: a release named r2-dev-nonrtric already exists.
    Run: helm ls --all r2-dev-nonrtric; to check the status of the release
    Or run: helm del --purge r2-dev-nonrtric; to delete it
    yev@yev:/media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC$ sudo helm ls --all r2-dev-nonrtric
    NAME           	REVISION	UPDATED                 	STATUS	CHART         	APP VERSION	NAMESPACE
    r2-dev-nonrtric	1       	Wed Jun  9 15:43:42 2021	FAILED	nonrtric-2.0.0	           	nonrtric


    Purging and restarting again doesn't help.


    I made a folder called charts in /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/nonrtric/helm/nonrtric and copied Chart.yaml and values.yaml from nonrtric into it. Then I tried running the install script and got:


    ======> Deploying ONAP-lite

    fetching local/onap

    Error: chart "onap" matching version "" not found in local index. (try 'helm repo update'). no chart name found

    mv: cannot stat '/home/yev/.helm/plugins/deploy/cache/onap/charts/*': No such file or directory

    mv: cannot stat '/home/yev/.helm/plugins/deploy/cache/onap-subcharts/common': No such file or directory

    rm: cannot remove '/home/yev/.helm/plugins/deploy/cache/onap/requirements.lock': No such file or directory

    mv: cannot stat '/home/yev/.helm/plugins/deploy/cache/onap/requirements.yaml': No such file or directory

    Error: no Chart.yaml exists in directory "/home/yev/.helm/plugins/deploy/cache/onap"

    release "dev" deployed


    So I had to run sudo make all in /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-oom/kubernetes

    and I made sure that my copied charts do exist afterwards. Then I ran the install script again and I saw a different error:


    Successfully packaged chart and saved it to: /tmp/nonrtricgateway-1.0.0.tgz
    Error: error unpacking values.yaml in nonrtric: chart metadata (Chart.yaml) missing
    Error: unknown flag: --force-update
    Finished Packaging NONRTRIC components [controlpanel a1controller a1simulator policymanagementservice enrichmentservice rappcatalogueservice nonrtricgateway]
    namespace/nonrtric created
    configmap/nonrtric-recipe created
    Deploying NONRTRIC
    helm install -f /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml --namespace nonrtric --name r2-dev-nonrtric /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/bin/../nonrtric/helm/nonrtric
    Error: error unpacking Chart.yaml in nonrtric: chart metadata (Chart.yaml) missing
    



    Chart.yaml::

    apiVersion: v1
    description: NonRealTime RAN Intelligent Controller
    name: nonrtric
    version: 2.0.0
    
    dependencies:
      - name: a1controller
        version: ~2.0.0
        repository: "@local"
        condition: nonrtric.installA1controller
    

    # here are the other dependencies, which all follow the same pattern except:


      - name: nonrtric-common
        version: ^2.0.0   # this one uses ^ instead of ~
        repository: "@local"
        condition: true
    

    Could anyone please help?
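
    (For reference, a minimal recovery sketch using Helm 2 commands already quoted in this thread; the release and local-repo names are the ones shown in the logs above:)

      helm del --purge r2-dev-nonrtric                  # remove the FAILED release before retrying
      helm serve &                                      # the local chart repo must be serving on 127.0.0.1:8879
      helm repo add local http://127.0.0.1:8879/charts  # only if the 'local' repo is missing
      helm repo update
      helm search local/nonrtric                        # the packaged nonrtric charts should be listed here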
  25. Hi Experts,


    I am facing an issue: the ricaux bronze-infra-kong pod cannot reach the Running state.


    ricaux bronze-infra-kong-68657d8dfd-285kf 0/2 ImagePullBackOff 0 11s

    ricaux deployment-ricaux-ves-65db844758-zwp8f 1/1 Running 0 17m


    the logs says:

    Error from server (BadRequest): a container name must be specified for pod bronze-infra-kong-68657d8dfd-285kf, choose one of: [ingress-controller proxy]


    I have tried to check it out using


    kubectl logs -n ricaux bronze-infra-kong-68657d8dfd-285kf -c ingress-controller

    and it says

    Error from server (BadRequest): container "ingress-controller" in pod "bronze-infra-kong-68657d8dfd-285kf" is waiting to start: trying and failing to pull image
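
    (For reference, a generic way to surface the underlying pull failure is to look at the pod events; this is standard kubectl, with the pod name taken from the log above:)

      kubectl describe pod bronze-infra-kong-68657d8dfd-285kf -n ricaux | tail -n 20
      # the Events section shows the image being pulled and the registry error returned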


    Could anyone please help?


    Br,

    Junior

    1. I am getting the same error for the Cherry release as well. Can someone help with an update?


      Hi Team,

      While using the above Cherry installation, I am facing some issues when checking with the command below:

      "kubectl get pods --all-namespaces"

      kube-system coredns-5d4dd4b4db-fbj8d 1/1 Running 1 3d
      kube-system coredns-5d4dd4b4db-mwrld 1/1 Running 1 3d
      kube-system etcd-ubuntu 1/1 Running 3 3d
      kube-system kube-apiserver-ubuntu 1/1 Running 2 3d
      kube-system kube-controller-manager-ubuntu 1/1 Running 1 3d
      kube-system kube-flannel-ds-rpqjt 1/1 Running 1 3d
      kube-system kube-proxy-dnkbw 1/1 Running 1 3d
      kube-system kube-scheduler-ubuntu 1/1 Running 1 3d
      kube-system tiller-deploy-7c54c6988b-x4tkm 1/1 Running 1 3d
      nonrtric a1-sim-osc-0 0/1 ImagePullBackOff 0 2d20h
      nonrtric a1-sim-std-0 0/1 ErrImagePull 0 2d20h
      nonrtric a1controller-64c4f59fb5-dbt4q 0/1 ImagePullBackOff 0 2d20h
      nonrtric controlpanel-5ff4849bf-fp5bm 0/1 ImagePullBackOff 0 2d20h
      nonrtric db-549ff9b4d5-rqbjp 1/1 Running 0 2d20h
      nonrtric enrichmentservice-8578855744-hqxww 0/1 ImagePullBackOff 0 2d20h
      nonrtric policymanagementservice-77c59f6cfb-lwrzw 0/1 ImagePullBackOff 0 2d20h
      nonrtric rappcatalogueservice-5945c4d84b-2z875 0/1 ImagePullBackOff 0 2d20h
      onap dev-consul-68d576d55c-wdnz6 1/1 Running 0 2d21h
      onap dev-consul-server-0 1/1 Running 0 2d21h
      onap dev-consul-server-1 1/1 Running 0 2d19h
      onap dev-consul-server-2 1/1 Running 0 2d19h
      onap dev-kube2msb-9fc58c48-vrckm 0/1 Init:0/1 405 2d21h
      onap dev-mariadb-galera-0 0/1 Init:ImagePullBackOff 0 2d21h
      onap dev-message-router-0 0/1 Init:0/1 406 2d21h
      onap dev-message-router-kafka-0 0/1 Init:1/4 405 2d21h
      onap dev-message-router-kafka-1 0/1 Init:1/4 405 2d21h
      onap dev-message-router-kafka-2 0/1 Init:1/4 405 2d21h
      onap dev-message-router-zookeeper-0 0/1 ImagePullBackOff 0 2d21h
      onap dev-message-router-zookeeper-1 0/1 ImagePullBackOff 0 2d21h
      onap dev-message-router-zookeeper-2 0/1 ImagePullBackOff 0 2d21h
      onap dev-msb-consul-65b9697c8b-zdktr 1/1 Running 0 2d21h
      onap dev-msb-discovery-54b76c4898-mrl2j 1/2 ImagePullBackOff 0 2d21h
      onap dev-msb-eag-76d4b9b9d7-fqcff 0/2 Init:0/1 406 2d21h
      onap dev-msb-iag-65c59cb86b-5ql7w 0/2 Init:0/1 405 2d21h
      onap dev-sdnc-0 0/2 Init:1/3 405 2d21h
      onap dev-sdnc-db-0 0/1 Init:ImagePullBackOff 0 2d21h
      onap dev-sdnc-dmaap-listener-5c77848759-qfd8f 0/1 Init:1/2 405 2d21h
      onap dev-sdnc-sdnrdb-init-job-5xjfv 0/1 Init:Error 0 2d21h
      onap dev-sdnc-sdnrdb-init-job-drp5m 0/1 ImagePullBackOff 0 2d21h
      onap dev-sdnrdb-coordinating-only-9b9956fc-dglbh 2/2 Running 0 2d21h
      onap dev-sdnrdb-master-0 1/1 Running 0 2d21h
      onap dev-sdnrdb-master-1 1/1 Running 0 2d21h
      onap dev-sdnrdb-master-2 1/1 Running 0 2d21h
      ricaux cherry-infra-kong-7d6555954f-b56v2 2/2 Running 19 2d20h
      ricaux deployment-ricaux-ves-6c8db5db65-kfltw 0/1 ImagePullBackOff 0 2d20h

    2. mhz

      I am facing the same issue. Did you manage to resolve it?

  26. Anonymous

    Hi Team,

    One of the deployments is not successful, and some pods are not coming up. Kindly give us feedback.

    NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
    cherry-infra 1 Fri Jun 18 23:01:28 2021 DEPLOYED infrastructure-3.0.0 1.0 ricaux
    cherry-ves 1 Fri Jun 18 23:03:00 2021 DEPLOYED ves-1.1.1 1.0 ricaux
    dev 1 Fri Jun 18 22:02:05 2021 DEPLOYED onap-6.0.0 El Alto onap
    dev-consul 1 Fri Jun 18 22:02:06 2021 DEPLOYED consul-6.0.0 onap
    dev-dmaap 1 Fri Jun 18 22:02:07 2021 DEPLOYED dmaap-6.0.0 onap
    dev-mariadb-galera 1 Fri Jun 18 22:02:09 2021 DEPLOYED mariadb-galera-6.0.0 onap
    dev-msb 1 Fri Jun 18 22:02:11 2021 DEPLOYED msb-6.0.0 onap
    dev-sdnc 1 Fri Jun 18 22:02:13 2021 FAILED sdnc-6.0.0 onap
    r2-dev-nonrtric 1 Fri Jun 18 22:54:37 2021 DEPLOYED nonrtric-2.0.0 nonrtric
    root@ubuntu:~# kubectl get pod -A
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-5d4dd4b4db-fbj8d 1/1 Running 2 12d
    kube-system coredns-5d4dd4b4db-mwrld 1/1 Running 2 12d
    kube-system etcd-ubuntu 1/1 Running 4 12d
    kube-system kube-apiserver-ubuntu 1/1 Running 3 12d
    kube-system kube-controller-manager-ubuntu 1/1 Running 2 12d
    kube-system kube-flannel-ds-rpqjt 1/1 Running 2 12d
    kube-system kube-proxy-dnkbw 1/1 Running 2 12d
    kube-system kube-scheduler-ubuntu 1/1 Running 2 12d
    kube-system tiller-deploy-7c54c6988b-x4tkm 1/1 Running 2 12d
    nonrtric a1-sim-osc-0 1/1 Running 0 11d
    nonrtric a1-sim-osc-1 1/1 Running 0 8d
    nonrtric a1-sim-std-0 1/1 Running 0 11d
    nonrtric a1-sim-std-1 1/1 Running 0 8d
    nonrtric a1controller-64c4f59fb5-dbt4q 1/1 Running 0 11d
    nonrtric controlpanel-5ff4849bf-fp5bm 1/1 Running 0 11d
    nonrtric db-549ff9b4d5-rqbjp 1/1 Running 1 11d
    nonrtric enrichmentservice-8578855744-hqxww 1/1 Running 0 11d
    nonrtric policymanagementservice-77c59f6cfb-lwrzw 1/1 Running 0 11d
    nonrtric rappcatalogueservice-5945c4d84b-2z875 1/1 Running 0 11d
    onap dev-consul-68d576d55c-wdnz6 1/1 Running 1 11d
    onap dev-consul-server-0 1/1 Running 1 11d
    onap dev-consul-server-1 1/1 Running 1 11d
    onap dev-consul-server-2 1/1 Running 1 11d
    onap dev-kube2msb-9b5dd678d-7hcg8 1/1 Running 0 5d
    onap dev-mariadb-galera-0 0/1 Init:CrashLoopBackOff 1391 4d22h
    onap dev-message-router-0 0/1 Init:0/1 119 20h
    onap dev-message-router-kafka-0 1/1 Running 0 4d21h
    onap dev-message-router-kafka-1 1/1 Running 0 4d21h
    onap dev-message-router-kafka-2 1/1 Running 0 4d21h
    onap dev-message-router-zookeeper-0 1/1 Running 0 4d22h
    onap dev-message-router-zookeeper-1 1/1 Running 0 4d22h
    onap dev-message-router-zookeeper-2 1/1 Running 0 4d22h
    onap dev-msb-consul-65b9697c8b-zdktr 1/1 Running 1 11d
    onap dev-msb-discovery-5db774ff9b-mkqnw 2/2 Running 0 5d
    onap dev-msb-eag-5b94b8d7d9-tptkf 2/2 Running 0 5d
    onap dev-msb-iag-79fd98f784-9rhcx 2/2 Running 0 4d23h
    onap dev-sdnc-0 0/2 Init:1/3 696 4d21h
    onap dev-sdnc-db-0 0/1 Init:CrashLoopBackOff 1380 4d21h
    onap dev-sdnc-dmaap-listener-57977f7b9-n6mvr 0/1 Init:1/2 696 4d21h
    onap dev-sdnc-dmaap-listener-5c77848759-8fbmb 0/1 Init:1/2 694 4d21h
    onap dev-sdnc-sdnrdb-init-job-5xjfv 0/1 Init:Error 0 11d
    onap dev-sdnc-sdnrdb-init-job-7z86s 0/1 ImagePullBackOff 0 4d21h
    onap dev-sdnrdb-coordinating-only-9b9956fc-dglbh 2/2 Running 2 11d
    onap dev-sdnrdb-master-0 1/1 Running 1 11d
    onap dev-sdnrdb-master-1 1/1 Running 1 11d
    onap dev-sdnrdb-master-2 1/1 Running 1 11d
    ricaux cherry-infra-kong-7d6555954f-b56v2 2/2 Running 23 11d
    ricaux deployment-ricaux-ves-6c8db5db65-kfltw 1/1 Running 0 11d
     

  27. Anonymous

    Hi Team,

    The a1-sim-std2-0 pod in the nonrtric namespace is not coming up.


    The logs present this message:

    Version folder for simulator: STD_2.0.0
    APIPATH set to: /usr/src/app/api/STD_2.0.0
    PYTHONPATH set to: /usr/src/app/src/common
    src/start.sh: line 38: cd: STD_2.0.0: No such file or directory
    Path to callBack.py: /usr/src/app/src
    Path to main.py: /usr/src/app/src
    python: can't open file 'main.py': [Errno 2] No such file or directory
    python: can't open file 'callBack.py': [Errno 2] No such file or directory
    root@ip-172-31-36-165:~# kubectl get nos
    error: the server doesn't have a resource type "nos"


    This pod is using the image "nexus3.o-ran-sc.org:10002/o-ran-sc/a1-simulator:2.0.0". Below is the pod describe output:

    Name:           a1-sim-std2-0
    Namespace:      nonrtric
    Priority:       0
    Node:           ip-172-31-36-165/172.31.36.165
    Start Time:     Tue, 29 Jun 2021 15:21:19 +0000
    Labels:         app=nonrtric-a1simulator
                    controller-revision-hash=a1-sim-std2-6bd765b6fc
                    release=r2-dev-nonrtric
                    statefulset.kubernetes.io/pod-name=a1-sim-std2-0
    Annotations:    <none>
    Status:         Running
    IP:             10.244.0.41
    Controlled By:  StatefulSet/a1-sim-std2
    Containers:
      container-nonrtric-a1simulator:
        Container ID:   docker://a92c27cfc80e1b1ad07679d28c2ad1f7955613247bded80653f45ba780f37eed
        Image:          nexus3.o-ran-sc.org:10002/o-ran-sc/a1-simulator:2.0.0
        Image ID:       docker-pullable://nexus3.o-ran-sc.org:10002/o-ran-sc/a1-simulator@sha256:23efade4c7811a44d6e9762545ad9498680efca95d3c8e2321cb694c4fb719a9
        Ports:          8085/TCP, 8185/TCP
        Host Ports:     0/TCP, 0/TCP
        State:          Waiting
          Reason:       CrashLoopBackOff
        Last State:     Terminated
          Reason:       Error
          Exit Code:    2
          Started:      Wed, 30 Jun 2021 08:48:45 +0000
          Finished:     Wed, 30 Jun 2021 08:48:45 +0000
        Ready:          False
        Restart Count:  200
        Liveness:       tcp-socket :8085 delay=20s timeout=1s period=10s #success=1 #failure=3
        Readiness:      tcp-socket :8085 delay=20s timeout=1s period=10s #success=1 #failure=3
        Environment:
          A1_VERSION:  STD_2.0.0
          ALLOW_HTTP:  true
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-t96nt (ro)
    Conditions:
      Type              Status
      Initialized       True 
      Ready             False 
      ContainersReady   False 
      PodScheduled      True 
    Volumes:
      default-token-t96nt:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-t96nt
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                     node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type     Reason   Age                     From                       Message
      ----     ------   ----                    ----                       -------
      Warning  BackOff  2m26s (x4745 over 16h)  kubelet, ip-172-31-36-165  Back-off restarting failed container


    Please could anyone help here?


    Regards.


  28. Hello Experts,

      After running "./install initlocalrepo" there are 9 "kube-system" pods running, 26 "onap" pods running, 1 "onap" job completed, and 3 "onap" jobs in "Init:Error". Not sure if the error jobs matter, since there is 1 completed job?

      In any case, there are no other pods. The console output shows the following:

    ...

    Packaging NONRTRIC component [nonrtricgateway]
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    ...Successfully got an update from the "stable" chart repository
    Update Complete.
    Saving 1 charts
    Downloading nonrtric-common from repo http://127.0.0.1:8879/charts
    Deleting outdated charts
    Successfully packaged chart and saved it to: /tmp/nonrtricgateway-1.0.0.tgz
    No requirements found in /root/dep/smo/bin/smo-deploy/smo-dep/nonrtric/helm/nonrtric/charts.
    Error: unknown flag: --force-update
    Finished Packaging NONRTRIC components [controlpanel a1controller a1simulator policymanagementservice enrichmentservice rappcatalogueservice nonrtricgateway]
    Chart name-
    namespace/nonrtric created
    Install Kong- false
    configmap/nonrtric-recipe created
    Deploying NONRTRIC
    helm install -f /root/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml --namespace nonrtric --name r3-dev-nonrtric /root/dep/smo/bin/smo-deploy/smo-dep/bin/../nonrtric/helm/nonrtric
    Error: render error in "nonrtric/templates/pv2.yaml": template: nonrtric/templates/pv2.yaml:23:16: executing "nonrtric/templates/pv2.yaml" at <include "common.namespace.nonrtric" .>: error calling include: template: no template "common.namespace.nonrtric" associated with template "gotpl"
    ======> Waiting for NONRTRIC to reach operatoinal state
    No resources found.
    No resources found.
    No resources found.
    No resources found.
    No resources found.
    0/1 A1Controller pods, 0/1 ControlPanel pods,
    0/1 DB pods, 0/1 PolicyManagementService pods,
    and 0/4 A1Sim pods running
    No resources found.
    No resources found.
    No resources found.
    No resources found.
    No resources found.

    ...

    This waiting just loops forever.  Please advise.  Thank you

    1. This error is due to the missing requirements.yaml in the directory /root/dep/smo/bin/smo-deploy/smo-dep/bin/../nonrtric/helm/nonrtric:

      requirements.yaml
      dependencies:
        - name: a1controller
          version: ~2.0.0
          repository: "@local"
          condition: nonrtric.installA1controller
      
        - name: a1simulator
          version: ~2.0.0
          repository: "@local"
          condition: nonrtric.installA1simulator
      
        - name: controlpanel
          version: ~2.0.0
          repository: "@local"
          condition: nonrtric.installControlpanel
      
        - name: policymanagementservice
          version: ~2.0.0
          repository: "@local"
          condition: nonrtric.installPms
      
        - name: enrichmentservice
          version: ~1.0.0
          repository: "@local"
          condition: nonrtric.installEnrichmentservice
      
        - name: nonrtric-common
          version: ^2.0.0
          repository: "@local"
          condition: true
      
        - name: rappcatalogueservice
          version: ~1.0.0
          repository: "@local"
          condition: nonrtric.installRappcatalogueservice
      
        - name: nonrtricgateway
          version: ~1.0.0
          repository: "@local"
          condition: nonrtric.installNonrtricgateway
      1. Concerning the Bronze install, I used the contents of this requirements.yaml above, but encountered the same error.  What changes need to be made to the file /root/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml?  Thanks

        ...

        2290 Deploying NONRTRIC
        2291 helm install -f /root/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml --namespace nonrtric --name r3-dev-nonrtric /root/dep/smo/bin/smo-deploy/smo-dep /bin/../nonrtric/helm/nonrtric
        2292 Error: render error in "nonrtric/templates/pv2.yaml": template: nonrtric/templates/pv2.yaml:23:16: executing "nonrtric/templates/pv2.yaml" at <include "common.namespace.nonrtric " .>: error calling include: template: no template "common.namespace.nonrtric" associated with template "gotpl"
        2293 ======> Waiting for NONRTRIC to reach operatoinal state

        ...

        1. Confirm you have the lines below in your dep/smo/bin/smo-deploy/smo-dep/nonrtric/RECIPE_EXAMPLE/example_recipe.yaml:


          common:
            releasePrefix: r3-dev-nonrtric
          # Do not change the namespace
            namespace:
              nonrtric: nonrtric
          1. The above content is confirmed in the example_recipe.yaml file.

            Any other suggestions on how to resolve the error mentioned above?  Thanks

            Error: render error in "nonrtric/templates/pv2.yaml": template: nonrtric/templates/pv2.yaml:23:16: executing "nonrtric/templates/pv2.yaml" at <include "common.namespace.nonrtric " .>: error calling include: template: no template "common.namespace.nonrtric" associated with template "gotpl"

            1. Are you executing the script from inside the directory dep/smo/bin/smo-deploy/smo-dep/bin and running this command?

              ./deploy-nonrtric -f ../RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml


              1. I ran the deploy-nonrtric command above as you suggested.  Here is the last part of the output, where it errors + a new "already exists" error.  Please advise?  Thanks

                ...
                Successfully packaged chart and saved it to: /tmp/nonrtricgateway-1.0.0.tgz
                No requirements found in /root/dep/smo/bin/smo-deploy/smo-dep/nonrtric/helm/nonrtric/charts.
                Error: unknown flag: --force-update
                Finished Packaging NONRTRIC components [controlpanel a1controller a1simulator policymanagementservice enrichmentservice rappcatalogueservice nonrtricgateway]
                Chart name-
                Install Kong- false
                Error from server (AlreadyExists): configmaps "nonrtric-recipe" already exists
                Deploying NONRTRIC
                helm install -f ../RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml --namespace nonrtric --name r3-dev-nonrtric /root/dep/smo/bin/smo-deploy/smo-dep/bin/../nonrtric/helm/nonrtric
                Error: render error in "nonrtric/templates/pv2.yaml": template: nonrtric/templates/pv2.yaml:23:16: executing "nonrtric/templates/pv2.yaml" at <include "common.namespace.nonrtric" .>: error calling include: template: no template "common.namespace.nonrtric" associated with template "gotpl"

                1. The error states there is no requirements file. Did you check whether there is a requirements.yaml in the directory /root/dep/smo/bin/smo-deploy/smo-dep/nonrtric/helm/nonrtric?

                  If not, create the file with the following content:


                  requirements.yaml
                  dependencies:
                    - name: nonrtric-common
                      version: ^2.0.0
                      repository: "@local"
                   
                    - name: a1controller
                      version: ~2.0.0
                      repository: "@local"
                      condition: nonrtric.installA1controller
                  
                    - name: a1simulator
                      version: ~2.0.0
                      repository: "@local"
                      condition: nonrtric.installA1simulator
                  
                    - name: controlpanel
                      version: ~2.0.0
                      repository: "@local"
                      condition: nonrtric.installControlpanel
                  
                    - name: policymanagementservice
                      version: ~2.0.0
                      repository: "@local"
                      condition: nonrtric.installPms
                  
                    - name: enrichmentservice
                      version: ~1.0.0
                      repository: "@local"
                      condition: nonrtric.installEnrichmentservice
                  
                    - name: rappcatalogueservice
                      version: ~1.0.0
                      repository: "@local"
                      condition: nonrtric.installRappcatalogueservice
                  
                    - name: nonrtricgateway
                      version: ~1.0.0
                      repository: "@local"
                      condition: nonrtric.installNonrtricgateway
                  1. In the Bronze dep/smo/bin/install script, it appears that the following git repo is missing the requirements.yaml file in the nonrtric directory you mentioned above.

                    git clone http://gerrit.o-ran-sc.org/r/it/dep smo-dep

                    1. It is indeed missing it! I needed to create the file to make it work for me.

  29. Anonymous

    There is actually a requirements.yaml file.  So what do the differences really mean? Here are the contents:

    dependencies:
      - name: a1controller
        version: ~2.0.0
        repository: "@local"
      - name: a1simulator
        version: ~2.0.0
        repository: "@local"
      - name: controlpanel
        version: ~2.0.0
        repository: "@local"
      - name: policymanagementservice
        version: ~2.0.0
        repository: "@local"
      - name: nonrtric-common
        version: ^2.0.0
        repository: "@local"

    1. Anonymous

      So in this case check if you're using the recipe.yaml file

      1. Anonymous

        So in this case check if you're using the correct recipe.yaml file

  30. I'm facing the same error as Junior Salem and Deepak Dutt with pod "cherry-infra-kong" when trying to install the Cherry release:

    Error from server (BadRequest): container "ingress-controller" in pod "cherry-infra-kong-7d6555954f-thh7j" is waiting to start: trying and failing to pull image

    Has someone already solved this issue?
    1. I finally solved this error by replacing the Docker image of the Ingress Controller in ~/dep/ric-aux/helm/infrastructure/subcharts/kong/values.yml,

      since the Kong Ingress Controller Docker image moved from Bintray to Docker Hub:


      # -----------------------------------------------------------------------------
      # Ingress Controller parameters
      # -----------------------------------------------------------------------------

      # Kong Ingress Controller's primary purpose is to satisfy Ingress resources
      # created in k8s. It uses CRDs for more fine grained control over routing and
      # for Kong specific configuration.
      ingressController:
        enabled: true
        image:
          repository: kong/kubernetes-ingress-controller
          tag: 0.7.0
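
      (An optional sanity check that the replacement image is reachable from the node, assuming the tag above:)

        docker pull kong/kubernetes-ingress-controller:0.7.0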
  31. Anonymous

    Hello everyone, I have a problem:

    ++ eval 'kubectl get pods -n kube-system | grep "Running" | wc -l'
    +++ kubectl get pods -n kube-system
    +++ grep Running
    +++ wc -l
    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    + NUMPODS=0
    + echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
    > waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
    + '[' 0 -lt 8 ']'
    + sleep 5

    This happened after I applied the following steps from an earlier comment:

    After Step 1 and Before Step #2 Change the following files

    dep/tools/k8s/etc/infra.rc


    # RIC tested
    INFRA_DOCKER_VERSION="19.03.6"
    INFRA_HELM_VERSION="2.17.0"
    INFRA_K8S_VERSION="1.15.9"
    INFRA_CNI_VERSION="0.7.5"


    dep/tools/k8s/heat/scripts/k8s_vm_install.sh

    .

    .

    .

    .

    .

    .

    .

    elif [[ ${UBUNTU_RELEASE} == 18.* ]]; then
      echo "Installing on Ubuntu $UBUNTU_RELEASE (Bionic Beaver)"
      if [ ! -z "${DOCKERV}" ]; then
        DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3"
      fi

    .

    .



    dep/smo/bin/install

    .

    .

    if [ "$1" == "initlocalrepo" ]; then
      echo && echo "===> Initialize local Helm repo"
      rm -rf ~/.helm #&& helm init -c  # without this step helm serve may not work.
      helm init --stable-repo-url=https://charts.helm.sh/stable --client-only
      helm serve &
      helm repo add local http://127.0.0.1:8879
    fi

    .

    .


    If anyone has a solution, it would be very helpful for me.

    Thanks

    1. mhz

      I am facing the same problem. Did you find a solution?

      1. Anonymous

    2. It's possible that either Docker is not properly installed from package 19.03.6-0ubuntu1~18.04.3, or the Docker services are running with errors.


  32. Hi, 
    I'm wondering: should SMO and the NR-RIC be on separate VMs, or does it not matter (or is the AUX VM dedicated for that)?

    Apologies for such a basic question.

    1. Kamil.

      Yes, I used a separate machine.

      When I used the same machine, I faced some troubles.

      I'm a beginner too. Good luck!

  33. mhz

    Hi all,

    I am deploying SMO for the first time. The process was going well for quite a while, but it suddenly got stuck at the step below, showing no activity and no error.



    [onap]
    make[1]: Entering directory '/root/dep/smo/bin/smo-deploy/smo-oom/kubernetes'
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    ...Successfully got an update from the "stable" chart repository
    Update Complete.
    Saving 34 charts
    Downloading aaf from repo http://127.0.0.1:8879
    Downloading aai from repo http://127.0.0.1:8879
    Downloading appc from repo http://127.0.0.1:8879
    Downloading cassandra from repo http://127.0.0.1:8879
    Downloading cds from repo http://127.0.0.1:8879
    Downloading clamp from repo http://127.0.0.1:8879
    Downloading cli from repo http://127.0.0.1:8879
    Downloading common from repo http://127.0.0.1:8879
    Downloading consul from repo http://127.0.0.1:8879
    Downloading contrib from repo http://127.0.0.1:8879
    Downloading dcaegen2 from repo http://127.0.0.1:8879
    Downloading dcaemod from repo http://127.0.0.1:8879
    Downloading dmaap from repo http://127.0.0.1:8879
    Downloading esr from repo http://127.0.0.1:8879
    Downloading log from repo http://127.0.0.1:8879
    Downloading sniro-emulator from repo http://127.0.0.1:8879
    Downloading mariadb-galera from repo http://127.0.0.1:8879
    Downloading msb from repo http://127.0.0.1:8879
    Downloading multicloud from repo http://127.0.0.1:8879
    Downloading nbi from repo http://127.0.0.1:8879
    Downloading pnda from repo http://127.0.0.1:8879
    Downloading policy from repo http://127.0.0.1:8879
    Downloading pomba from repo http://127.0.0.1:8879
    Downloading portal from repo http://127.0.0.1:8879
    Downloading oof from repo http://127.0.0.1:8879
    Downloading robot from repo http://127.0.0.1:8879
    Downloading sdc from repo http://127.0.0.1:8879
    Downloading sdnc from repo http://127.0.0.1:8879
    Downloading so from repo http://127.0.0.1:8879
    Downloading uui from repo http://127.0.0.1:8879
    Downloading vfc from repo http://127.0.0.1:8879
    Downloading vid from repo http://127.0.0.1:8879
    Downloading vnfsdk from repo http://127.0.0.1:8879
    Downloading modeling from repo http://127.0.0.1:8879
    Deleting outdated charts


    And nothing is happening here. What could be the problem?

    Thanks



    1. You just need to add an extra parameter (SKIP_LINT=TRUE) to the make command in the 'install' shell script:

      make -e SKIP_LINT=TRUE

      It will skip the helm linting stage and allow script execution to continue.
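
      A minimal sketch of where that change goes (path taken from the logs in this thread; the exact make invocation may differ by release):

        # in dep/smo/bin/install, ONAP chart packaging step
        cd smo-deploy/smo-oom/kubernetes
        make -e SKIP_LINT=TRUE all    # skips 'helm lint' so chart packaging can proceed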

  34. Hi,

    Does anyone know the CNI configuration required for this exercise? My kubelet services are probably not starting properly. Any help is much appreciated, thanks.

    Step 3: Installation of Kubernetes, Helm, Docker, etc.

    Run ...

    $ ./k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh

    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [kubelet-check] Initial timeout of 40s passed.

    Unfortunately, an error has occurred:
    timed out waiting for the condition

    This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
    Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
    error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
    To see the stack trace of this error execute with --v=5 or higher


    $ systemctl status kubelet

    kubelet.service - kubelet: The Kubernetes Node Agent
    Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
    └─10-kubeadm.conf
    Active: active (running) since Tue 2021-08-24 01:10:22 EDT; 2h 29min ago
    Docs: https://kubernetes.io/docs/home/
    Main PID: 10034 (kubelet)
    Tasks: 21 (limit: 1112)
    CGroup: /system.slice/kubelet.service
    └─10034 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup

    Aug 24 03:39:36 aaronchu-ubuntu1804-libvirt2 kubelet[10034]: E0824 03:39:36.824206 10034 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNo
    Aug 24 03:39:40 aaronchu-ubuntu1804-libvirt2 kubelet[10034]: W0824 03:39:40.667054 10034 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
    Aug 24 03:39:41 aaronchu-ubuntu1804-libvirt2 kubelet[10034]: E0824 03:39:41.838700 10034 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNo
    Aug 24 03:39:42 aaronchu-ubuntu1804-libvirt2 kubelet[10034]: E0824 03:39:42.096513 10034 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied na
    Aug 24 03:39:45 aaronchu-ubuntu1804-libvirt2 kubelet[10034]: W0824 03:39:45.667522 10034 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
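
    (For reference, the install script applies the flannel manifest shown below; once the API server answers, re-applying it and checking the CNI config directory is a common way to clear the "cni config uninitialized" state. This is a hedged sketch, not a guaranteed fix for the kubeadm timeout above.)

      kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
      ls /etc/cni/net.d        # a flannel config should appear once the DaemonSet is running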

  35. mhz

    Hi all,

    One of the pods is not running.

    --------------

    ricaux bronze-infra-kong-68657d8dfd-4n54w 1/2 ImagePullBackOff 3 46h

    ricaux deployment-ricaux-ves-65db844758-p2rhp 1/1 Running 2 46h

    ------------

    I tried the kubectl describe pod and it shows the following:

    --------------

    Events:
    Type                 Reason                      Age                                     From Message
    ---- ------ ---- ---- -------
    Normal             BackOff                   19m (x257 over 78m)    kubelet, mhz-virtualbox Back-off pulling image "kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.7.0"
    Warning            Unhealthy              14m                                     kubelet, mhz-virtualbox Liveness probe failed: Get http://10.244.0.114:9542/status: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    Warning            Failed                      8m47s (x290 over 78m) kubelet, mhz-virtualbox Error: ImagePullBackOff
    Normal              SandboxChanged 80s                                      kubelet, mhz-virtualbox Pod sandbox changed, it will be killed and re-created.
    Normal               Created                 72s                                       kubelet, mhz-virtualbox Created container proxy
    Normal                Pulled                   72s                                       kubelet, mhz-virtualbox Container image "kong:1.4" already present on machine
    Normal               Started                   68s                                      kubelet, mhz-virtualbox Started container proxy
    Warning              Unhealthy             58s                                      kubelet, mhz-virtualbox Readiness probe failed: Get http://10.244.0.137:9542/status: dial tcp 10.244.0.137:9542: connect: connection refused
    Warning              Unhealthy             54s                                      kubelet, mhz-virtualbox Liveness probe failed: Get http://10.244.0.137:9542/status: dial tcp 10.244.0.137:9542: connect: connection refused
    Warning              Unhealthy             43s                                      kubelet, mhz-virtualbox Readiness probe failed: Get http://10.244.0.137:9542/status: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
    Warning              Failed                     18s (x3 over 72s)             kubelet, mhz-virtualbox Error: ErrImagePull
    Warning              Failed                     18s (x3 over 72s)             kubelet, mhz-virtualbox Failed to pull image "kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.7.0": rpc error: code = Unknown desc = Error response from daemon: error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>403 Forbidden</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>403 Forbidden</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"


    How do you think I can troubleshoot the ImagePullBackOff error?

    Thank you

  36. Hi,

    To install SMO cherry release we make use of the below command

    git clone http://gerrit.o-ran-sc.org/r/it/dep -b cherry

    Similarly, to install SMO dawn release I tried the below command
    git clone http://gerrit.o-ran-sc.org/r/it/dep -b dawn

    This gave an error as follows,
    Cloning into 'dep'...
    fatal: Remote branch dawn not found in upstream origin

    When I checked in their gerrit page (https://gerrit.o-ran-sc.org/r/admin/repos/it/dep,branches),
    dawn release was not available

    How can the Dawn release of SMO be installed?

    Is there any other repository for the installation?


    Thank you

  37. I am a newcomer. After running

    ./k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh

    I have noticed that the script does not seem to try to install Docker. By the time it needs Docker, the system throws an error. Could somebody tell me if I need to manually install Docker beforehand?


    Thanks,

    Harry  

    1. You don't need to manually pre-install Docker. As a temporary workaround, you can leave the INFRA_DOCKER_VERSION parameter in infra.rc blank. This will allow you to pass through this step successfully.
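
      A minimal sketch of that workaround in dep/tools/k8s/etc/infra.rc (the other values are the ones reported elsewhere in this thread; verify against your release):

        INFRA_DOCKER_VERSION=""        # blank: let the script install the distro's default Docker
        INFRA_K8S_VERSION="1.15.9"
        INFRA_CNI_VERSION="0.7.5"
        INFRA_HELM_VERSION="2.17.0"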

  38. Hi Aleksey,

    Yes, you are right. It seems the Docker version setting was why Docker was not installed. After leaving it blank, Docker is installed, but Kubernetes does not seem to create a networking container such as flannel, so the coredns containers failed to come up and Kubernetes only launched 7 containers instead of 8.

    Regards,

    Harry


  39. Hi Aleksey,

    Flannel failed to launch because my local firewall blocked the connection. Thank you for your help.


    Regards,

    Harry


  40. Hi Everybody,

    I am installing Bronze. It is stuck at:

    0/1 A1Controller pods, 1/1 ControlPanel pods,
    1/1 DB pods, 1/1 PolicyManagementService pods,
    and 0/4 A1Sim pods running

    Could anybody please help?


    Thanks,



    1. Anonymous

      did you solve the problem, bro?

      e.churikov

      1. Anonymous

        Not yet. I have been busy with other things. How about you? Have you installed it?


        1. Anonymous

          I'm trying to find a solution... So far, everything is sad hehe.

          Stable:
          "0/1 A1Controller pods, 1/1 ControlPanel pods,
          1/1 DB pods, 1/1 PolicyManagementService pods,
          and 0/4 A1Sim pods running"

          =(

          e.churikov

          1. Please provide output of:

            kubectl describe pod [a1controllerpod]  

            Same for A1 sim pods, thanks

            1. $ kubectl describe pod a1controllerpod
              Error from server (NotFound): pods "a1controllerpod" not found

              $ kubectl get pods --all-namespaces
              NAMESPACE NAME READY STATUS RESTARTS AGE
              kube-system coredns-5d4dd4b4db-tmq74 1/1 Running 1 94m
              kube-system coredns-5d4dd4b4db-w7lmg 1/1 Running 1 94m
              kube-system etcd-edikpc 1/1 Running 1 93m
              kube-system kube-apiserver-edikpc 1/1 Running 1 93m
              kube-system kube-controller-manager-edikpc 1/1 Running 1 93m
              kube-system kube-flannel-ds-bjqpw 1/1 Running 1 94m
              kube-system kube-proxy-fzkxl 1/1 Running 1 94m
              kube-system kube-scheduler-edikpc 1/1 Running 1 93m
              kube-system tiller-deploy-7c54c6988b-fl9fj 1/1 Running 1 92m
              nonrtric controlpanel-7dc756b959-mcpc8 1/1 Running 0 4m39s
              nonrtric enrichmentservice-0 1/1 Running 0 4m39s
              nonrtric nonrtricgateway-7bcf7fdbf9-zsplx 1/1 Running 0 4m39s
              nonrtric policymanagementservice-0 1/1 Running 0 4m39s
              onap dev-consul-68d576d55c-kxvsw 1/1 Running 0 12m
              onap dev-consul-server-0 1/1 Running 0 12m
              onap dev-consul-server-1 1/1 Running 0 11m
              onap dev-consul-server-2 1/1 Running 0 11m
              onap dev-kube2msb-9fc58c48-zsfr6 1/1 Running 0 12m
              onap dev-mariadb-galera-0 1/1 Running 0 12m
              onap dev-mariadb-galera-1 1/1 Running 0 11m
              onap dev-mariadb-galera-2 1/1 Running 0 10m
              onap dev-message-router-0 1/1 Running 0 12m
              onap dev-message-router-kafka-0 1/1 Running 1 12m
              onap dev-message-router-kafka-1 1/1 Running 1 12m
              onap dev-message-router-kafka-2 1/1 Running 1 12m
              onap dev-message-router-zookeeper-0 1/1 Running 0 12m
              onap dev-message-router-zookeeper-1 1/1 Running 0 12m
              onap dev-message-router-zookeeper-2 1/1 Running 0 12m
              onap dev-msb-consul-65b9697c8b-s4pkj 1/1 Running 0 12m
              onap dev-msb-discovery-54b76c4898-ps7qh 2/2 Running 0 12m
              onap dev-msb-eag-76d4b9b9d7-g7c69 2/2 Running 0 12m
              onap dev-msb-iag-65c59cb86b-x2phm 2/2 Running 0 12m
              onap dev-sdnc-0 2/2 Running 0 12m
              onap dev-sdnc-db-0 1/1 Running 0 12m
              onap dev-sdnc-dmaap-listener-5c77848759-5jqdf 1/1 Running 0 12m
              onap dev-sdnc-sdnrdb-init-job-6rctk 0/1 Completed 0 12m
              onap dev-sdnrdb-coordinating-only-9b9956fc-s7wsp 2/2 Running 0 12m
              onap dev-sdnrdb-master-0 1/1 Running 0 12m
              onap dev-sdnrdb-master-1 1/1 Running 0 11m
              onap dev-sdnrdb-master-2 1/1 Running 0 10m


              sorry if I misunderstood

        2. I was able to solve the problem!

          You need to set A1controller and A1simulator to true in the example_recipe.

          Thanks Alexey Suev for the tip!
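
          A hedged sketch of that recipe change (the flag names below match the chart conditions quoted earlier in this thread; verify against your example_recipe.yaml):

            nonrtric:
              installA1controller: true
              installA1simulator: true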

  41. Dear all,
    I'm looking for a solution to change the default routes for the Control Panel.

    In my case, the Control Panel tries to use the remote (VPN) address instead of the VM/container address.

    "Http failure response for http://10.254.185.67:30091/a1-policy/v2/policy-types: 502 Bad Gateway"

    → I guess it should be using one of these IPs:
    root@smo:~/dep/bin# kubectl get services -n ricaux
    NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
    cherry-infra-kong-proxy NodePort 10.108.153.144 <none> 80:32080/TCP,443:32443/TCP 2d22h
    ric-entry                          ClusterIP  10.102.52.174   <none> 80/TCP,443/TCP 2d22h
    service-ricaux-ves-http   ClusterIP  10.98.92.60       <none> 8080/TCP,8443/TCP 2d22h

    1. In the installation recipe you can control the ingress IPs:

      # ricip should be the ingress controller listening IP for the platform cluster
      # auxip should be the ingress controller listening IP for the AUX cluster
      extsvcaux:
        ricip: "10.0.0.1"
        auxip: "10.0.0.1"

      1. Federico Rossi
        I think I didn't explain my problem deeply enough, sorry.

        The problem is with the SMO Policy Control Panel web page.
        I'm connecting to the remote server running RIC/SMO over SSH, using the floating VM IP.
        On the web browser host, I use the floating IP and a dedicated port to access the Control Panel: 10.254.185.67:30091.
        But the Control Panel should use the local server addresses, 10.0.2.101 or the Kong address (I'm not sure which), to get the policy data,
        instead of the one I use to connect through the browser.
        "Http failure response for http://10.254.185.67:30091/a1-policy/v2/policy-types: 502 Bad Gateway" - error returned by the Control Panel web page.

        So, to sum up, the Control Panel web page tries to use the floating VM address to get policy data instead of the local server addresses, and the result is a failure.

        1. It's still not fully clear to me, but maybe this helps. Port 30091 maps to 8080, which is the nginx running on the controlpanel.

          If you look at the configuration:  kubectl get cm controlpanel-configmap -n nonrtric -o yaml

          The backend actually forwards the requests to the nonrtricgateway on port 9090, so the Control Panel is not using the Kong ingress in this case.

                  upstream backend {
                      server  nonrtricgateway:9090;
                  }
          
                  server {
                      listen 8080;
                      server_name localhost;
                      root /usr/share/nginx/html;
                      index index.html;
                      location /a1-policy/ {
                          proxy_pass  http://backend;
                      }
                       location /data-producer/ {
                          proxy_pass  http://backend;
                      }
                      location /data-consumer/ {
                          proxy_pass  http://backend;
                      }
          
          


          Now, if you look at the nonrtricgateway config: oc get configmap nonrtricgateway-configmap -n nonrtric -o yaml

          You can see how the requests are routed:


                    routes:
                    - id: A1-Policy
                      uri: https://policymanagementservice:9081
                      predicates:
                      - Path=/a1-policy/**
                    - id: A1-EI
                      uri: https://enrichmentservice:9083
                      predicates:
                      - Path=/data-producer/**,/data-consumer/**
          
          


          Your request http://10.254.185.67:30091/a1-policy/v2/policy-types is actually calling the policymanagementservice. You can try querying the policy management service directly and see if it works; maybe the error comes from there.

          Not the answer you were looking for, but it should give you a few more details.  You can change the configmaps and restart the pods to apply a different config.
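
          (A hedged sketch of querying the policy management service directly, using the service name and port from the gateway route above:)

            kubectl -n nonrtric port-forward svc/policymanagementservice 9081:9081 &
            curl -k https://localhost:9081/a1-policy/v2/policy-types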

  42. Hi,

     I am trying to install SMO on a VM with Ubuntu 18.04.6 and 4 CPUs.

    As per the comments of Javi G and Chirag Gandhi, I have updated my k8s... file to have:

    INFRA_DOCKER_VERSION=""
    INFRA_K8S_VERSION="1.15.9"
    INFRA_CNI_VERSION="0.7.5"
    INFRA_HELM_VERSION="2.17.0"

    But while running the "./k8s..." command, the pod count only reached 7/8 (starting from 0).

    Error:

    +wget https://storage.googleapis.com/kubernetes-helm/helm-v2.17.0-linux-amd64.tar.gz

    Resolving storage.google.apis.com 

    Connecting to storage.googleapis.com

    HTTP request sent, awaiting response ... 403 Forbidden


    +cd /root

    +rm -rf Helm 

    +mkdir Helm

    +cd Helm

    tar:  ../helm-v2.17.0-linux-amd64.tar.gz: Cannot Open: No such file or directory

    tar: Error is not recoverable: exiting now

    .

    .

    .

    Failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden


    I am new to this platform. I kindly request that you suggest a fix for these issues.
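
    For what it's worth, the old googleapis.com Helm download and chart-repo locations were retired; the replacements below are assumptions to verify (the stable-repo URL also appears in the install-script change quoted in an earlier comment):

      wget https://get.helm.sh/helm-v2.17.0-linux-amd64.tar.gz
      helm init --stable-repo-url=https://charts.helm.sh/stable --client-only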

  43. Anonymous

    Hi guys,


    In step 3 I get this error:

    + kubectl get pods --all-namespaces

    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    + kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    unable to recognize "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
    (the "unable to recognize" line repeats six times, once per resource in kube-flannel.yml)
    + wait_for_pods_running 8 kube-system
    + NS=kube-system
    + CMD='kubectl get pods --all-namespaces '
    + '[' kube-system '!=' all-namespaces ']'
    + CMD='kubectl get pods -n kube-system '
    + KEYWORD=Running
    + '[' 2 == 3 ']'
    + CMD2='kubectl get pods -n kube-system | grep "Running" | wc -l'
    ++ eval 'kubectl get pods -n kube-system | grep "Running" | wc -l'
    +++ kubectl get pods -n kube-system
    +++ grep Running
    +++ wc -l
    The connect...


    The part below keeps looping continuously:

    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    + NUMPODS=0
    + echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
    > waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
    + '[' 0 -lt 8 ']'
    + sleep 5
    ++ eval 'kubectl get pods -n kube-system | grep "Running" | wc -l'
    +++ kubectl get pods -n kube-system
    +++ grep Running
    +++ wc -l


    Any ideas? I have tried editing the infra.rc file based on the comments, but it isn't working.

    1. You might want to change the Docker version to "" and k8s to 1.16.0 (I believe the one present there is 1.15.9). You'll also have to make some modifications to the files in the dep folder (and subfolders) to fix the helm versions (they are not fixed by the cloud-init script).
