VM Minimum Requirements for RIC


NOTE: sudo access is required for installation

Getting Started PDF

Step 1: Obtaining the Deployment Scripts and Charts

NOTE: cd to the directory where the installation will live, e.g. /home/user

Run ...

$ sudo -i
$ git clone http://gerrit.o-ran-sc.org/r/it/dep -b bronze
$ cd dep
$ git submodule update --init --recursive --remote
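# Optional: a quick way to confirm the submodules were pulled (standard git command; lists each submodule's commit)
$ git submodule status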
Step 2: Generation of cloud-init Script 

This step generates a script that sets up a one-node Kubernetes cluster for installing the SMO components.  The resulting script can be run as the cloud-init script when launching the VM, or from a root shell after the VM is launched.

Note: because the O-RAN SC Bronze SMO consists of components from the ONAP Frankfurt release, the required versions of the infrastructure software stack (Docker, Kubernetes, and Helm) are those of the ONAP deployment.  This is a different combination from what the Near RT RIC requires.

Run ...

$ cd tools/k8s/etc
# Edit the infra.rc file to specify the versions of docker, kubernetes, and helm.
# For SMO, the versions are those required by the ONAP Frankfurt release.
# Comment out the lines under the "RIC tested" heading, and uncomment
# the lines under the "ONAP Frankfurt" heading.
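# For illustration, after the edit the active (uncommented) lines in infra.rc should reflect the ONAP
# Frankfurt combination.  The variable names and patch levels below are an example and may differ in
# your checkout; the major/minor versions match the generated script name (k_1_15, h_2_16, d_18_09):
INFRA_DOCKER_VERSION="18.09.7"
INFRA_K8S_VERSION="1.15.9"
INFRA_CNI_VERSION="0.7.5"
INFRA_HELM_VERSION="2.16.6"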

$ cd ../bin
$ ./gen-cloud-init.sh   # generates the infrastructure stack install script for the versions in infra.rc
# The generated script is used to prepare the one-node K8s cluster that the deployment runs on.
# With the versions above, the output file is "k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh".

Step 3: Installation of Kubernetes, Helm, Docker, etc.

Run ...

$ ./k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh

NOTE: Be patient, as this takes some time to complete. Upon completion of this script, the VM will be rebooted.  You will then need to log in once again.

$ sudo -i
$ kubectl get pods --all-namespaces  # There should be 9 pods running in the kube-system namespace.
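# To watch progress instead of re-running the command, the same check that the generated script
# itself uses (see the debug output in the comments below) can be run by hand:
$ kubectl get pods -n kube-system | grep Running | wc -l   # should eventually reach 9
$ kubectl get nodes                                        # the single node should report Ready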
Step 4:  Deploy SMO

The O-RAN SC Bronze SMO consists of components from three separate groups: ONAP, Non RT RIC, and RIC-AUX.  It is deployed by running a script under smo/bin.

Run ...

$ cd smo/bin
$ ./install

# If this is the first time SMO is installed on the SMO VM, it is suggested to run "./install initlocalrepo" instead.
# The extra argument also initializes the local Helm repo.

# Also, if this is the first time SMO is installed on the SMO VM, the installation may take hours to
# finish because it involves preparing the ONAP helm charts.  It is suggested to run the install as a
# nohup command in the background, in case the connection to the VM is interrupted.
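# For example (the log file name is only a suggestion):
$ nohup ./install initlocalrepo > smo-install.log 2>&1 &
$ tail -f smo-install.log   # follow progress; safe to disconnect from the VM and check back later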

# Upon successful deployment, the "kubectl get pods --all-namespaces" command should show 8 pods in the nonrtric
# namespace, 27 pods/jobs in the onap namespace, and 2 pods in the ricaux namespace, all in Running or Completed state.
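# For example, to check each namespace individually:
$ kubectl get pods -n nonrtric
$ kubectl get pods,jobs -n onap
$ kubectl get pods -n ricaux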
Helpful Hints

Kubectl commands:

kubectl get pods -n namespace - lists the pods running in the given namespace

kubectl logs -n namespace name_of_running_pod - shows the logs of a running pod
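
For example, with the namespaces used in this guide (the pod name below is only an illustration, taken from the comment thread at the bottom of this page):

kubectl get pods -n onap -o wide

kubectl logs -n onap dev-sdnc-sdnrdb-init-job-6v525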






28 Comments

  1. hello experts,

    After the ONAP deployment, I have a pod that is not ready but Completed. Is this normal? Thanks.

    dev-sdnc-sdnrdb-init-job-6v525 0/1 Completed 0 18m 10.244.0.159 smobronze <none> <none>

    Also, is it possible to set the pullPolicy to IfNotPresent for all of the ONAP charts?

    1. That is okay.  You can see that it is actually a job, not a regular pod.  You may view its status by:

      kubectl get jobs -n onap

      or even: 

      kubectl get jobs -n onap dev-sdnc-sdnrdb-init-job -o json
      1. Anonymous

        Hi Lusheng,

        Thanks for the quick reply. Now I have SMO fully deployed. Thanks.

        (10:57 dabs@smobronze bin) > sudo kubectl get jobs --all-namespaces
        NAMESPACE NAME COMPLETIONS DURATION AGE
        onap dev-sdnc-sdnrdb-init-job 1/1 34m 74m

  2. Anonymous

    Hi. I am new to O-RAN and trying to deploy SMO as well as RIC. I was able to install SMO with the given steps, but while deploying I keep getting the messages below:

    ===> Deploying OAM (ONAP Lite)
    ======> Deploying ONAP-lite
    Error: unknown command "deploy" for "helm"
    Run 'helm --help' for usage.
    ======> Waiting for ONAP-lite to reach operatoinal state
    0/7 SDNC-SDNR pods and 0/7 Message Router pods running
    0/7 SDNC-SDNR pods and 0/7 Message Router pods running
    0/7 SDNC-SDNR pods and 0/7 Message Router pods running


    None of these pods have come up for hours; in fact, none of them are even present. Did I miss any step, or do I need to install any component of ONAP before this?

    1. I had once encountered this problem:

      make sure deploy/undeploy plugins exist in /root/.helm:

      dabs@RICBronze:~/oran/dep/smo/bin/smo-deploy/smo-oom$sudo cp -R ./kubernetes/helm/plugins/ /root/.helm
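
      A quick way to confirm the plugins are in place after the copy (same path as above):

      sudo ls /root/.helm/plugins   # should list at least: deploy  undeploy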

      1. Thanks for prompt response.

        Even after copying the plugins, I am getting the same messages. But the initial error message I get is:

        ===> Deploying OAM (ONAP Lite)
        ======> Deploying ONAP-lite
        fetching local/onap
        Error: chart "onap" matching version "" not found in local index. (try 'helm repo update'). no chart name found
        mv: cannot stat '/root/.helm/plugins/deploy/cache/onap/charts/*': No such file or directory
        mv: cannot stat '/root/.helm/plugins/deploy/cache/onap-subcharts/common': No such file or directory
        rm: cannot remove '/root/.helm/plugins/deploy/cache/onap/requirements.lock': No such file or directory
        mv: cannot stat '/root/.helm/plugins/deploy/cache/onap/requirements.yaml': No such file or directory
        Error: could not find a ready tiller pod
        release "dev" deployed
        Error: could not find a ready tiller pod
        Error: could not find a ready tiller pod
        ======> Waiting for ONAP-lite to reach operatoinal state
        0/7 SDNC-SDNR pods and 0/7 Message Router pods running
        0/7 SDNC-SDNR pods and 0/7 Message Router pods running


        It seems that the ONAP charts are not present. I even tried "helm repo update", but no luck. Can you share how to get them?

        1. Chirag,

          Did steps 1 through 3 complete without error for your installation?  It appears that your k8s cluster is not running correctly (i.e. the error messages complaining about the tiller pod not being in ready state).

          Lusheng


        2. Your tiller pod is not ready. 

          1st, make sure the tiller pod is in running state

          2nd, make sure helm service is in running state:

          sudo helm serve

          3rd, make sure 'helm search onap' has 35 charts listed. If not, you can manually issue the 'make all' command:

          dabs@smobronze:~/oran/dep/smo/bin/smo-deploy/smo-oom/kubernetes$ sudo make all
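
          For reference, the 1st and 3rd checks can be done with commands like these (illustrative):

          sudo kubectl get pods -n kube-system | grep tiller   # the tiller pod should be Running
          sudo helm search onap                                # should list about 35 charts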


          1. Anonymous

            Zhengwei,

            I executed the instruction, sudo make all, but I got the following error.

            Downloading common from repo http://127.0.0.1:8879/charts
            Save error occurred: could not download http://127.0.0.1:8879/common-6.0.0.tgz: Get http://127.0.0.1:8879/common-6.0.0.tgz: dial tcp 127.0.0.1:8879: connect: connection refused
            Deleting newly downloaded charts, restoring pre-update state
            Error: could not download http://127.0.0.1:8879/common-6.0.0.tgz: Get http://127.0.0.1:8879/common-6.0.0.tgz: dial tcp 127.0.0.1:8879: connect: connection refused
            Makefile:61: recipe for target 'dep-robot' failed
            make[1]: *** [dep-robot] Error 1
            make[1]: Leaving directory '/root/dep/smo/bin/smo-deploy/smo-oom/kubernetes'
            Makefile:46: recipe for target 'robot' failed
            make: *** [robot] Error 2

            What should I do? Thank you.

            Tim

            1. Anonymous

              Have you performed the 2nd step?

              2nd, make sure helm service is in running state:

              sudo helm serve

              1. Anonymous

                Yes. But what response should I expect to receive?

                When I executed the 2nd step, the terminal got stuck and showed this message:

                Regenerating index. This may take a moment.

                Now serving you on 127.0.0.1:8879

                Should I keep waiting?

                Thanks for help.

                1. Anonymous

                  Yes. You can just leave 'sudo helm serve' running in one terminal (it is the Helm server), and go on with 'sudo make all' in another terminal.

                  1. Anonymous

                    Wow, thank you for your help!

                    I think my problem has been resolved.

          2. Anonymous

            I did all 3 steps; still, I am getting the same error:


            root@xxxx:~/dep/smo/bin# sudo ./install
            ===> Starting at Tue Sep 8 07:42:45 UTC 2020

            ===> Cleaning up any previous deployment
            ======> Deleting all Helm deployments
            Error: command 'delete' requires a release name
            ======> Clearing out all ONAP deployment resources
            Error from server (NotFound): namespaces "onap" not found
            No resources found.
            error: resource(s) were provided, but no name, label selector, or --all flag specified
            error: resource(s) were provided, but no name, label selector, or --all flag specified
            ======> Clearing out all RICAUX deployment resources
            Error from server (NotFound): namespaces "ricaux" not found
            Error from server (NotFound): namespaces "ricinfra" not found
            ======> Clearing out all NONRTRIC deployment resources
            Error from server (NotFound): namespaces "nonrtric" not found
            ======> Preparing for redeployment
            node/ebtic07 not labeled
            node/ebtic07 not labeled
            node/ebtic07 not labeled
            ======> Preparing working directory

            ===> Deploying OAM (ONAP Lite)
            ======> Deploying ONAP-lite
            Error: unknown command "deploy" for "helm"
            Run 'helm --help' for usage.
            ======> Waiting for ONAP-lite to reach operatoinal state
            0/7 SDNC-SDNR pods and 0/7 Message Router pods running
            0/7 SDNC-SDNR pods and 0/7 Message Router pods running
            0/7 SDNC-SDNR pods and 0/7 Message Router pods running

        3. What I found in my case is that the Helm version caused the issue. In step 2, I needed to set the version to 2.16.6 in infra.rc, and then things work fine. This is evident from the name of the script "k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh" (note the h_2_16 in it).
          1. Step 3 was going wrong: Helm was not getting initialized due to a proxy server. I installed certificates and got it working.

            Thanks for the help (smile)

        4. Anonymous

          I also ran into this. How did you solve it?

  3. Anonymous

    Hello experts ,

    I followed it step by step, but I got stuck at step 3.

    When I ran the script, k8s-1node-cloud-init-k ... .sh, I received the following message.

    The connection to the server localhost:8080 was refused - did you specify the right host or port? 

    I don't know if I did something wrong or missed something I need to pay attention to.

    How can I fix it? Thank you.

    Tim.



    1. Anonymous

      I ran into exactly the same problem. I would appreciate any help!

      1. Anonymous

        Can you please share the full debug output from running the script?

        1. Anonymous

          The whole situation is shown below.

          ++ eval 'kubectl get pods -n kube-system | grep "Running" | wc -l'

          +++ wc -l

          +++ grep Running

          +++ kubectl get pods -n kube-system

          The connection to the server localhost:8080 was refused - did you specify the right host or port?

          + NUMPODS=0

          + echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'

          > waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]

          + '[' 0 -lt 8 ']'

          + sleep 5


          Tim.

          1. Anonymous

            Hi,

            Do you have the following info in the output? The 'localhost:8080 refused' problem generally relates to the KUBECONFIG configuration.

            + cd /root

            + rm -rf .kube

            + mkdir -p .kube

            + cp -i /etc/kubernetes/admin.conf /root/.kube/config

            + chown root:root /root/.kube/config

            + export KUBECONFIG=/root/.kube/config

            + KUBECONFIG=/root/.kube/config

            + echo KUBECONFIG=/root/.kube/config

            + kubectl get pods --all-namespaces

            No resources found.

            + kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

            podsecuritypolicy.policy/psp.flannel.unprivileged created

            clusterrole.rbac.authorization.k8s.io/flannel created

            clusterrolebinding.rbac.authorization.k8s.io/flannel created

            serviceaccount/flannel created

            configmap/kube-flannel-cfg created

            daemonset.apps/kube-flannel-ds-amd64 created

            daemonset.apps/kube-flannel-ds-arm64 created

            daemonset.apps/kube-flannel-ds-arm created

            daemonset.apps/kube-flannel-ds-ppc64le created

            daemonset.apps/kube-flannel-ds-s390x created

            1. Anonymous

              But there is no admin.conf under /etc/kubernetes/

              1. Anonymous

                I suggest re-running the script and making sure 'kubeadm init' is successful:

                sudo ./k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh

                1. Anonymous

                  'kubeadm init' has the following error:


                  I0818 01:09:09.191921 32088 version.go:248] remote version is much newer: v1.18.8; falling back to: stable-1.15
                  [init] Using Kubernetes version: v1.15.12
                  [preflight] Running pre-flight checks
                  error execution phase preflight: [preflight] Some fatal errors occurred:
                  [ERROR Swap]: running with swap on is not supported. Please disable swap
                  [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

                  1. Anonymous

                    [ERROR Swap]: running with swap on is not supported. Please disable swap

                    disable swap and re-run the script.

                    sudo swapoff -a
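
                    Since the install script reboots the VM at the end (see step 3), it may also help to keep swap disabled across reboots, e.g. by commenting out the swap entry in /etc/fstab (illustrative command, adjust for your system):

                    sudo sed -i '/ swap / s/^/#/' /etc/fstab   # comments out fstab lines containing ' swap '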

                    1. Thank you for the kind help!  But after disabling swap and re-running the script, 'kubeadm init' reports a new error:


                      I0819 12:54:05.234698 10134 version.go:251] remote version is much newer: v1.18.8; falling back to: stable-1.16
                      [init] Using Kubernetes version: v1.16.14
                      [preflight] Running pre-flight checks
                      [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.6. Latest validated version: 18.09
                      error execution phase preflight: [preflight] Some fatal errors occurred:
                      [ERROR Port-6443]: Port 6443 is in use
                      [ERROR Port-10251]: Port 10251 is in use
                      [ERROR Port-10252]: Port 10252 is in use
                      [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
                      [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
                      [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
                      [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
                      [ERROR Port-10250]: Port 10250 is in use
                      [ERROR Port-2379]: Port 2379 is in use
                      [ERROR Port-2380]: Port 2380 is in use
                      [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
                      [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
                      To see the stack trace of this error execute with --v=5 or higher

                      1. Anonymous

                        It seems that you already have an incomplete k8s cluster. Try:

                        Turn off swap and re-run the script, 'sudo ./k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh', which performs a 'kubeadm reset' to clear any existing k8s cluster.
