Instructions on how to perform an SMO installation can be found here. Note that SMO currently consists of three discrete parts; in the near future these will be consolidated into a single operation.
Hi. I am new to O-RAN and trying to deploy SMO as well as the RIC. I was able to install SMO with the given steps, but while deploying I keep getting the messages below:
===> Deploying OAM (ONAP Lite)
======> Deploying ONAP-lite
Error: unknown command "deploy" for "helm"
Run 'helm --help' for usage.
======> Waiting for ONAP-lite to reach operatoinal state
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
None of these pods have come up for hours; in fact, none of them are even present. Did I miss a step, or do I need to install any ONAP component before this?
Even after copying the plugins, I am getting the same messages. But the initial error message I get is:
===> Deploying OAM (ONAP Lite)
======> Deploying ONAP-lite
fetching local/onap
Error: chart "onap" matching version "" not found in local index. (try 'helm repo update'). no chart name found
mv: cannot stat '/root/.helm/plugins/deploy/cache/onap/charts/*': No such file or directory
mv: cannot stat '/root/.helm/plugins/deploy/cache/onap-subcharts/common': No such file or directory
rm: cannot remove '/root/.helm/plugins/deploy/cache/onap/requirements.lock': No such file or directory
mv: cannot stat '/root/.helm/plugins/deploy/cache/onap/requirements.yaml': No such file or directory
Error: could not find a ready tiller pod
release "dev" deployed
Error: could not find a ready tiller pod
Error: could not find a ready tiller pod
======> Waiting for ONAP-lite to reach operatoinal state
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
It seems that the ONAP charts are not present. I even tried "helm repo update", but no luck. Can you share how to get them?
Did steps 1 through 3 complete without error for your installation? It appears that your k8s cluster is not running correctly (i.e. the error message complaining that the tiller pod is not in a ready state).
I did all 3 steps; still, I am getting the same error:
root@xxxx:~/dep/smo/bin# sudo ./install
===> Starting at Tue Sep 8 07:42:45 UTC 2020
===> Cleaning up any previous deployment
======> Deleting all Helm deployments
Error: command 'delete' requires a release name
======> Clearing out all ONAP deployment resources
Error from server (NotFound): namespaces "onap" not found
No resources found.
error: resource(s) were provided, but no name, label selector, or --all flag specified
error: resource(s) were provided, but no name, label selector, or --all flag specified
======> Clearing out all RICAUX deployment resources
Error from server (NotFound): namespaces "ricaux" not found
Error from server (NotFound): namespaces "ricinfra" not found
======> Clearing out all NONRTRIC deployment resources
Error from server (NotFound): namespaces "nonrtric" not found
======> Preparing for redeployment
node/ebtic07 not labeled
node/ebtic07 not labeled
node/ebtic07 not labeled
======> Preparing working directory
===> Deploying OAM (ONAP Lite)
======> Deploying ONAP-lite
Error: unknown command "deploy" for "helm"
Run 'helm --help' for usage.
======> Waiting for ONAP-lite to reach operatoinal state
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
I had a similar error when I tried to re-install SMO after the first installation (~/dep/smo/bin/uninstall && ~/dep/smo/bin/install).
The only solution I found was to delete the whole dep/ folder and restart the installation process from scratch. It worked, but since the ONAP Helm charts had to be generated again, it took hours.
Also, it seems the re-installation problem may come from install operations being performed directly after the uninstall ones, while some modules may not be fully uninstalled yet.
You may try adding a "sleep 20" just before the helm deploy command in the install script.
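Building on that suggestion, a slightly more robust variant (a sketch; the helper name and timeout are my own, not part of the install script) polls until the old namespaces are actually gone before redeploying, instead of sleeping a fixed 20 seconds:

```shell
# Hypothetical helper: block until a namespace from the previous install
# has finished terminating before the next helm deploy runs.
wait_ns_gone() {
  ns="$1"; tries="${2:-60}"
  while [ "$tries" -gt 0 ]; do
    # kubectl exits non-zero once the namespace no longer exists
    kubectl get ns "$ns" >/dev/null 2>&1 || return 0
    sleep 5
    tries=$((tries - 1))
  done
  return 1   # gave up waiting
}

# e.g. call before the helm deploy step in the install script:
# wait_ns_gone onap && wait_ns_gone nonrtric
```

This avoids guessing how long the uninstall takes on a given machine.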
4. When I run the install script, I get the following errors:
======> Deploying ONAP-lite
fetching local/onap
Error: chart "onap" matching version "" not found in local index. (try 'helm repo update'). no chart name found
mv: cannot stat '/root/.helm/plugins/deploy/cache/onap/charts/*': No such file or directory
mv: cannot stat '/root/.helm/plugins/deploy/cache/onap-subcharts/common': No such file or directory
rm: cannot remove '/root/.helm/plugins/deploy/cache/onap/requirements.lock': No such file or directory
mv: cannot stat '/root/.helm/plugins/deploy/cache/onap/requirements.yaml': No such file or directory
Error: no Chart.yaml exists in directory "/root/.helm/plugins/deploy/cache/onap"
release "dev" deployed
Can you try the "helm deploy" command? You might get "deploy plugin not found"; if so, you need to run "helm plugin install ~/.helm/plugins/deploy". If deploy is not found there, you can download the plugins to ~/.helm/plugins/ from (
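For reference, a sketch of restoring the Helm v2 deploy/undeploy plugins from a local ONAP OOM checkout (the checkout path and helper name are assumptions; adjust to wherever oom was cloned):

```shell
# Copy the OOM "deploy"/"undeploy" Helm v2 plugins into ~/.helm/plugins
# so that "helm deploy" becomes available again.
restore_deploy_plugins() {
  oom_dir="${1:-$HOME/oom}"   # assumed OOM checkout location
  mkdir -p "$HOME/.helm/plugins" || return 1
  cp -R "$oom_dir/kubernetes/helm/plugins/deploy" "$HOME/.helm/plugins/" || return 1
  cp -R "$oom_dir/kubernetes/helm/plugins/undeploy" "$HOME/.helm/plugins/" || return 1
}

# afterwards, "helm plugin list" should show deploy and undeploy
```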
Hi, I think SMO didn't deploy correctly the first time, so the corresponding folders were not copied to the .helm folder. Try deleting the smo-deploy folder and installing again; that way it will start the installation from scratch. It can take several hours.
What I found in my case is that the Helm version caused the issue: in step 2, I needed to set the version to 2.16.6 in infra.rc, and then things worked fine. This is evident from the name of the script "k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh" (note the h_2_16 in it).
I0818 01:09:09.191921 32088 version.go:248] remote version is much newer: v1.18.8; falling back to: stable-1.15
[init] Using Kubernetes version: v1.15.12
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
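The preflight failure above is resolved by turning swap off before re-running the script. A sketch (run as root; the function name and the fstab-path parameter are mine, added for illustration):

```shell
# Disable swap immediately and keep it off across reboots, since the
# kubelet refuses to run with swap enabled.
disable_swap() {
  fstab="${1:-/etc/fstab}"
  swapoff -a || return 1
  # comment out any swap entries so the setting survives a reboot
  sed -i.bak '/\sswap\s/s/^/#/' "$fstab"
}
```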
Thank you for the kind help! But after disabling swap and re-running the script, 'kubeadm init' gives a new error:
I0819 12:54:05.234698 10134 version.go:251] remote version is much newer: v1.18.8; falling back to: stable-1.16
[init] Using Kubernetes version: v1.16.14
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.6. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10251]: Port 10251 is in use
[ERROR Port-10252]: Port 10252 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
It seems that you already have an incomplete k8s cluster. Try this:
Turn off swap and re-run the script ('sudo ./k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh'), which performs a 'kubeadm reset' to clear any existing k8s cluster.
I am facing the exact same issue. I know this is after a very long time, but did you figure out the solution? Your help would be much appreciated.
Check for errors in the script output before the localhost error. In my case there was a Docker version error; I had to change the version in the k8s-1node-cloud-init-k_1_15-h_2_17-d_19_03.sh file.
Also, for this Docker version you have to change the Ubuntu version to "18.04.02" in the file, otherwise it won't find the package in the repository:
elif [[ ${UBUNTU_RELEASE} == 18.* ]]; then
  echo "Installing on Ubuntu $UBUNTU_RELEASE (Bionic Beaver)"
  if [ ! -z "${DOCKERV}" ]; then
    DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3" # THIS IS THE LINE I MODIFIED
+++ kubectl get pods -n kube-system
+++ grep Running
+++ wc -l
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ NUMPODS=0
+ echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
+ '[' 0 -lt 8 ']'
I am a novice in these technologies; your help would be much appreciated.
It seems Docker is not getting installed. Recently, I faced an issue installing docker.io 18.09.7 on Ubuntu 18.04.4. Try changing the Docker version to empty ("") instead of 18.09 in the infra.rc file. It should run fine, but I am not sure whether you will face other issues later on, as this is not tested.
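The infra.rc edit suggested above can be scripted. A sketch (the helper name is mine; the default file path matches the one mentioned later in this thread):

```shell
# Blank out the Docker version pin in infra.rc so apt installs whatever
# docker.io version the Ubuntu repository currently carries.
unpin_docker_version() {
  rc="${1:-dep/tools/k8s/etc/infra.rc}"
  sed -i 's/^INFRA_DOCKER_VERSION=.*/INFRA_DOCKER_VERSION=""/' "$rc"
}
```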
Please make sure your VM has more than 2 CPUs, otherwise docker.io will not be installed at all. I have tried Ubuntu 18.04.1 through 18.04.4; the Ubuntu version is not the problem. Going with the latest Docker can help you get the 35 charts, but then you will find that ONAP-lite cannot reach operational state. I feel we need to wait for an update...
After deploying SMO, how can I open the O1 dashboard? I tried opening http://localhost:8181/odlux/index.html but receive "connection refused". Can anyone please help?
I am following the deployment steps for SMO. Upon running the k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh script, I come across the following error. Kindly share the solution if anyone has found one.
E: Version '18.09.7-0ubuntu1~18.04.4' for 'docker.io' was not found
+ cat
./k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh: line 156: /etc/docker/daemon.json: No such file or directory
+ mkdir -p /etc/systemd/system/docker.service.d
+ systemctl enable docker.service
Failed to enable unit: Unit file docker.service does not exist.
Also, for this Docker version you have to change the Ubuntu version to "18.04.02" in the file, otherwise it won't find the package in the repository:
elif [[ ${UBUNTU_RELEASE} == 18.* ]]; then
  echo "Installing on Ubuntu $UBUNTU_RELEASE (Bionic Beaver)"
  if [ ! -z "${DOCKERV}" ]; then
    DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3" # THIS IS THE LINE I MODIFIED
Not every Docker version can be found in the repository. I found that version 19.03.6 for Ubuntu 18.04.3 is available and works well. 18.04.2 worked before but throws another error now.
Hi Travis, I have tried your command 'helm init --stable-repo-url=https://charts.helm.sh/stable --client-only' but face the same issue. Is it required to rerun everything from the start?
I believe so, yes. If that command still does not work, try another of the commands mentioned in the link I posted. I'm assuming this happened after running the k8s script?
Please check whether the Docker image is getting pulled or not (kubectl describe po <pod name> -n onap). I faced the same issue and saw the Docker image was not getting pulled: Docker CE only allows 100 pulls every 6 hours, and k8s sometimes exceeds that many tries.
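To spot the rate-limit symptom quickly across all pods, one can scan cluster events for pull failures. A sketch (the helper name is mine; the grep pattern just matches the usual wording of such events):

```shell
# List recent events that look like image-pull failures; hitting the
# Docker Hub pull limit typically shows up as "toomanyrequests".
pull_failures() {
  kubectl get events --all-namespaces |
    grep -iE 'errimagepull|imagepullbackoff|toomanyrequests|failed to pull'
}
```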
I had the exact same logs. Then I found out those 'release "xxx" deployed' messages were not actually deployed at all. You might want to check the deploy logs, which are located in ~/.helm/plugin/deploy/cache/*/logs. In my case, it complained about 'no matches for kind "StatefulSet" in version'. It was solved after I switched the k8s version to 1.15.9.
This infra.rc works for me:
INFRA_DOCKER_VERSION=""
INFRA_HELM_VERSION="2.17.0"
INFRA_K8S_VERSION="1.15.9"
INFRA_CNI_VERSION="0.7.5"
I left the Docker version blank so I didn't have to edit the script as mentioned above. 19.03.6 for Ubuntu 18.04.3 is installed at this time.
The SMO installation was successful; I ran it on an AWS instance. Any idea how one can check that SMO is functioning? I mean, is there any curl command or web page that can be tried?
Hi, I'm not sure about that. What I did was deploy a RIC in another VM and configure it to work with the NONRTRIC; that way you know that both the NONRTRIC and the RIC are working...
elif [[ ${UBUNTU_RELEASE} == 18.* ]]; then
  echo "Installing on Ubuntu $UBUNTU_RELEASE (Bionic Beaver)"
  if [ ! -z "${DOCKERV}" ]; then
    DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3"
  fi
.
.
dep/smo/bin/install
.
.
if [ "$1" == "initlocalrepo" ]; then
  echo && echo "===> Initialize local Helm repo"
  rm -rf ~/.helm #&& helm init -c # without this step helm serve may not work.
  helm init --stable-repo-url=https://charts.helm.sh/stable --client-only
  helm serve &
  helm repo add local http://127.0.0.1:8879
fi
I need help running dcaemod using SMO Lite. Can I know which other ONAP components should be enabled for running dcaemod? Is there any dependency on other ONAP modules?
There seems to be a mismatch in the VM minimum requirements on this page and the PDF linked in that section. The RAM required is mentioned as 32 GB and the CPU cores as 8; however, the "Getting Started PDF" mentions the required RAM as 16 GB and CPU cores as 4. So how much minimum RAM and CPU cores does the SMO VM actually require? Also, if these are the minimum requirements, what are the standard or recommended VM requirements? I suspect that the Getting Started PDF for Near Realtime RIC installation has been mistakenly linked here. Could someone confirm this?
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ NUMPODS=0
+ echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
+ '[' 0 -lt 8 ']'
+ sleep 5
I tried to solve this problem by reading the comments on this subject, but the issue still remains... My Ubuntu version is 18.04.1 LTS and I followed the previous steps.
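One common cause of the "localhost:8080 was refused" message is simply that kubectl has no kubeconfig yet, so it falls back to the unauthenticated local port. A sketch of the standard fix after kubeadm init (the source path is the kubeadm default; the helper name is mine):

```shell
# Point kubectl at the admin kubeconfig written by kubeadm; without this,
# kubectl tries localhost:8080 and the connection is refused.
setup_kubeconfig() {
  src="${1:-/etc/kubernetes/admin.conf}"
  mkdir -p "$HOME/.kube" || return 1
  cp "$src" "$HOME/.kube/config"
}

# then "kubectl get nodes" should reach the API server
```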
I am trying to install the Cherry release. However, compared to Bronze, I don't see the "dep/smo/bin" folder anymore, so I can't run the install command. Can anyone suggest an alternative? All steps through step 3 are successful; only step 4 is not valid for the Cherry release.
Yes, that's exactly what I did; I also changed the infra.rc file. I am also able to run the RIC-related services on a separate machine, but it's SMO that is not getting installed. What are the steps to upgrade from the Bronze to the Cherry release?
Unlike Bronze, Cherry doesn't have a single script (dep/smo/bin/install) that installs ONAP, NONRTRIC, and ric-aux. Instead, they need to be installed individually.
2. Update "dep/tools/k8s/etc/infra.rc" with the versions below:
INFRA_DOCKER_VERSION=""
INFRA_K8S_VERSION="1.15.9"
INFRA_CNI_VERSION="0.7.5"
INFRA_HELM_VERSION="2.17.0"
3. Update the Ubuntu version in "dep/tools/k8s/bin/k8s-1node-cloud-init-k_1_15-h_2_17-d_cur.sh" as:
elif [[ ${UBUNTU_RELEASE} == 18.* ]]; then
  echo "Installing on Ubuntu $UBUNTU_RELEASE (Bionic Beaver)"
  if [ ! -z "${DOCKERV}" ]; then
    DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3"
  fi
Note: unlike Bronze, in Cherry there is no comprehensive installation file "dep/smo/bin/install" that installs all SMO components (ONAP, NONRTRIC, RIC-AUX, and RIC-INFRA); instead, they need to be installed manually in the order ONAP -> NONRTRIC -> RIC-AUX.
4. Install ONAP first; this can take a very long time, up to 6-7 hours, so don't give up! Uncomment all commented sections in the file "dep/tools/onap/install", then:
cd dep/tools/onap/
./install initlocalrepo
It is quite possible that the execution gets stuck at the line below:
4/7 SDNC-SDNR pods and 7/7 Message Router pods running
In that situation, stop the execution and manually perform the remaining steps in the scripts, which are basically steps 5 and 6 below. Otherwise, ignore steps 5 and 6, as they are automatically part of the install script.
5. Install Non-RT RIC
a. Update "baseUrl" in "dep/nonrtric/RECIPE_EXAMPLE/example_recipe.yaml"
b. cd dep/bin
c. ./deploy-nonrtric -f ~/dep/nonrtric/RECIPE_EXAMPLE/example_recipe.yaml
d. Wait until all pods in "kubectl get pods -n nonrtric" are running
6. Install ric-aux & ric-infra
kubectl create ns ricinfra
cd ~/dep/ric-aux/helm/infrastructure
helm dep update
cd ../
helm install -f ~/dep/RECIPE_EXAMPLE/AUX/example_recipe.yaml --name cherry-infra --namespace ricaux ./infrastructure
cd ves/
helm dep update
cd ..
helm install -f ~/dep/RECIPE_EXAMPLE/AUX/example_recipe.yaml --name cherry-ves --namespace ricaux ./ves
Wait until the ric-aux and ric-infra pods are up and running; verify with "kubectl get pods --all-namespaces".
Seemingly, a mismatch between the k8s, Docker, and Helm versions is causing many issues. Could anyone post the latest working combination? I'm using Ubuntu Server 18.04.05 and am not able to complete the steps successfully.
While running the install script, I get the error below:
error: resource(s) were provided, but no name, label selector, or --all flag specified
error: resource(s) were provided, but no name, label selector, or --all flag specified
======> Clearing out all RICAUX deployment resources
Error from server (NotFound): namespaces "ricaux" not found
Error from server (NotFound): namespaces "ricinfra" not found
======> Clearing out all NONRTRIC deployment resources
Error from server (NotFound): namespaces "nonrtric" not found
======> Preparing for redeployment
Error: no repo named "local" found
+ helm repo add local http://127.0.0.1:8879/charts
"local" has been added to your repositories
...
Downloading nonrtric-common from repo http://127.0.0.1:8879/charts
Deleting outdated charts
Successfully packaged chart and saved it to: /tmp/nonrtricgateway-1.0.0.tgz
No requirements found in /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/nonrtric/helm/nonrtric/charts.
Error: unknown flag: --force-update
Finished Packaging NONRTRIC components [controlpanel a1controller a1simulator policymanagementservice enrichmentservice rappcatalogueservice nonrtricgateway]
...
Downloading nonrtric-common from repo http://127.0.0.1:8879/charts
Deleting outdated charts
Successfully packaged chart and saved it to: /tmp/nonrtricgateway-1.0.0.tgz
No requirements found in /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/nonrtric/helm/nonrtric/charts.
Error: unknown flag: --force-update
Finished Packaging NONRTRIC components [controlpanel a1controller a1simulator policymanagementservice enrichmentservice rappcatalogueservice nonrtricgateway]
namespace/nonrtric created
configmap/nonrtric-recipe created
Deploying NONRTRIC
helm install -f /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml --namespace nonrtric --name r2-dev-nonrtric /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/bin/../nonrtric/helm/nonrtric
Error: release r2-dev-nonrtric failed: no objects visited
======> Waiting for NONRTRIC to reach operatoinal state
example_recipe.yaml and
When trying to run the last line manually, this is what I get:
yev@yev:/media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC$ sudo helm install -f /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml --namespace nonrtric --name r2-dev-nonrtric /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/bin/../nonrtric/helm/nonrtric
Error: a release named r2-dev-nonrtric already exists.
Run: helm ls --all r2-dev-nonrtric; to check the status of the release
Or run: helm del --purge r2-dev-nonrtric; to delete it
yev@yev:/media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC$ sudo helm ls --all r2-dev-nonrtric
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
r2-dev-nonrtric 1 Wed Jun 9 15:43:42 2021 FAILED nonrtric-2.0.0 nonrtric
Purging and restarting again doesn't help.
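When a FAILED release blocks reinstallation like this, a fuller cleanup than a plain purge sometimes helps. A sketch using the Helm v2 commands from this thread (the namespace deletion and the helper name are my additions, not from the install script):

```shell
# Purge the failed Helm v2 release and remove its namespace so the next
# "helm install --name ..." starts from a clean slate.
purge_failed_release() {
  rel="$1"; ns="$2"
  helm del --purge "$rel" || true   # tolerate "release not found"
  kubectl delete ns "$ns" --ignore-not-found
}

# e.g. purge_failed_release r2-dev-nonrtric nonrtric
```

Note that namespace deletion can take a while; wait until "kubectl get ns" no longer lists it before reinstalling.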
I made a folder called charts in /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/nonrtric/helm/nonrtric and copied Chart.yaml and values.yaml from nonrtric into it. Then I tried running the install script and got:
======> Deploying ONAP-lite
fetching local/onap
Error: chart "onap" matching version "" not found in local index. (try 'helm repo update'). no chart name found
mv: cannot stat '/home/yev/.helm/plugins/deploy/cache/onap/charts/*': No such file or directory
mv: cannot stat '/home/yev/.helm/plugins/deploy/cache/onap-subcharts/common': No such file or directory
rm: cannot remove '/home/yev/.helm/plugins/deploy/cache/onap/requirements.lock': No such file or directory
mv: cannot stat '/home/yev/.helm/plugins/deploy/cache/onap/requirements.yaml': No such file or directory
Error: no Chart.yaml exists in directory "/home/yev/.helm/plugins/deploy/cache/onap"
release "dev" deployed
So I had to run sudo make all in /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-oom/kubernetes
and I made sure that my copied charts existed afterwards. Then I ran the install script again and saw a different error:
Successfully packaged chart and saved it to: /tmp/nonrtricgateway-1.0.0.tgz
Error: error unpacking values.yaml in nonrtric: chart metadata (Chart.yaml) missing
Error: unknown flag: --force-update
Finished Packaging NONRTRIC components [controlpanel a1controller a1simulator policymanagementservice enrichmentservice rappcatalogueservice nonrtricgateway]
namespace/nonrtric created
configmap/nonrtric-recipe created
Deploying NONRTRIC
helm install -f /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml --namespace nonrtric --name r2-dev-nonrtric /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/bin/../nonrtric/helm/nonrtric
Error: error unpacking Chart.yaml in nonrtric: chart metadata (Chart.yaml) missing
Error from server (BadRequest): a container name must be specified for pod bronze-infra-kong-68657d8dfd-285kf, choose one of: [ingress-controller proxy]
Error from server (BadRequest): container "ingress-controller" in pod "bronze-infra-kong-68657d8dfd-285kf" is waiting to start: trying and failing to pull image
The a1-sim-std2-0 in nonrtric namespace is not coming up.
The logs present this message:
Version folder for simulator: STD_2.0.0
APIPATH set to: /usr/src/app/api/STD_2.0.0
PYTHONPATH set to: /usr/src/app/src/common
src/start.sh: line 38: cd: STD_2.0.0: No such file or directory
Path to callBack.py: /usr/src/app/src
Path to main.py: /usr/src/app/src
python: can't open file 'main.py': [Errno 2] No such file or directory
python: can't open file 'callBack.py': [Errno 2] No such file or directory
root@ip-172-31-36-165:~# kubectl get nos
error: the server doesn't have a resource type "nos"
This pod uses the image "nexus3.o-ran-sc.org:10002/o-ran-sc/a1-simulator:2.0.0". Below is the pod describe:
After running "./install initlocalrepo" there are 9 "kube-system" pods running, 26 "onap" pods running, 1 "onap" job completed, and 3 "onap" jobs in "Init:Error". Not sure if the errored jobs matter, since there is 1 completed job?
In any case, none of the other pods exist. The console output shows the following:
...
Packaging NONRTRIC component [nonrtricgateway]
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
Saving 1 charts
Downloading nonrtric-common from repo http://127.0.0.1:8879/charts
Deleting outdated charts
Successfully packaged chart and saved it to: /tmp/nonrtricgateway-1.0.0.tgz
No requirements found in /root/dep/smo/bin/smo-deploy/smo-dep/nonrtric/helm/nonrtric/charts.
Error: unknown flag: --force-update
Finished Packaging NONRTRIC components [controlpanel a1controller a1simulator policymanagementservice enrichmentservice rappcatalogueservice nonrtricgateway]
Chart name-
namespace/nonrtric created
Install Kong- false
configmap/nonrtric-recipe created
Deploying NONRTRIC
helm install -f /root/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml --namespace nonrtric --name r3-dev-nonrtric /root/dep/smo/bin/smo-deploy/smo-dep/bin/../nonrtric/helm/nonrtric
Error: render error in "nonrtric/templates/pv2.yaml": template: nonrtric/templates/pv2.yaml:23:16: executing "nonrtric/templates/pv2.yaml" at <include "common.namespace.nonrtric" .>: error calling include: template: no template "common.namespace.nonrtric" associated with template "gotpl"
======> Waiting for NONRTRIC to reach operatoinal state
No resources found.
No resources found.
No resources found.
No resources found.
No resources found.
0/1 A1Controller pods, 0/1 ControlPanel pods, 0/1 DB pods, 0/1 PolicyManagementService pods, and 0/4 A1Sim pods running
No resources found.
No resources found.
No resources found.
No resources found.
No resources found.
...
This waiting just loops forever. Please advise. Thank you.
Concerning the Bronze install, I used the contents of the requirements.yaml above, but encountered the same error. What changes need to be made to the file /root/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml? Thanks.
...
Deploying NONRTRIC
helm install -f /root/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml --namespace nonrtric --name r3-dev-nonrtric /root/dep/smo/bin/smo-deploy/smo-dep/bin/../nonrtric/helm/nonrtric
Error: render error in "nonrtric/templates/pv2.yaml": template: nonrtric/templates/pv2.yaml:23:16: executing "nonrtric/templates/pv2.yaml" at <include "common.namespace.nonrtric" .>: error calling include: template: no template "common.namespace.nonrtric" associated with template "gotpl"
======> Waiting for NONRTRIC to reach operatoinal state
I ran the deploy-nonrtric command above as you suggested. Here is the last part of the output, where it errors, plus a new "already exists" error. Please advise? Thanks.
...
Successfully packaged chart and saved it to: /tmp/nonrtricgateway-1.0.0.tgz
No requirements found in /root/dep/smo/bin/smo-deploy/smo-dep/nonrtric/helm/nonrtric/charts.
Error: unknown flag: --force-update
Finished Packaging NONRTRIC components [controlpanel a1controller a1simulator policymanagementservice enrichmentservice rappcatalogueservice nonrtricgateway]
Chart name-
Install Kong- false
Error from server (AlreadyExists): configmaps "nonrtric-recipe" already exists
Deploying NONRTRIC
helm install -f ../RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml --namespace nonrtric --name r3-dev-nonrtric /root/dep/smo/bin/smo-deploy/smo-dep/bin/../nonrtric/helm/nonrtric
Error: render error in "nonrtric/templates/pv2.yaml": template: nonrtric/templates/pv2.yaml:23:16: executing "nonrtric/templates/pv2.yaml" at <include "common.namespace.nonrtric" .>: error calling include: template: no template "common.namespace.nonrtric" associated with template "gotpl"
The error states that there is no requirements file. Did you check whether there is a requirements.yaml in the directory /root/dep/smo/bin/smo-deploy/smo-dep/nonrtric/helm/nonrtric?
If not, create the file with the following content:
In the Bronze dep/smo/bin/install script, it appears that the following git repo is missing the requirements.yaml file in the nonrtric directory you mentioned.
How did you execute ./install again? When I create this file and run ./install again, I get this error: 0/7 SDNC-SDNR pods and 0/7 Message Router pods running. I have to remove the whole "dep" folder and re-run the entire process to avoid this error, but "smo-deploy" doesn't exist until I run the ./install command. Can you provide more steps?
I'm facing the same error as Junior Salem and Deepak Dutt with the pod "cherry-infra-kong" while trying to install the Cherry release.
Error from server (BadRequest): container "ingress-controller" in pod "cherry-infra-kong-7d6555954f-thh7j" is waiting to start: trying and failing to pull image
# Kong Ingress Controller's primary purpose is to satisfy Ingress resources
# created in k8s. It uses CRDs for more fine grained control over routing and
# for Kong specific configuration.
ingressController:
  enabled: true
  image:
    repository: kong/kubernetes-ingress-controller
    tag: 0.7.0
++ eval 'kubectl get pods -n kube-system | grep "Running" | wc -l'
+++ kubectl get pods -n kube-system
+++ grep Running
+++ wc -l
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ NUMPODS=0
+ echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
+ '[' 0 -lt 8 ']'
+ sleep 5
This happened after I performed these steps from a comment:
After step 1 and before step 2, change the following files:
elif [[ ${UBUNTU_RELEASE} == 18.* ]]; then
  echo "Installing on Ubuntu $UBUNTU_RELEASE (Bionic Beaver)"
  if [ ! -z "${DOCKERV}" ]; then
    DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3"
  fi
.
.
dep/smo/bin/install
.
.
if [ "$1" == "initlocalrepo" ]; then
  echo && echo "===> Initialize local Helm repo"
  rm -rf ~/.helm #&& helm init -c # without this step helm serve may not work.
  helm init --stable-repo-url=https://charts.helm.sh/stable --client-only
  helm serve &
  helm repo add local http://127.0.0.1:8879
fi
.
.
If anyone has a solution, it would be very helpful to me.
I deployed SMO for the first time; the process was going well for quite a while, but it suddenly got stuck at the step below, showing no activity and no error.
Does anyone know the CNI configuration required for this exercise? My kubelet services are probably not starting. Any help is much appreciated, thanks.
Step 3: Installation of Kubernetes, Helm, Docker, etc.
Run ...
$ ./k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred: timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker. Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
$ systemctl status kubelet
kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Tue 2021-08-24 01:10:22 EDT; 2h 29min ago
Docs: https://kubernetes.io/docs/home/
Main PID: 10034 (kubelet)
Tasks: 21 (limit: 1112)
CGroup: /system.slice/kubelet.service
└─10034 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup
Aug 24 03:39:36 aaronchu-ubuntu1804-libvirt2 kubelet[10034]: E0824 03:39:36.824206 10034 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNo
Aug 24 03:39:40 aaronchu-ubuntu1804-libvirt2 kubelet[10034]: W0824 03:39:40.667054 10034 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Aug 24 03:39:41 aaronchu-ubuntu1804-libvirt2 kubelet[10034]: E0824 03:39:41.838700 10034 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNo
Aug 24 03:39:42 aaronchu-ubuntu1804-libvirt2 kubelet[10034]: E0824 03:39:42.096513 10034 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied na
Aug 24 03:39:45 aaronchu-ubuntu1804-libvirt2 kubelet[10034]: W0824 03:39:45.667522 10034 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
I have noticed that the script does not seem to try to install docker. By the time it needs docker, the system throws an error. Could somebody tell me if I need to manually install docker beforehand?
You don't need to manually pre-install Docker. As a temporary workaround, you can leave the INFRA_DOCKER_VERSION parameter in infra.rc blank. This will allow you to pass through this step successfully.
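For illustration only (the variable name is taken from the comment above; the exact file layout may differ by release), the workaround amounts to an empty version string in infra.rc:

```shell
# In the infra.rc file edited in step 2 of this guide:
# leaving the version empty lets the script install whatever docker
# version the distro repository currently provides, instead of pinning
# a version that may no longer exist in the repo.
INFRA_DOCKER_VERSION=""
```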
Yes, you are right. It seems the pinned docker version caused docker not to be installed. After leaving it blank, docker is installed, but Kubernetes does not seem to create a networking container such as flannel, so the coredns containers failed to come up and Kubernetes only launched 7 containers instead of 8.
→ I guess that it should be one of these IPs:
root@smo:~/dep/bin# kubectl get services -n ricaux
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
cherry-infra-kong-proxy   NodePort    10.108.153.144   <none>        80:32080/TCP,443:32443/TCP   2d22h
ric-entry                 ClusterIP   10.102.52.174    <none>        80/TCP,443/TCP               2d22h
service-ricaux-ves-http   ClusterIP   10.98.92.60      <none>        8080/TCP,8443/TCP            2d22h
On the installation recipe you can control the ingress IPs:
# ricip should be the ingress controller listening IP for the platform cluster
# auxip should be the ingress controller listening IP for the AUX cluster
extsvcaux:
  ricip: "10.0.0.1"
  auxip: "10.0.0.1"
Federico Rossi i think that i don't explain my problem deep enough, sorry.
The problem is with the SMO Policy Control Panel webpage. I am connecting to the remote server with RIC/SMO over SSH using the floating VM IP. On the browser host, I use the floating IP and the dedicated port to access the Control Panel: 10.254.185.67:30091. But the Control Panel should use the local server addresses, 10.0.2.101 or the kong address (I am not sure which), to get the policy data, instead of the one I use to connect through the browser. "Http failure response for http://10.254.185.67:30091/a1-policy/v2/policy-types: 502 Bad Gateway" is the error returned by the Control Panel webpage.
So, to sum up, the Control Panel webpage tries to use the floating VM address to get policy data instead of the local server addresses, and the result is failure.
Your request http://10.254.185.67:30091/a1-policy/v2/policy-types is actually calling the policy management service. You can try to query the policy management service directly and see if it works. Maybe the 502 error comes from there.
Not the answer you were looking for but should give you a little more details. You can change the configmaps and restart the pods to apply a different config.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ NUMPODS=0
+ echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
+ '[' 0 -lt 8 ']'
+ sleep 5
++ eval 'kubectl get pods -n kube-system | grep "Running" | wc -l'
+++ kubectl get pods -n kube-system
+++ grep Running
+++ wc -l
Any ideas? I have tried editing the infra.rc file based on the comments, but it isn't working.
You might want to change the docker version to "" and k8s to 1.16.0 (I believe the one present there is 1.15.9). I also had to make some modifications to all the files in the dep folder (and subfolders) to fix the helm versions (they are not fixed by the cloud_init.sh script).
I am trying to do the SMO installation in a virtual machine but I cannot, because the following message appears: "Unsupported kubernetes version required".
I installed kubernetes 1.17.2 and uninstalled kubernetes 1.23.4, but it reinstalled 1.23.4 and I could not finish the k8s script. Do you know why? Thank you. (I installed helm 2.17.0)
I am almost finished with the SMO installation, but I am facing this problem. Can somebody figure it out?
root@oran-smo:~/dep/smo/bin# ./install ===> Starting at Fri Mar 4 11:42:18 UTC 2022
===> Cleaning up any previous deployment
======> Deleting all Helm deployments
Error: command 'delete' requires a release name
======> Clearing out all ONAP deployment resources
Error from server (NotFound): namespaces "onap" not found
No resources found
error: resource(s) were provided, but no name, label selector, or --all flag specified
error: resource(s) were provided, but no name, label selector, or --all flag specified
======> Clearing out all RICAUX deployment resources
Error from server (NotFound): namespaces "ricaux" not found
Error from server (NotFound): namespaces "ricinfra" not found
======> Clearing out all NONRTRIC deployment resources
Error from server (NotFound): namespaces "nonrtric" not found
======> Preparing for redeployment
node/oran-smo not labeled
node/oran-smo not labeled
node/oran-smo not labeled
======> Preparing working directory
I'll answer the question myself. Probably someone already knew this, but I am still putting it here as a reference for newcomers. Any comments are welcome.
Below is my notes list for SMO quick deployment in the E release.
Do not try the steps at the beginning of this page; they are for the Bronze release. You will no longer find an smo folder in step 4. (I got stuck there and had to restart.)
Create Ubuntu 20.04 VM. (I allocated 4 CPUs and 16G RAM for it)
Follow the steps in dep/smo-install/README.md in the Quick Installation section. After quite a while, the following output appears:
# cd dep
# smo-install/scripts/layer-2/2-install-oran.sh
(long output ...)
NAME STATUS AGE
kube-system Active 30m
kube-public Active 30m
kube-node-lease Active 30m
default Active 29m
onap Active 13m
nonrtric Active 109s
I am trying out the SMO installation and I am running into possibly an infinite loop in step 3.
I must mention that I separately installed docker and left the version blank in the infra.rc file, since I was getting the
"The connection to the server localhost:8080 was refused - did you specify the right host or port?" error. I was able to get past that by doing the above, but I am stuck at only 5/8 pods.
+ NUMPODS=5
+ echo '> waiting for 5/8 pods running in namespace [kube-system] with keyword [Running]'
> waiting for 5/8 pods running in namespace [kube-system] with keyword [Running]
+ '[' 5 -lt 8 ']'
+ sleep 5
This has been displayed for a very long time now. Any help to overcome this issue would be highly appreciated.
I am trying to install the Cherry release of SMO. I managed to successfully complete the first 3 steps of the installation. I am having the below issue with the 4th step: the step remains in a loop forever. I have checked the output for the pod state in the onap namespace. Some pods remain in Init state and some frequently crash. Find below the output of the pod state. Can anyone guide me on how to resolve this issue?
* Trying 127.0.0.1:32080...
* TCP_NODELAY set
* connect to 127.0.0.1 port 32080 failed: Connection timed out
* Failed to connect to localhost port 32080: Connection timed out
* Closing connection 0
curl: (28) Failed to connect to localhost port 32080: Connection timed out
root@ip-172-31-93-71:/home/ubuntu/ric-dep/bin#
I had the same problem; the core-dns pods were always in status "Pending". It is necessary to deploy a Container Network Interface (CNI) based pod network add-on so that your pods can communicate with each other. Sometimes (I don't know why) these pods don't get created correctly, and you need to force them.
To solve this you can execute, while the others pods are deploying:
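The command itself was lost from this comment. A plausible reconstruction, based on the flannel manifest URL that the k8s script applies elsewhere in this thread (treat it as a sketch, not the poster's exact command):

```shell
# Deploy the flannel CNI add-on, then delete the Pending coredns pods
# so the deployment recreates them on the new pod network.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl -n kube-system delete pod -l k8s-app=kube-dns
```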
Need to remind folks to check where the issue is being discussed. This page is dedicated to SMO installation, and this does not seem to be related to SMO installation.
Mahesh Jethanandani, earlier there used to be a consolidated step-by-step guide for SMO installation (authored by Farheen Cefalu), up until the Bronze release. That link is no longer available. Could you please share the link for the Kubernetes-cluster-based installation of SMO, and the Gerrit links for the software downloads?
Hi santhosh kumar, that process is no longer valid. SMO is now three different projects. Depending on what you are trying to do, you can look up docs.o-ran-sc.org for installation instructions.
Zhengwei Gao
hello experts,
After ONAP deployment, I have a pod that is not ready but Completed. Is this normal? Thanks.
dev-sdnc-sdnrdb-init-job-6v525 0/1 Completed 0 18m 10.244.0.159 smobronze <none> <none>
and is it possible to set the pullPolicy to IfNotPresent for all the charts of onap?
user-d3360
That is okay. You can see that it is actually a job, not a regular pod. You may view its status by:
or even:
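The commands themselves were dropped from this reply. A hypothetical reconstruction (the job name and namespace are taken from the listings elsewhere in this thread), since a Completed pod backing a Job is expected:

```shell
# Inspect the Job rather than its pod: a Job's pod ends in Completed
# (0/1 Ready) once the Job has succeeded, which is normal.
kubectl -n onap get job dev-sdnc-sdnrdb-init-job
kubectl -n onap describe job dev-sdnc-sdnrdb-init-job
```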
Anonymous
Hi Lusheng,
Thanks for the quick reply. I now have SMO fully deployed. Thanks.
(10:57 dabs@smobronze bin) > sudo kubectl get jobs --all-namespaces
NAMESPACE NAME COMPLETIONS DURATION AGE
onap dev-sdnc-sdnrdb-init-job 1/1 34m 74m
Anonymous
Hi. I am new to O-RAN and trying to deploy SMO as well as RIC. I was successfully able to install SMO with the given steps but while deploying I keep on getting below messages:-
===> Deploying OAM (ONAP Lite)
======> Deploying ONAP-lite
Error: unknown command "deploy" for "helm"
Run 'helm --help' for usage.
======> Waiting for ONAP-lite to reach operatoinal state
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
None of these pods have come up for hours; in fact, none of them are even present. Did I miss any step, or do I need to install any component of ONAP before this?
Zhengwei Gao
I had once encountered this problem:
make sure deploy/undeploy plugins exist in /root/.helm:
dabs@RICBronze:~/oran/dep/smo/bin/smo-deploy/smo-oom$sudo cp -R ./kubernetes/helm/plugins/ /root/.helm
Chirag Gandhi
Thanks for prompt response.
Even after copying the plugins, I am getting the same messages. But now the initial error message is:
===> Deploying OAM (ONAP Lite)
======> Deploying ONAP-lite
fetching local/onap
Error: chart "onap" matching version "" not found in local index. (try 'helm repo update'). no chart name found
mv: cannot stat '/root/.helm/plugins/deploy/cache/onap/charts/*': No such file or directory
mv: cannot stat '/root/.helm/plugins/deploy/cache/onap-subcharts/common': No such file or directory
rm: cannot remove '/root/.helm/plugins/deploy/cache/onap/requirements.lock': No such file or directory
mv: cannot stat '/root/.helm/plugins/deploy/cache/onap/requirements.yaml': No such file or directory
Error: could not find a ready tiller pod
release "dev" deployed
Error: could not find a ready tiller pod
Error: could not find a ready tiller pod
======> Waiting for ONAP-lite to reach operatoinal state
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
Seems that the onap charts are not present. I even tried "helm repo update" but no luck. Can you share how to get them?
user-d3360
Chirag,
Did steps 1 through 3 complete without error for your installation? It appears that your k8s cluster is not running correctly (i.e., the error messages complaining about the tiller pod not being in ready state).
Lusheng
Zhengwei Gao
Your tiller pod is not ready.
1st, make sure the tiller pod is in running state
2nd, make sure helm service is in running state:
sudo helm serve
3rd, make sure 'helm search onap' has 35 charts listed. If not, you can manually issue the 'make all' command:
dabs@smobronze:~/oran/dep/smo/bin/smo-deploy/smo-oom/kubernetes$ sudo make all
Anonymous
Zhengwei,
I executed the instruction sudo make all, but I got the following error:
Downloading common from repo http://127.0.0.1:8879/charts
Save error occurred: could not download http://127.0.0.1:8879/common-6.0.0.tgz: Get http://127.0.0.1:8879/common-6.0.0.tgz: dial tcp 127.0.0.1:8879: connect: connection refused
Deleting newly downloaded charts, restoring pre-update state
Error: could not download http://127.0.0.1:8879/common-6.0.0.tgz: Get http://127.0.0.1:8879/common-6.0.0.tgz: dial tcp 127.0.0.1:8879: connect: connection refused
Makefile:61: recipe for target 'dep-robot' failed
make[1]: *** [dep-robot] Error 1
make[1]: Leaving directory '/root/dep/smo/bin/smo-deploy/smo-oom/kubernetes'
Makefile:46: recipe for target 'robot' failed
make: *** [robot] Error 2
What should I do? Thank you.
Tim
Anonymous
Have you performed the 2nd step?
Anonymous
Yes. But what response should I expect?
When I executed the 2nd step, the terminal got stuck and showed this message:
Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879
Should I keep waiting?
Thanks for help.
Anonymous
Yes. You can just leave 'sudo helm serve' running in one terminal (it is the helm server), and go on with 'sudo make all' in another terminal.
Anonymous
Wow, thank you for your help!
I think my problem has been resolved.
Anonymous
I did all 3 steps; still, I am getting the same error:
root@xxxx:~/dep/smo/bin# sudo ./install
===> Starting at Tue Sep 8 07:42:45 UTC 2020
===> Cleaning up any previous deployment
======> Deleting all Helm deployments
Error: command 'delete' requires a release name
======> Clearing out all ONAP deployment resources
Error from server (NotFound): namespaces "onap" not found
No resources found.
error: resource(s) were provided, but no name, label selector, or --all flag specified
error: resource(s) were provided, but no name, label selector, or --all flag specified
======> Clearing out all RICAUX deployment resources
Error from server (NotFound): namespaces "ricaux" not found
Error from server (NotFound): namespaces "ricinfra" not found
======> Clearing out all NONRTRIC deployment resources
Error from server (NotFound): namespaces "nonrtric" not found
======> Preparing for redeployment
node/ebtic07 not labeled
node/ebtic07 not labeled
node/ebtic07 not labeled
======> Preparing working directory
===> Deploying OAM (ONAP Lite)
======> Deploying ONAP-lite
Error: unknown command "deploy" for "helm"
Run 'helm --help' for usage.
======> Waiting for ONAP-lite to reach operatoinal state
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
Anonymous
Can anyone please help me here. The tiller pods is also up and running
root@xxx07:~/.helm/plugins/deploy# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5d4dd4b4db-972pr 1/1 Running 1 26m
kube-system coredns-5d4dd4b4db-hrhkn 1/1 Running 1 26m
kube-system etcd-ebtic07 1/1 Running 1 25m
kube-system kube-apiserver-ebtic07 1/1 Running 1 25m
kube-system kube-controller-manager-ebtic07 1/1 Running 1 26m
kube-system kube-flannel-ds-2z2vx 1/1 Running 1 26m
kube-system kube-proxy-5shnx 1/1 Running 1 26m
kube-system kube-scheduler-ebtic07 1/1 Running 1 25m
kube-system tiller-deploy-666f9c57f4-rl2g9 1/1 Running 1 25m
Cedric Morin
I had a similar error when I tried to re-install SMO after the first installation (~/dep/smo/bin/uninstall && ~/dep/smo/bin/install).
The only solution I found was to delete the whole dep/ folder and restart the installation process from scratch. It worked, but since the ONAP helm charts had to be generated again, it took hours.
Robbie Williamson
How did it work? The ~/dep/smo/bin/install script has an invalid helm command on line 185, from what I can tell:
'helm deploy' isn't a supported command.
Cedric Morin
Indeed, deploy is not a vanilla helm command, it is a plugin developed by ONAP (https://wiki.onap.org/display/DW/OOM+Helm+%28un%29Deploy+plugins).
Also, it seems that the re-installation problem may come from the fact that install operations are performed directly after uninstall ones, while some modules may not be fully uninstalled yet.
You may try to add a sleep 20 just before the helm deploy command in the install script.
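A minimal sketch of that edit (assuming GNU sed and the install script path used throughout this thread; review the match before editing in place, since this inserts before every line that mentions helm deploy):

```shell
# Insert "sleep 20" before each "helm deploy" line in the install
# script, so releases removed by uninstall have time to terminate
# before the redeployment starts.
sed -i '/helm deploy/i sleep 20' ~/dep/smo/bin/install
```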
Lao Alex
kube-system coredns-5d4dd4b4db-pvq4m 1/1 Running 1 6d21h
kube-system coredns-5d4dd4b4db-wwdsb 1/1 Running 1 6d21h
kube-system etcd-smo 1/1 Running 1 6d21h
kube-system kube-apiserver-smo 1/1 Running 2 6d21h
kube-system kube-controller-manager-smo 1/1 Running 14 6d21h
kube-system kube-flannel-ds-vrgwh 1/1 Running 1 6d21h
kube-system kube-proxy-cjhdn 1/1 Running 1 6d21h
kube-system kube-scheduler-smo 1/1 Running 15 6d21h
kube-system tiller-deploy-6658594489-h6zwt 1/1 Running 1 6d21h
2. helm server is running:
tcp 0 0 127.0.0.1:8879 0.0.0.0:* LISTEN 32431/helm
3."helm search onap" has 35 charts listed:
root@smo:/home/hongqun/oran/dep/smo/bin/smo-deploy/smo-oom# helm search onap | wc -l
35
4. When I run the install script, I get the following errors:
======> Deploying ONAP-lite
fetching local/onap
Error: chart "onap" matching version "" not found in local index. (try 'helm repo update'). no chart name found
mv: cannot stat '/root/.helm/plugins/deploy/cache/onap/charts/*': No such file or directory
mv: cannot stat '/root/.helm/plugins/deploy/cache/onap-subcharts/common': No such file or directory
rm: cannot remove '/root/.helm/plugins/deploy/cache/onap/requirements.lock': No such file or directory
mv: cannot stat '/root/.helm/plugins/deploy/cache/onap/requirements.yaml': No such file or directory
Error: no Chart.yaml exists in directory "/root/.helm/plugins/deploy/cache/onap"
release "dev" deployed
Could you give me some advice? Thank you.
Naga chetan K M
Hi Alex,
Can you try the "helm deploy" command; you might get "deploy plugin not found". If so, you need to run "helm plugin install ~/.helm/plugins/deploy". If deploy is not found, then you can download the plugins to ~/.helm/plugins/ from (
https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/helm/plugins;hb=refs/heads/master
) and then try the install command again.
Thanks,
Naga chetan
Lao Alex
Hi Naga chetan K M:
Thank you. But the deploy and undeploy plugins are found; it is the charts that are not found in the cache folder.
root@smo:~/.helm/plugins# tree .
.
├── deploy
│ ├── cache
│ │ ├── onap
│ │ │ ├── computed-overrides.yaml
│ │ │ └── logs
│ │ │ ├── dev.log
│ │ │ └── dev.log.log
│ │ └── onap-subcharts
│ ├── deploy.sh
│ └── plugin.yaml
└── undeploy
├── plugin.yaml
└── undeploy.sh
6 directories, 7 files
Thanks.
Naga chetan K M
Hi Lao,
If you are not seeing charts then you can try the below procedure, this is already listed in comments,
make sure 'helm search onap' has 35 charts listed. If not, you can manually issue the 'make all' command:
dabs@smobronze:~/oran/dep/smo/bin/smo-deploy/smo-oom/kubernetes$ sudo make all
Thanks,
Naga chetan
Anonymous
Hello,
I have the same problem as Lao Alex. Even when I do sudo make all, it still isn't working. How can I figure it out?
Thank you.
Javi G
Hi, I think SMO didn't deploy correctly the first time, so the corresponding folders were not copied to the .helm folder. Try deleting the smo-deploy folder and installing again; that way the installation will start from scratch. It can take several hours.
Makarand Kulkarni
What I found in my case is that the version of helm caused the issue. In step 2, I needed to set the version to 2.16.6 in infra.rc and things worked fine. This is evident from the name of the script "
Chirag Gandhi
Step 3 was going wrong. Helm was not getting initialized due to the proxy server. I installed certificates and got it working.
Thanks for the help
Anonymous
I also ran into this. How did you solve it?
Anonymous
Hello experts ,
I followed it step by step but I am stuck at step 3.
When I run the script, k8s-1node-cloud-init-k ... .sh, I receive the following message:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I don't know if I did something wrong or if I missed something I need to pay attention to.
How can I fix it? Thank you.
Tim.
Anonymous
I met exactly the same problem. Appreciate you for the help!
Anonymous
Can you please share the full debug output when running the script?
Anonymous
The whole situation as you can see below.
++ eval 'kubectl get pods -n kube-system | grep "Running" | wc -l'
+++ wc -l
+++ grep Running
+++ kubectl get pods -n kube-system
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ NUMPODS=0
+ echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
+ '[' 0 -lt 8 ']'
+ sleep 5
Tim.
Anonymous
Hi,
Do you have the following info in the outputs? The 'localhost:8080 refused' problem generally relates to the KUBECONFIG configuration.
Anonymous
But there is no admin.conf under /etc/kubernetes/
Anonymous
I suggest re-running the script and making sure 'kubeadm init' is successful:
Anonymous
'kubeadm init' has the following error:
I0818 01:09:09.191921 32088 version.go:248] remote version is much newer: v1.18.8; falling back to: stable-1.15
[init] Using Kubernetes version: v1.15.12
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Anonymous
disable swap and re-run the script.
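Concretely, "disable swap" here means something like the following (a sketch; the fstab edit assumes a standard one-line swap entry):

```shell
# Turn swap off for the running system...
sudo swapoff -a
# ...and comment out swap entries in /etc/fstab so it stays off after
# a reboot (kubeadm's preflight check fails whenever swap is enabled).
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```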
Jensen Lin
Thank you for the kind help! But after disabling swap and re-running the script, 'kubeadm init' has a new error:
I0819 12:54:05.234698 10134 version.go:251] remote version is much newer: v1.18.8; falling back to: stable-1.16
[init] Using Kubernetes version: v1.16.14
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.6. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10251]: Port 10251 is in use
[ERROR Port-10252]: Port 10252 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Anonymous
It seems that you already have an incomplete k8s cluster. Try this:
Turn off swap and re-run the script, 'sudo ./k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh', which will perform a 'kubeadm reset' to clear any existing k8s cluster.
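If the script's own cleanup is not enough, the manual equivalent (a sketch of what 'kubeadm reset' amounts to, not commands taken from the script itself) is roughly:

```shell
# Tear down any half-initialized control plane, then remove the
# leftover state that kubeadm's preflight checks complain about
# (ports in use, existing manifests, non-empty /var/lib/etcd).
sudo kubeadm reset -f
sudo rm -rf /etc/kubernetes/manifests /var/lib/etcd
```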
Anonymous
Hi Jensen,
I am facing the exact same issue. I know this is after a very long time, but did you figure out the solution? Your help would be much appreciated.
Javi G
Check for errors in the script before the localhost error. In my case it was a docker version error; I had to change the version in the k8s-1node-cloud-init-k_1_15-h_2_17-d_19_03.sh file.
I have these versions in the file:
Also, for this docker version you have to change the ubuntu version to "18.04.02" in the file; if not, it won't find the package in the repository:
Litombe Ngomba
Hi,
If anyone still has this problem after using Javi G's combinations, try:
$ sudo apt-get update
$ sudo apt-get upgrade
before step 3.
Anonymous
This still doesn't work. Do you have updated scripts for the latest versions of docker and helm?
Anonymous
Hi Jensen,
have you solved this issue yet?
I'm having this exact same issue.
Anonymous
Hello there,
In step 2, when I execute the script "gen-cloud-init.sh" after editing the 'infra.rc' file, it throws "cannot execute binary file" output as follows:
./gen-cloud-init.sh: line 39: source: /home/airvana/dep/tools/k8s/bin/../etc/.infra.rc.swo: cannot execute binary file
./gen-cloud-init.sh: line 39: source: /home/airvana/dep/tools/k8s/bin/../etc/.infra.rc.swl: cannot execute binary file
./gen-cloud-init.sh: line 39: source: /home/airvana/dep/tools/k8s/bin/../etc/.infra.rc.swp: cannot execute binary file
./gen-cloud-init.sh: line 39: source: /home/airvana/dep/tools/k8s/bin/../etc/.infra.rc.swn: cannot execute binary file
./gen-cloud-init.sh: line 39: source: /home/airvana/dep/tools/k8s/bin/../etc/.infra.rc.swm: cannot execute binary file
If I assume that is normal and move on to step 3 to execute "k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh", it throws the following errors:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
And then following,
+++ kubectl get pods -n kube-system
+++ grep Running
+++ wc -l
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ NUMPODS=0
+ echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
+ '[' 0 -lt 8 ']'
I am a novice in these technologies, your help would be much appreciated.
Thank you.
Anonymous
Hi! I have a similar problem,
error execution phase preflight: docker is required for container runtime: exec: "docker": executable file not found in $PATH
+ cd /root
+ rm -rf .kube
+ mkdir -p .kube
+ cp -i /etc/kubernetes/admin.conf /root/.kube/config
cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory
+ chown root:root /root/.kube/config
chown: cannot access '/root/.kube/config': No such file or directory
+ export KUBECONFIG=/root/.kube/config
+ KUBECONFIG=/root/.kube/config
+ echo KUBECONFIG=/root/.kube/config
+ kubectl get pods --all-namespaces
The connection to the server localhost:8080 was refused - did you specify the right host or port?
If someone can help me, it would be incredible!
Thanks!
Chirag Gandhi
It seems like docker is not getting installed. Recently I have been facing an issue installing docker.io 18.09.7 on ubuntu 18.04.4. Try changing the docker version to empty ("") instead of 18.09 in the infra.rc file. It should run fine, but I am not sure whether you will face other issues later on, as this is not tested.
Anonymous
This workaround solves the problem, thank you very much
Anonymous
Please make sure your VM has more than 2 CPUs, otherwise docker.io will not be installed at all. I have tried ubuntu 18.04.1 to 18.04.4; the ubuntu version is not the problem. Going with the latest docker can help you get the 35 charts, but then you will find that ONAP-lite cannot reach operational state. I feel that we need to wait for an update...
Javi G
Use helm version 2.17.0, it doesn't give this error anymore.
You can change it in the k8s... file in this line:
echo "2.17.0" > /opt/config/helm_version.txt
or in the "infra.rc" file in step 2 and repeat the steps to generate the new k8s file.
Rafael Matos
Javi G,
By using helm version 2.17.0, it solves the following problem?
"The connection to the server localhost:8080 was refused - did you specify the right host or port?"
Thank you.
Chirag Gandhi
Hi,
After deploying SMO, how can I open the O1 dashboard? I tried opening http://localhost:8181/odlux/index.html but received "connection refused". Can anyone please help?
Daniel Camps Mur
Dear all,
I am getting the following error:
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Error: error initializing: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden
It seems some URLs may not be valid anymore?
Any help would be appreciated.
BR
Daniel
Daniel Camps Mur
This error can be fixed by using helm 2.17.0 as indicated by Javi G below.
BR
Daniel
Anonymous
Hi
Can any of us normal users (outside this PL team) really get this up and running?
Full of errors, and even more errors after checking and applying this community's "answers".
Very disappointed. Sorry, but this is true!
Javi G
Yeah a lot of errors but finally got it working... Which errors do you have?
Anonymous
It's: "waiting for 0/8 pods running in namespace ...
... sleep 5"
I think it has been asleep for weeks, not 5 seconds.
Anonymous
Good update: after more than 10 attempts, I finally came up with this combination, and it worked for me:
Use "19.03.6" for docker:
echo "19.03.6" > /opt/config/docker_version.txt
Use "2.17.0" for helm.
(Note that I entered 18.04.3 even though my Ubuntu Desktop is 18.04.2):
DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3"
Thank you Javi G.
I hope newcomers can make this work, or can try the above combination to make it work.
mhz
Hi
I am facing the same problem:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ NUMPODS=0
+ echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
+ '[' 0 -lt 8 ']'
+ sleep 5
I have implemented all suggested solutions in the previous comments as well as the approach used in https://blog.csdn.net/jeffyko/article/details/107030704?ops_request_misc=%257B%2522request%255Fid%2522%253A%2522162932197216780261968421%2522%252C%2522scm%2522%253A%252220140713.130102334..%2522%257D&request_id=162932197216780261968421&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~all~sobaiduend~default-1-107030704.first_rank_v2_pc_rank_v29&utm_term=O-RAN+notes%282%29&spm=1018.2226.3001.4187.
But I still have the same issue. Could you please help?
Thanks
Pavan Gupta
Hi,
I am following deployment steps for SMO. Upon running the k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh script, I come across the following error. Kindly share the solution if anyone has found it.
E: Version '18.09.7-0ubuntu1~18.04.4' for 'docker.io' was not found
+ cat
./k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh: line 156: /etc/docker/daemon.json: No such file or directory
+ mkdir -p /etc/systemd/system/docker.service.d
+ systemctl enable docker.service
Failed to enable unit: Unit file docker.service does not exist.
Javi G
Change this in the k8s... file:
echo "19.03.6" > /opt/config/docker_version.txt
Also for this docker version you have to change the ubuntu version to "18.04.02" in the file, if not it won't find the file in the repository:
elif [[ ${UBUNTU_RELEASE} == 18.* ]]; then
    echo "Installing on Ubuntu $UBUNTU_RELEASE (Bionic Beaver)"
    if [ ! -z "${DOCKERV}" ]; then
        DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3"  # THIS IS THE LINE I MODIFIED
Not every docker version can be found in the repository; I found that version 19.03.6 for ubuntu 18.04.3 is available and works well. 18.04.2 worked before but throws another error now.
Pavan Gupta
Many thanks that worked.
Pavan Gupta
Hello,
I have come across the following error while installing SMO. if anyone has a solution for it, kindly share the same.
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Error: error initializing: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden
+ helm init -c
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Error: error initializing: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden
Regards,
Pavan
Travis Machacek
I ran into this issue as well and this command solved it:
Pavan Gupta
Hi Travis, I have tried your command 'helm init --stable-repo-url=https://charts.helm.sh/stable --client-only' but face the same issue. Is it required to rerun everything from the start?
Travis Machacek
I believe so, yes. If that command still does not work, try another of the commands mentioned in the link I posted. I'm assuming this happened after running the k8s script?
Pavan Gupta
Yes, this happened after running the k8s script. I am trying the second command mentioned in the link, will update.
Pavan Gupta
Hi Travis, did you run this command separately, or was the k8s script modified? If so, kindly share the change made in the k8s file.
Travis Machacek
It was run separately.
Pavan Gupta
Using helm version 2.17.0 worked. Thank you for your help.
Javi G
Try using helm version 2.17.0.
You can change it in the k8s... file in this line:
echo "2.17.0" > /opt/config/helm_version.txt
or in the "infra.rc" file in step 2 and repeat the steps to generate the new k8s file.
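Either way, the pin can be applied with sed. A small sketch (repo path assumed from this thread), again shown against a scratch copy:

```shell
# Sketch: pin Helm 2.17.0 in infra.rc (path is an assumption: dep/tools/k8s/etc/infra.rc).
RC=dep/tools/k8s/etc/infra.rc
# Use a stand-in copy if the real file is not present:
cp "$RC" /tmp/infra.rc 2>/dev/null || printf 'INFRA_HELM_VERSION="2.16.0"\n' > /tmp/infra.rc
sed -i 's/^INFRA_HELM_VERSION=.*/INFRA_HELM_VERSION="2.17.0"/' /tmp/infra.rc
grep INFRA_HELM_VERSION /tmp/infra.rc
```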
Pavan Gupta
Thank you Javi, that worked.
Daniel Camps Mur
Dear all,
After following the indicated steps (using helm 2.17.0 to avoid an error), the deployment of dev-sdnc fails and I get stuck here:
======> Deploying ONAP-lite
fetching local/onap
release "dev" deployed
release "dev-consul" deployed
release "dev-dmaap" deployed
release "dev-mariadb-galera" deployed
release "dev-msb" deployed
release "dev-sdnc" deployed
dev-sdnc 1 Sat Jan 9 12:18:06 2021 FAILED sdnc-6.0.0 onap
======> Waiting for ONAP-lite to reach operatoinal state
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
...
Any help would be appreciated.
BR
Daniel
Nagachetan Km
Hi Daniel,
Please can you check whether the docker image is getting pulled or not (kubectl describe po <pod name> -n onap). I faced the same issue and saw that the docker image was not getting pulled, because Docker Hub only allows 100 pulls every 6 hours for anonymous users and k8s sometimes exceeds that many tries.
Thanks,
Naga chetan
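A quick way to spot such pods is to filter the listing down to pull failures. The sketch below demonstrates the filter on sample output; pipe your real `kubectl get pods -n onap` output through the same awk.

```shell
# Filter a pod listing down to image-pull failures (demonstrated on sample output;
# statuses like Init:ErrImagePull also match because awk ~ is a partial match).
PULL_FAILED=$(cat <<'EOF' | awk '$3 ~ /ErrImagePull|ImagePullBackOff/ {print $1}'
dev-consul-server-0    1/1   Running            0   124m
dev-mariadb-galera-0   0/1   ImagePullBackOff   0   124m
dev-sdnc-0             0/2   ErrImagePull       0   124m
EOF
)
echo "$PULL_FAILED"
```

Each printed pod name is a candidate for `kubectl describe po <pod name> -n onap`.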
Daniel Camps Mur
Hi Naga,
There seems to be an error when pulling the image, in fact for many of the onap services.
I tried restarting the installation process from scratch and it failed again at the same place. How did you manage to solve it?
This is what I get when I list the pods.
onap dev-consul-68d576d55c-t4cp9 1/1 Running 0 124m
onap dev-consul-server-0 1/1 Running 0 124m
onap dev-consul-server-1 1/1 Running 0 123m
onap dev-consul-server-2 1/1 Running 0 122m
onap dev-kube2msb-9fc58c48-qlw5q 0/1 Init:ErrImagePull 0 124m
onap dev-mariadb-galera-0 0/1 Init:ImagePullBackOff 0 124m
onap dev-message-router-0 0/1 Init:ImagePullBackOff 0 124m
onap dev-message-router-kafka-0 0/1 Init:ErrImagePull 0 124m
onap dev-message-router-kafka-1 0/1 Init:ImagePullBackOff 0 124m
onap dev-message-router-kafka-2 0/1 Init:ImagePullBackOff 0 124m
onap dev-message-router-zookeeper-0 0/1 Init:ImagePullBackOff 0 124m
onap dev-message-router-zookeeper-1 0/1 ImagePullBackOff 0 124m
onap dev-message-router-zookeeper-2 0/1 ErrImagePull 0 124m
onap dev-msb-consul-65b9697c8b-ldpqp 1/1 Running 0 124m
onap dev-msb-discovery-54b76c4898-2z7vp 0/2 ErrImagePull 0 124m
onap dev-msb-eag-76d4b9b9d7-l5m75 0/2 Init:ErrImagePull 0 124m
onap dev-msb-iag-65c59cb86b-lsjkl 0/2 Init:ErrImagePull 0 124m
onap dev-sdnc-0 0/2 Init:ErrImagePull 0 124m
onap dev-sdnc-db-0 0/1 Init:ImagePullBackOff 0 124m
onap dev-sdnc-dmaap-listener-5c77848759-r2fnp 0/1 Init:ImagePullBackOff 0 124m
onap dev-sdnc-sdnrdb-init-job-dmq6r 0/1 Init:Error 0 124m
onap dev-sdnc-sdnrdb-init-job-wlvqp 0/1 Init:ImagePullBackOff 0 102m
onap dev-sdnrdb-coordinating-only-9b9956fc-fzwbh 0/2 Init:ImagePullBackOff 0 124m
onap dev-sdnrdb-master-0 0/1 Init:ErrImagePull 0 124m
Output of kubectl describe po dev-sdnc-0 -n onap:
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 23m (x59 over 104m) kubelet, vagrant Back-off pulling image "oomk8s/readiness-check:2.2 .1"
Warning Failed 15m (x12 over 104m) kubelet, vagrant Error: ErrImagePull
Warning Failed 13m (x78 over 104m) kubelet, vagrant Error: ImagePullBackOff
Warning Failed 4m56s (x13 over 104m) kubelet, vagrant Failed to pull image "oomk8s/readiness-check:2.2.1 ": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io: Temporary failure in name resolution
BR
Daniel
Chris Ko
I had the exact same logs. Then I found out those 'release "xxx" deployed' messages were not actually deployed at all. You might want to check the deploy logs, which are located in ~/.helm/plugin/deploy/cache/*/logs . In my case, it complained about 'no matches for kind "StatefulSet" in version'. It was solved after I switched the K8S version to 1.15.9.
This is the infra.rc that works for me:
INFRA_DOCKER_VERSION=""
INFRA_HELM_VERSION="2.17.0"
INFRA_K8S_VERSION="1.15.9"
INFRA_CNI_VERSION="0.7.5"
I left the docker version blank so I didn't have to edit the script as mentioned above; 19.03.6 for ubuntu 18.04.3 is installed at this time.
Daniel Camps Mur
Thanks Chris. I tried a fresh installation with your infra.rc configuration but still got the same error.
BR
Daniel
Naga chetan K M
Hi Daniel,
The last error says there is a name resolution failure; you can fix this by adding "nameserver 8.8.8.8" to the /etc/resolv.conf file.
Thanks,
Naga chetan
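A hedged sketch of that fix: the demo below appends the nameserver idempotently to a scratch file; point RESOLV at /etc/resolv.conf (as root) for the real fix, and note that systemd-resolved may overwrite that file.

```shell
# Append a public nameserver if it is not already present (idempotent).
# RESOLV points at a scratch file here; set RESOLV=/etc/resolv.conf (as root) for real.
RESOLV=${RESOLV:-/tmp/resolv.conf.demo}
touch "$RESOLV"
grep -q '^nameserver 8.8.8.8' "$RESOLV" || echo 'nameserver 8.8.8.8' >> "$RESOLV"
cat "$RESOLV"
```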
Nagachetan Km
Hi All,
Is there a slack group for discussions ?
Thanks,
Naga chetan
Pavan Gupta
Hello,
SMO installation was successful. I ran it on an AWS instance. Any idea how one can check that SMO is functioning? I mean, is there any curl command or web page that can be tried?
Pavan
Javi G
Hi, I'm not sure about that. What I did was deploy a RIC in another VM and configure it to work with the nonrtric; that way you know that both the nonrtric and the RIC are working...
Gaurav jain
Hi Pavan,
What do you want to check in SMO? Normally we use DMAAP and VES in SMO; those can be curled and checked.
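For example, a DMaaP smoke test might look like the sketch below. The node IP and NodePort here are assumptions, not the real values; look them up first with `kubectl get svc -n onap | grep message-router`.

```shell
# Hypothetical DMaaP topic-list check. NODE_IP and DMAAP_PORT are assumptions;
# find the real NodePort with: kubectl get svc -n onap | grep message-router
NODE_IP=${NODE_IP:-127.0.0.1}
DMAAP_PORT=${DMAAP_PORT:-30227}
URL="http://${NODE_IP}:${DMAAP_PORT}/topics"
if command -v curl >/dev/null 2>&1; then
  curl -s --max-time 3 "$URL" || echo "DMaaP not reachable at $URL"
else
  echo "curl not installed; would GET $URL"
fi
```

GET /topics should return a JSON list of DMaaP topics when message-router is up.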
Anonymous
Good updates: after more than 10 attempts, I finally came up with this combination, and it worked for me:
Use "19.03.6" for docker: echo "19.03.6" > /opt/config/docker_version.txt
Use "2.17.0" for helm.
### (Note that I entered 18.04.3 even though my Ubuntu Desktop is 18.04.2)
DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3"
Thank you Javi G.
I hope the newcomers can make this work, or can use the above combination to make it work.
Anonymous
After Step 1 and Before Step #2 Change the following files
dep/tools/k8s/etc/infra.rc
INFRA_DOCKER_VERSION="19.03.6"
INFRA_HELM_VERSION="2.17.0"
INFRA_K8S_VERSION="1.15.9"
INFRA_CNI_VERSION="0.7.5"
dep/tools/k8s/heat/scripts/k8s_vm_install.sh
...
elif [[ ${UBUNTU_RELEASE} == 18.* ]]; then
echo "Installing on Ubuntu $UBUNTU_RELEASE (Bionic Beaver)"
if [ ! -z "${DOCKERV}" ]; then
DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3"
fi
...
dep/smo/bin/install
...
if [ "$1" == "initlocalrepo" ]; then
echo && echo "===> Initialize local Helm repo"
rm -rf ~/.helm #&& helm init -c # without this step helm serve may not work.
helm init --stable-repo-url=https://charts.helm.sh/stable --client-only
helm serve &
helm repo add local http://127.0.0.1:8879
fi
...
Then run Step #2, Step #3 and Step #4 normally.
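The dep/smo/bin/install edit above can also be applied with sed rather than by hand. A sketch, demonstrated on a stand-in copy of the relevant line so nothing real is modified:

```shell
# Scripted version of the install-script edit above, shown on a stand-in file.
cat > /tmp/install.demo <<'EOF'
rm -rf ~/.helm && helm init -c # without this step helm serve may not work.
EOF
# Replace the dead stable-repo init with the archived charts.helm.sh repo:
sed -i 's|helm init -c|helm init --stable-repo-url=https://charts.helm.sh/stable --client-only|' /tmp/install.demo
cat /tmp/install.demo
```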
Anonymous
OS was 18.04.5 LTS (Bionic Beaver)
Anonymous
Can anyone merge these changes upstream?
Anonymous
Hi,
I need help running dcaemod using SMO Lite. Can I know which other ONAP components should be enabled for running dcaemod? Is there any kind of dependency on other modules of ONAP?
Anonymous
I've started to deploy SMO with Ubuntu 20.04 desktop, but an error occurs when trying to install SMO.
Is this latest Ubuntu version not supported?
Pavan Gupta
I have tried with Ubuntu 18.04 and it worked. Try switching to this version, or share the error it reports.
Litombe Ngomba
Hi,
I don't think the bronze version supports ubuntu 20.04.
Switching to 18.04 is a good idea.
Anonymous
Hi everyone,
I executed this instruction: kubectl get pods --all-namespaces,
but one entry looks strange (I have tagged it in red).
Could anyone help me solve this problem?
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5d4dd4b4db-82gnh 1/1 Running 2 128m
kube-system coredns-5d4dd4b4db-8mrqh 1/1 Running 2 128m
kube-system etcd-smo-virtualbox 1/1 Running 2 127m
kube-system kube-apiserver-smo-virtualbox 1/1 Running 2 127m
kube-system kube-controller-manager-smo-virtualbox 1/1 Running 2 127m
kube-system kube-flannel-ds-9xr8x 1/1 Running 2 128m
kube-system kube-proxy-lwmnh 1/1 Running 2 128m
kube-system kube-scheduler-smo-virtualbox 1/1 Running 2 127m
kube-system tiller-deploy-7c54c6988b-jgrqg 1/1 Running 2 127m
nonrtric a1-sim-osc-0 1/1 Running 0 16m
nonrtric a1-sim-osc-1 1/1 Running 0 4m47s
nonrtric a1-sim-std-0 1/1 Running 0 16m
nonrtric a1-sim-std-1 1/1 Running 0 5m6s
nonrtric a1-sim-std2-0 1/1 Running 0 16m
nonrtric a1-sim-std2-1 1/1 Running 0 15m
nonrtric a1controller-64c4f59fb5-tdllp 1/1 Running 0 16m
nonrtric controlpanel-78f957844b-nb5qc 1/1 Running 0 16m
nonrtric db-549ff9b4d5-58rmc 1/1 Running 0 16m
nonrtric enrichmentservice-7559b45fd-2wb7p 1/1 Running 0 16m
nonrtric nonrtricgateway-6478f59b66-wjjkz 1/1 Running 0 16m
nonrtric policymanagementservice-79f8f76d8f-ch46v 1/1 Running 0 16m
nonrtric rappcatalogueservice-5945c4d84b-7w7cc 1/1 Running 0 16m
onap dev-consul-68d576d55c-l2n2q 1/1 Running 0 53m
onap dev-consul-server-0 1/1 Running 0 53m
onap dev-consul-server-1 1/1 Running 0 52m
onap dev-consul-server-2 1/1 Running 0 52m
onap dev-kube2msb-9fc58c48-s5g98 1/1 Running 0 52m
onap dev-mariadb-galera-0 1/1 Running 0 53m
onap dev-mariadb-galera-1 1/1 Running 0 46m
onap dev-mariadb-galera-2 1/1 Running 0 39m
onap dev-message-router-0 1/1 Running 0 53m
onap dev-message-router-kafka-0 1/1 Running 0 53m
onap dev-message-router-kafka-1 1/1 Running 0 53m
onap dev-message-router-kafka-2 1/1 Running 0 53m
onap dev-message-router-zookeeper-0 1/1 Running 0 53m
onap dev-message-router-zookeeper-1 1/1 Running 0 53m
onap dev-message-router-zookeeper-2 1/1 Running 0 53m
onap dev-msb-consul-65b9697c8b-6hhck 1/1 Running 0 52m
onap dev-msb-discovery-54b76c4898-xz7rq 2/2 Running 0 52m
onap dev-msb-eag-76d4b9b9d7-jfms9 2/2 Running 0 52m
onap dev-msb-iag-65c59cb86b-864jr 2/2 Running 0 52m
onap dev-sdnc-0 2/2 Running 0 52m
onap dev-sdnc-db-0 1/1 Running 0 52m
onap dev-sdnc-dmaap-listener-5c77848759-svmzg 1/1 Running 0 52m
onap dev-sdnc-sdnrdb-init-job-d6rjm 0/1 Completed 0 41m
onap dev-sdnc-sdnrdb-init-job-qkdl8 0/1 Init:Error 0 52m
onap dev-sdnrdb-coordinating-only-9b9956fc-zjqkg 2/2 Running 0 52m
onap dev-sdnrdb-master-0 1/1 Running 0 52m
onap dev-sdnrdb-master-1 1/1 Running 0 40m
onap dev-sdnrdb-master-2 1/1 Running 0 38m
ricaux bronze-infra-kong-68657d8dfd-d5bb4 2/2 Running 1 4m57s
ricaux deployment-ricaux-ves-65db844758-td2zc 1/1 Running 0 4m47s
BR,
Tim
Anonymous
There seems to be a mismatch in the VM minimum requirements on this page and the PDF linked in that section. The RAM required is mentioned as 32 GB and the CPU cores as 8; however, the "Getting Started PDF" mentions the required RAM as 16 GB and CPU cores as 4. So how much minimum RAM and CPU cores does the SMO VM actually require? Also, if these are the minimum requirements, what are the standard or recommended VM requirements? I suspect that the Getting Started PDF for Near Realtime RIC installation has been mistakenly linked here. Could someone confirm this?
Thanks,
Vikas Krishnan
Rafael Matos
Hello guys,
In step 3, I have this problem:
++ eval 'kubectl get pods -n kube-system | grep "Running" | wc -l'
+++ wc -l
+++ grep Running
+++ kubectl get pods -n kube-system
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ NUMPODS=0
+ echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
+ '[' 0 -lt 8 ']'
+ sleep 5
I tried to solve this problem by reading the comments related to this subject, but the issue still remains...
My Ubuntu version is 18.04.1 LTS and I followed the previous steps.
Can someone help, please?
Rafael
Mooga Wilson
I just fiddled around with infra.rc and it eventually worked for me. This is what I had:
#INFRA_DOCKER_VERSION=""
#INFRA_K8S_VERSION="1.15.9"
#INFRA_CNI_VERSION="0.7.5"
#INFRA_HELM_VERSION="2.17.0"
anjan goswami
Hi All
I am trying to install the cherry release. However, compared to bronze, I don't see the "dep/smo/bin" folder anymore, so I can't run the install command. Can anyone suggest an alternative? All steps up to step 3 are successful; only step 4 is not valid for the cherry release.
Rafael Matos
Hello Anjan,
How did you install the cherry release?
In step 1, did you just make this change (cherry) in the following command?
SMO installation was successful with the bronze release... so I can't answer your question, I am sorry.
Rafael
anjan goswami
Yes, that's exactly what I did; I also changed the infra.rc file. I am also able to run RIC-related services on a separate machine, but it's the SMO which is not getting installed. What are the steps to upgrade from the bronze to the cherry release?
anjan goswami
Here is how I resolved the issue.
Unlike Bronze, Cherry doesn't have a single script (dep/smo/bin/install) which installs onap, nonrtric & ric-aux. Instead, they need to be installed individually.
Vikas Krishnan Radhakrishnan
How did you resolve the issue? Could you elaborate what you did to install the onap, nonrtric & rix-aux individually?
anjan goswami
The Cherry installation process differs from Bronze in many ways, as below; however, it still follows many of the same steps.
1. Use below clone command
git clone http://gerrit.o-ran-sc.org/r/it/dep -b cherry
2. Update “dep/tools/k8s/etc/infra.rc” incorporating the below versions:
INFRA_DOCKER_VERSION=""
INFRA_K8S_VERSION="1.15.9"
INFRA_CNI_VERSION="0.7.5"
INFRA_HELM_VERSION="2.17.0"
3. Update the Ubuntu version in “dep/tools/k8s/bin/k8s-1node-cloud-init-k_1_15-h_2_17-d_cur.sh” as:
elif [[ ${UBUNTU_RELEASE} == 18.* ]]; then
echo "Installing on Ubuntu $UBUNTU_RELEASE (Bionic Beaver)"
if [ ! -z "${DOCKERV}" ]; then
DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3"
fi
Note: Unlike Bronze, in Cherry there is no comprehensive installation file “dep/smo/bin/install” which installs all SMO components, i.e. ONAP, NONRTRIC, RIC-AUX & RIC-INFRA; instead, they need to be installed manually in the order ONAP -> NONRTRIC -> RIC-AUX.
4. Install ONAP first; this will take a very long time, up to 6-7 hours, so don't give up!
Uncomment all commented sections in the file "dep/tools/onap/install":
cd dep/tools/onap/
./install initlocalrepo
It's quite possible that the execution will get stuck on the below line:
4/7 SDNC-SDNR pods and 7/7 Message Router pods running
In that situation, stop the execution and manually perform the remaining steps of the script, which are basically steps 5 and 6 below. Otherwise, please ignore steps 5 and 6, which are automatically part of the install script.
5. Install non-RT-RIC
a. Update “baseUrl” in “dep/nonrtric/RECIPE_EXAMPLE/example_recipe.yaml”
b. cd dep/bin
c. ./deploy-nonrtric -f ~/dep/nonrtric/RECIPE_EXAMPLE/example_recipe.yaml
d. Wait until all pods shown by “kubectl get pods -n nonrtric” are running
6. Install ric-aux & ric-infra
kubectl create ns ricinfra
cd ~/dep/ric-aux/helm/infrastructure
helm dep update
cd ../
helm install -f ~/dep/RECIPE_EXAMPLE/AUX/example_recipe.yaml --name cherry-infra --namespace ricaux ./infrastructure
cd ves/
helm dep update
cd ..
helm install -f ~/dep/RECIPE_EXAMPLE/AUX/example_recipe.yaml --name cherry-ves --namespace ricaux ./ves
Wait until the ric-aux & ric-infra pods are up & running; verify using “kubectl get pods --all-namespaces”
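The "wait until pods are running" steps above boil down to counting Running lines, which is essentially what the install scripts' wait loops do. A small sketch of that check, demonstrated on sample output; pipe real `kubectl get pods -n nonrtric` output through the same function.

```shell
# Count Running pods the way the install scripts' wait loops do.
count_running() { grep -c ' Running ' ; }
SAMPLE='a1-sim-osc-0     1/1 Running          0 16m
db-549ff9b4d5    1/1 Running          0 16m
a1controller-0   0/1 ImagePullBackOff 0 16m'
NUM=$(printf '%s\n' "$SAMPLE" | count_running)
echo "$NUM pods Running"
```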
Vikas Krishnan Radhakrishnan
Thanks a lot for the guidance! The installation was a breeze following the steps you mentioned.
Anonymous
Hi, is it possible to use another orchestrator instead of ONAP (in step 4 of this instruction)?
Thank you
Vineeth Kumar
Oh Anjan, this was smooth for me. I did the simulation on a couple of VMs. It took 6 hours but worked well.
Information on some URLs that we can use to test, or some services that we can expose to see if all is well, would complete this set of instructions.
Thank You.
Vineeth
Deepak Dutt
Hi Team,
While using the above cherry installation, I am facing some issues when checking with the below command:
"kubectl get pods --all-namespaces"
kube-system coredns-5d4dd4b4db-fbj8d 1/1 Running 1 3d
kube-system coredns-5d4dd4b4db-mwrld 1/1 Running 1 3d
kube-system etcd-ubuntu 1/1 Running 3 3d
kube-system kube-apiserver-ubuntu 1/1 Running 2 3d
kube-system kube-controller-manager-ubuntu 1/1 Running 1 3d
kube-system kube-flannel-ds-rpqjt 1/1 Running 1 3d
kube-system kube-proxy-dnkbw 1/1 Running 1 3d
kube-system kube-scheduler-ubuntu 1/1 Running 1 3d
kube-system tiller-deploy-7c54c6988b-x4tkm 1/1 Running 1 3d
nonrtric a1-sim-osc-0 0/1 ImagePullBackOff 0 2d20h
nonrtric a1-sim-std-0 0/1 ErrImagePull 0 2d20h
nonrtric a1controller-64c4f59fb5-dbt4q 0/1 ImagePullBackOff 0 2d20h
nonrtric controlpanel-5ff4849bf-fp5bm 0/1 ImagePullBackOff 0 2d20h
nonrtric db-549ff9b4d5-rqbjp 1/1 Running 0 2d20h
nonrtric enrichmentservice-8578855744-hqxww 0/1 ImagePullBackOff 0 2d20h
nonrtric policymanagementservice-77c59f6cfb-lwrzw 0/1 ImagePullBackOff 0 2d20h
nonrtric rappcatalogueservice-5945c4d84b-2z875 0/1 ImagePullBackOff 0 2d20h
onap dev-consul-68d576d55c-wdnz6 1/1 Running 0 2d21h
onap dev-consul-server-0 1/1 Running 0 2d21h
onap dev-consul-server-1 1/1 Running 0 2d19h
onap dev-consul-server-2 1/1 Running 0 2d19h
onap dev-kube2msb-9fc58c48-vrckm 0/1 Init:0/1 405 2d21h
onap dev-mariadb-galera-0 0/1 Init:ImagePullBackOff 0 2d21h
onap dev-message-router-0 0/1 Init:0/1 406 2d21h
onap dev-message-router-kafka-0 0/1 Init:1/4 405 2d21h
onap dev-message-router-kafka-1 0/1 Init:1/4 405 2d21h
onap dev-message-router-kafka-2 0/1 Init:1/4 405 2d21h
onap dev-message-router-zookeeper-0 0/1 ImagePullBackOff 0 2d21h
onap dev-message-router-zookeeper-1 0/1 ImagePullBackOff 0 2d21h
onap dev-message-router-zookeeper-2 0/1 ImagePullBackOff 0 2d21h
onap dev-msb-consul-65b9697c8b-zdktr 1/1 Running 0 2d21h
onap dev-msb-discovery-54b76c4898-mrl2j 1/2 ImagePullBackOff 0 2d21h
onap dev-msb-eag-76d4b9b9d7-fqcff 0/2 Init:0/1 406 2d21h
onap dev-msb-iag-65c59cb86b-5ql7w 0/2 Init:0/1 405 2d21h
onap dev-sdnc-0 0/2 Init:1/3 405 2d21h
onap dev-sdnc-db-0 0/1 Init:ImagePullBackOff 0 2d21h
onap dev-sdnc-dmaap-listener-5c77848759-qfd8f 0/1 Init:1/2 405 2d21h
onap dev-sdnc-sdnrdb-init-job-5xjfv 0/1 Init:Error 0 2d21h
onap dev-sdnc-sdnrdb-init-job-drp5m 0/1 ImagePullBackOff 0 2d21h
onap dev-sdnrdb-coordinating-only-9b9956fc-dglbh 2/2 Running 0 2d21h
onap dev-sdnrdb-master-0 1/1 Running 0 2d21h
onap dev-sdnrdb-master-1 1/1 Running 0 2d21h
onap dev-sdnrdb-master-2 1/1 Running 0 2d21h
ricaux cherry-infra-kong-7d6555954f-b56v2 2/2 Running 19 2d20h
ricaux deployment-ricaux-ves-6c8db5db65-kfltw 0/1 ImagePullBackOff 0 2d20h
Deepak Dutt
Hi Team,
While using the above cherry installation, some pods are getting stuck in ImagePullBackOff even though our VM has internet access. (The "kubectl get pods --all-namespaces" output is the same as in my previous comment.)
Anonymous
Did you get a solution?
ricaux deployment-ricaux-ves-6c8db5db65-kfltw 0/1 ImagePullBackOff 0 2d20h
Anonymous
Hi Anjan,
In step 5a, "Update 'baseUrl' in 'dep/nonrtric/RECIPE_EXAMPLE/example_recipe.yaml'": what should I update in the example_recipe.yaml?
I am using the Cherry release and the example_recipe.yaml for nonrtric seems a little different.
user-60c62
Hi Anjan Goswami,
I am getting an error in step 4, i.e. ./install initlocalrepo. I ran this command after un-commenting the sections in dep/tools/onap/install.
The errors are repetitions of the following lines:
0/1 A1 controller pods, 0/1 ControlPanel pods,
0/1 DB pods, 0/1 PolicyManagementService pods,
and 0/4 A1 sim pods running
No resources found.
No resources found.
No resources found.
No resources found.
No resources found.
Does anyone know how to fix this issue? Please help me out here.
My Ubuntu version is 18.04.6, with 4 CPUs.
Anonymous
Hi
Could you kindly explain what changes to the baseUrl in example_recipe.yaml you mentioned?
Nhan Nguyen Ngoc
Seemingly, the mismatch between the k8s, docker, and helm versions is causing many issues. Could anyone post the latest working version combination? I'm using Ubuntu server 18.04.05 and am not able to complete the steps successfully.
Alexis Duque
Can the SMO be deployed on AWS EKS or on Google Kubernetes Engine? Has someone already tried this?
Deepak Dutt
Hi Experts,
While running the install script getting below error
error: resource(s) were provided, but no name, label selector, or --all flag specified
error: resource(s) were provided, but no name, label selector, or --all flag specified
======> Clearing out all RICAUX deployment resources
Error from server (NotFound): namespaces "ricaux" not found
Error from server (NotFound): namespaces "ricinfra" not found
======> Clearing out all NONRTRIC deployment resources
Error from server (NotFound): namespaces "nonrtric" not found
======> Preparing for redeployment
1) Also, the tiller pod is running:
kube-system tiller-deploy-7d7bc87bb-fvr6v 1/1 Running 1 23h
2) 'helm search onap | wc -l' lists 35 charts after manually running the 'make all' command.
Please suggest the next steps to solve the problem.
Br
Deepak
Yevhenii Mormul
Hello,
I am having issues with the nonrtric pods. This is the error output from the install script:
===> Deploy NONRTRIIC
...
Error: open /root/.helm/repository/local/index.yaml: no such file or directory
+ cp /tmp/nonrtric-common-2.0.0.tgz /root/.helm/repository/local/
+ helm repo index /root/.helm/repository/local/
+ helm repo remove local
Error: no repo named "local" found
+ helm repo add local http://127.0.0.1:8879/charts
"local" has been added to your repositories
...
Downloading nonrtric-common from repo http://127.0.0.1:8879/charts
Deleting outdated charts
Successfully packaged chart and saved it to: /tmp/nonrtricgateway-1.0.0.tgz
No requirements found in /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/nonrtric/helm/nonrtric/charts.
Error: unknown flag: --force-update
Finished Packaging NONRTRIC components [controlpanel a1controller a1simulator policymanagementservice enrichmentservice rappcatalogueservice nonrtricgateway]
...
Downloading nonrtric-common from repo http://127.0.0.1:8879/charts
Deleting outdated charts
Successfully packaged chart and saved it to: /tmp/nonrtricgateway-1.0.0.tgz
No requirements found in /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/nonrtric/helm/nonrtric/charts.
Error: unknown flag: --force-update
Finished Packaging NONRTRIC components [controlpanel a1controller a1simulator policymanagementservice enrichmentservice rappcatalogueservice nonrtricgateway]
namespace/nonrtric created
configmap/nonrtric-recipe created
Deploying NONRTRIC
helm install -f /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml --namespace nonrtric --name r2-dev-nonrtric /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/bin/../nonrtric/helm/nonrtric
Error: release r2-dev-nonrtric failed: no objects visited
======> Waiting for NONRTRIC to reach operatoinal state
example_recipe.yaml and
When trying to run the last line manually, this is what I get:
Purging and restarting again doesn't help.
I made a folder called charts in /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-dep/nonrtric/helm/nonrtric and copied Chart.yaml and values.yaml from nonrtric into it. Then I tried running the install script and I got:
======> Deploying ONAP-lite
fetching local/onap
Error: chart "onap" matching version "" not found in local index. (try 'helm repo update'). no chart name found
mv: cannot stat '/home/yev/.helm/plugins/deploy/cache/onap/charts/*': No such file or directory
mv: cannot stat '/home/yev/.helm/plugins/deploy/cache/onap-subcharts/common': No such file or directory
rm: cannot remove '/home/yev/.helm/plugins/deploy/cache/onap/requirements.lock': No such file or directory
mv: cannot stat '/home/yev/.helm/plugins/deploy/cache/onap/requirements.yaml': No such file or directory
Error: no Chart.yaml exists in directory "/home/yev/.helm/plugins/deploy/cache/onap"
release "dev" deployed
So I had to run sudo make all in /media/yev/UUI/Oran/dep/smo/bin/smo-deploy/smo-oom/kubernetes,
and I made sure that my copied charts do exist afterwards. Then I ran the install script again and saw a different error:
Chart.yaml:
#here are the other dependencies which all follow the same pattern except
Junior Salem
Hi Experts,
I faced some issues: the ricaux bronze-infra-kong pod can't reach Running.
ricaux bronze-infra-kong-68657d8dfd-285kf 0/2 ImagePullBackOff 0 11s
ricaux deployment-ricaux-ves-65db844758-zwp8f 1/1 Running 0 17m
the logs say:
Error from server (BadRequest): a container name must be specified for pod bronze-infra-kong-68657d8dfd-285kf, choose one of: [ingress-controller proxy]
I have tried to check it out using
kubectl logs -n ricaux bronze-infra-kong-68657d8dfd-285kf -c ingress-controller
and it says
Error from server (BadRequest): container "ingress-controller" in pod "bronze-infra-kong-68657d8dfd-285kf" is waiting to start: trying and failing to pull image
Could anyone please help?
Br,
Junior
Deepak Dutt
I am getting the same error for the cherry release as well. Can someone help with an update? (The "kubectl get pods --all-namespaces" output is the same as in my earlier comment.)
mhz
I am facing the same issue. Did you manage to resolve it?
Anonymous
Hi Team,
One of the deployments is not successful and some pods are not coming up. Kindly advise.
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
cherry-infra 1 Fri Jun 18 23:01:28 2021 DEPLOYED infrastructure-3.0.0 1.0 ricaux
cherry-ves 1 Fri Jun 18 23:03:00 2021 DEPLOYED ves-1.1.1 1.0 ricaux
dev 1 Fri Jun 18 22:02:05 2021 DEPLOYED onap-6.0.0 El Alto onap
dev-consul 1 Fri Jun 18 22:02:06 2021 DEPLOYED consul-6.0.0 onap
dev-dmaap 1 Fri Jun 18 22:02:07 2021 DEPLOYED dmaap-6.0.0 onap
dev-mariadb-galera 1 Fri Jun 18 22:02:09 2021 DEPLOYED mariadb-galera-6.0.0 onap
dev-msb 1 Fri Jun 18 22:02:11 2021 DEPLOYED msb-6.0.0 onap
dev-sdnc 1 Fri Jun 18 22:02:13 2021 FAILED sdnc-6.0.0 onap
r2-dev-nonrtric 1 Fri Jun 18 22:54:37 2021 DEPLOYED nonrtric-2.0.0 nonrtric
root@ubuntu:~# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5d4dd4b4db-fbj8d 1/1 Running 2 12d
kube-system coredns-5d4dd4b4db-mwrld 1/1 Running 2 12d
kube-system etcd-ubuntu 1/1 Running 4 12d
kube-system kube-apiserver-ubuntu 1/1 Running 3 12d
kube-system kube-controller-manager-ubuntu 1/1 Running 2 12d
kube-system kube-flannel-ds-rpqjt 1/1 Running 2 12d
kube-system kube-proxy-dnkbw 1/1 Running 2 12d
kube-system kube-scheduler-ubuntu 1/1 Running 2 12d
kube-system tiller-deploy-7c54c6988b-x4tkm 1/1 Running 2 12d
nonrtric a1-sim-osc-0 1/1 Running 0 11d
nonrtric a1-sim-osc-1 1/1 Running 0 8d
nonrtric a1-sim-std-0 1/1 Running 0 11d
nonrtric a1-sim-std-1 1/1 Running 0 8d
nonrtric a1controller-64c4f59fb5-dbt4q 1/1 Running 0 11d
nonrtric controlpanel-5ff4849bf-fp5bm 1/1 Running 0 11d
nonrtric db-549ff9b4d5-rqbjp 1/1 Running 1 11d
nonrtric enrichmentservice-8578855744-hqxww 1/1 Running 0 11d
nonrtric policymanagementservice-77c59f6cfb-lwrzw 1/1 Running 0 11d
nonrtric rappcatalogueservice-5945c4d84b-2z875 1/1 Running 0 11d
onap dev-consul-68d576d55c-wdnz6 1/1 Running 1 11d
onap dev-consul-server-0 1/1 Running 1 11d
onap dev-consul-server-1 1/1 Running 1 11d
onap dev-consul-server-2 1/1 Running 1 11d
onap dev-kube2msb-9b5dd678d-7hcg8 1/1 Running 0 5d
onap dev-mariadb-galera-0 0/1 Init:CrashLoopBackOff 1391 4d22h
onap dev-message-router-0 0/1 Init:0/1 119 20h
onap dev-message-router-kafka-0 1/1 Running 0 4d21h
onap dev-message-router-kafka-1 1/1 Running 0 4d21h
onap dev-message-router-kafka-2 1/1 Running 0 4d21h
onap dev-message-router-zookeeper-0 1/1 Running 0 4d22h
onap dev-message-router-zookeeper-1 1/1 Running 0 4d22h
onap dev-message-router-zookeeper-2 1/1 Running 0 4d22h
onap dev-msb-consul-65b9697c8b-zdktr 1/1 Running 1 11d
onap dev-msb-discovery-5db774ff9b-mkqnw 2/2 Running 0 5d
onap dev-msb-eag-5b94b8d7d9-tptkf 2/2 Running 0 5d
onap dev-msb-iag-79fd98f784-9rhcx 2/2 Running 0 4d23h
onap dev-sdnc-0 0/2 Init:1/3 696 4d21h
onap dev-sdnc-db-0 0/1 Init:CrashLoopBackOff 1380 4d21h
onap dev-sdnc-dmaap-listener-57977f7b9-n6mvr 0/1 Init:1/2 696 4d21h
onap dev-sdnc-dmaap-listener-5c77848759-8fbmb 0/1 Init:1/2 694 4d21h
onap dev-sdnc-sdnrdb-init-job-5xjfv 0/1 Init:Error 0 11d
onap dev-sdnc-sdnrdb-init-job-7z86s 0/1 ImagePullBackOff 0 4d21h
onap dev-sdnrdb-coordinating-only-9b9956fc-dglbh 2/2 Running 2 11d
onap dev-sdnrdb-master-0 1/1 Running 1 11d
onap dev-sdnrdb-master-1 1/1 Running 1 11d
onap dev-sdnrdb-master-2 1/1 Running 1 11d
ricaux cherry-infra-kong-7d6555954f-b56v2 2/2 Running 23 11d
ricaux deployment-ricaux-ves-6c8db5db65-kfltw 1/1 Running 0 11d
santosh tendolkar
Has this issue been resolved? I am getting the same error.
Anonymous
Hi Team,
The a1-sim-std2-0 pod in the nonrtric namespace is not coming up.
The logs present this message:
This pod is using the image "nexus3.o-ran-sc.org:10002/o-ran-sc/a1-simulator:2.0.0". Below is the pod describe output:
Could anyone please help here?
Regards.
Mark Geiszler
Hello Experts,
After running "./install initlocalrepo" there are 9 "kube-system" pods running, 26 "onap" pods running, 1 "onap" job completed, and 3 "onap" jobs in "Init:Error". Not sure if the failed jobs matter, since there is 1 completed job?
In any case there are none of any other pods. The console output shows the following:
...
Packaging NONRTRIC component [nonrtricgateway]
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
Saving 1 charts
Downloading nonrtric-common from repo http://127.0.0.1:8879/charts
Deleting outdated charts
Successfully packaged chart and saved it to: /tmp/nonrtricgateway-1.0.0.tgz
No requirements found in /root/dep/smo/bin/smo-deploy/smo-dep/nonrtric/helm/nonrtric/charts.
Error: unknown flag: --force-update
Finished Packaging NONRTRIC components [controlpanel a1controller a1simulator policymanagementservice enrichmentservice rappcatalogueservice nonrtricgateway]
Chart name-
namespace/nonrtric created
Install Kong- false
configmap/nonrtric-recipe created
Deploying NONRTRIC
helm install -f /root/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml --namespace nonrtric --name r3-dev-nonrtric /root/dep/smo/bin/smo-deploy/smo-dep/bin/../nonrtric/helm/nonrtric
Error: render error in "nonrtric/templates/pv2.yaml": template: nonrtric/templates/pv2.yaml:23:16: executing "nonrtric/templates/pv2.yaml" at <include "common.namespace.nonrtric" .>: error calling include: template: no template "common.namespace.nonrtric" associated with template "gotpl"
======> Waiting for NONRTRIC to reach operatoinal state
No resources found.
No resources found.
No resources found.
No resources found.
No resources found.
0/1 A1Controller pods, 0/1 ControlPanel pods,
0/1 DB pods, 0/1 PolicyManagementService pods,
and 0/4 A1Sim pods running
No resources found.
No resources found.
No resources found.
No resources found.
No resources found.
...
This waiting just loops forever. Please advise. Thank you
Berchris Requiao
This error is due to the missing requirements.yaml in the directory /root/dep/smo/bin/smo-deploy/smo-dep/bin/../nonrtric/helm/nonrtric:
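The attached file is not reproduced here, but a sketch of such a requirements.yaml can be pieced together from the component list the install script prints above (the chart versions below are assumptions; match them to what your local chart repository actually serves):

```shell
# Hedged sketch: write a requirements.yaml naming the NONRTRIC charts packaged
# earlier in the log. Versions are guesses; verify against your local charts.
cat > requirements.yaml <<'EOF'
dependencies:
  - name: nonrtric-common
    version: ^2.0.0
    repository: "@local"
  - name: a1controller
    version: ~2.0.0
    repository: "@local"
  - name: a1simulator
    version: ~2.0.0
    repository: "@local"
  - name: controlpanel
    version: ~2.0.0
    repository: "@local"
  - name: policymanagementservice
    version: ~2.0.0
    repository: "@local"
  - name: enrichmentservice
    version: ~1.0.0
    repository: "@local"
  - name: rappcatalogueservice
    version: ~1.0.0
    repository: "@local"
  - name: nonrtricgateway
    version: ~1.0.0
    repository: "@local"
EOF
```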
Mark Geiszler
Concerning the Bronze install, I used the contents of the requirements.yaml above; however, I encountered the same error. What changes need to be made to the file /root/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml? Thanks
...
2290 Deploying NONRTRIC
2291 helm install -f /root/dep/smo/bin/smo-deploy/smo-dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml --namespace nonrtric --name r3-dev-nonrtric /root/dep/smo/bin/smo-deploy/smo-dep /bin/../nonrtric/helm/nonrtric
2292 Error: render error in "nonrtric/templates/pv2.yaml": template: nonrtric/templates/pv2.yaml:23:16: executing "nonrtric/templates/pv2.yaml" at <include "common.namespace.nonrtric " .>: error calling include: template: no template "common.namespace.nonrtric" associated with template "gotpl"
2293 ======> Waiting for NONRTRIC to reach operatoinal state
...
Berchris Requiao
Confirm you have the below line in your dep/smo/bin/smo-deploy/smo-dep/nonrtric/RECIPE_EXAMPLE/example_recipe.yaml:
Mark Geiszler
Above content confirmed for example_recipe.yaml file.
Any other suggestions of how to resolve the error mentioned above? Thanks
Error: render error in "nonrtric/templates/pv2.yaml": template: nonrtric/templates/pv2.yaml:23:16: executing "nonrtric/templates/pv2.yaml" at <include "common.namespace.nonrtric " .>: error calling include: template: no template "common.namespace.nonrtric" associated with template "gotpl"
Berchris Requiao
Are you executing the script from inside the directory dep/smo/bin/smo-deploy/smo-dep/bin and running this command?
./deploy-nonrtric -f ../RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml
Mark Geiszler
I ran the deploy-nonrtric command above as you suggested. Here is the last part of the output, where it errors + a new "already exists" error. Please advise? Thanks
...
Successfully packaged chart and saved it to: /tmp/nonrtricgateway-1.0.0.tgz
No requirements found in /root/dep/smo/bin/smo-deploy/smo-dep/nonrtric/helm/nonrtric/charts.
Error: unknown flag: --force-update
Finished Packaging NONRTRIC components [controlpanel a1controller a1simulator policymanagementservice enrichmentservice rappcatalogueservice nonrtricgateway]
Chart name-
Install Kong- false
Error from server (AlreadyExists): configmaps "nonrtric-recipe" already exists
Deploying NONRTRIC
helm install -f ../RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml --namespace nonrtric --name r3-dev-nonrtric /root/dep/smo/bin/smo-deploy/smo-dep/bin/../nonrtric/helm/nonrtric
Error: render error in "nonrtric/templates/pv2.yaml": template: nonrtric/templates/pv2.yaml:23:16: executing "nonrtric/templates/pv2.yaml" at <include "common.namespace.nonrtric" .>: error calling include: template: no template "common.namespace.nonrtric" associated with template "gotpl"
Berchris Requiao
The error is stating there's no requirements file. Did you check whether there's a requirements.yaml in the directory /root/dep/smo/bin/smo-deploy/smo-dep/nonrtric/helm/nonrtric?
if not create the file with the following content:
Mark Geiszler
In the bronze dep/smo/bin/install script it appears that the following git repo is missing the requirements.yaml file in the nonrtric directory you mentioned above.
git clone http://gerrit.o-ran-sc.org/r/it/dep smo-dep
Berchris Requiao
It is indeed missing! I needed to create it to make it work for me.
Anonymous
How did you execute ./install again? When I create this file and run ./install again, I get this error: "0/7 SDNC-SDNR pods and 0/7 Message Router pods running". I have to remove the whole "dep" folder and re-run the whole process to avoid this error, but "smo-deploy" doesn't exist until I run the ./install command. Can you provide more steps?
Thank you in advance
Anonymous
Hi,
Creating the requirements.yaml helped, but there is a new error for me now.
"found in requirements.yaml, but missing in charts/ directory: nonrtric-common, a1controller, a1simulator, controlpanel, policymanagementservice, enrichmentservice, rappcatalogueservice, nonrtricgateway"
Any help will be appreciated!
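That Helm error means every dependency listed in requirements.yaml must have a matching chart under charts/ (or be fetchable from the local repo). A quick way to see which ones are absent (a sketch; `missing_chart_deps` is a hypothetical helper, not part of the dep scripts):

```shell
# Hedged sketch: list chart dependencies declared in requirements.yaml that
# have no matching archive or directory under charts/. Pass the chart dir as $1.
missing_chart_deps() {
  local chart_dir="$1"
  awk '/- name:/ {print $3}' "$chart_dir/requirements.yaml" | while read -r dep; do
    # The glob matches both unpacked dirs and packaged <name>-<version>.tgz files.
    if ! ls "$chart_dir/charts/$dep"* >/dev/null 2>&1; then
      echo "$dep"
    fi
  done
}
```

For any name it prints, re-package that component into the local repo (or copy its .tgz into charts/) before retrying the deploy.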
Anonymous
There is actually a requirements.yaml file. So what do the differences really mean? Here are the contents:
dependencies:
  - name: a1controller
    version: ~2.0.0
    repository: "@local"
  - name: a1simulator
    version: ~2.0.0
    repository: "@local"
  - name: controlpanel
    version: ~2.0.0
    repository: "@local"
  - name: policymanagementservice
    version: ~2.0.0
    repository: "@local"
  - name: nonrtric-common
    version: ^2.0.0
    repository: "@local"
Anonymous
So in this case check if you're using the recipe.yaml file
Anonymous
So in this case check if you're using the correct recipe.yaml file
Alexis Duque
I'm facing the same error as Junior Salem and Deepak Dutt with pod "cherry-infra-kong" while trying to install the Cherry release
Alexis Duque
I finally solved this error by replacing the Docker image of the Ingress Controller in ~/dep/ric-aux/helm/infrastructure/subcharts/kong/values.yml,
since the Kong Ingress Controller Docker image moved from Bintray to Docker Hub:
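That replacement can be scripted; a sketch (`fix_kong_image` is a hypothetical helper; `kong/kubernetes-ingress-controller` is where the image now lives on Docker Hub, but verify the tag for your release):

```shell
# Hedged sketch: rewrite the retired Bintray image reference in kong's
# values.yml to its Docker Hub location. Pass the values.yml path as $1.
fix_kong_image() {
  sed -i.bak \
    -e 's|kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller|kong/kubernetes-ingress-controller|g' \
    "$1"
}

# Example:
# fix_kong_image ~/dep/ric-aux/helm/infrastructure/subcharts/kong/values.yml
```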
Anonymous
Hello everyone i have a problem:
++ eval 'kubectl get pods -n kube-system | grep "Running" | wc -l'
+++ kubectl get pods -n kube-system
+++ grep Running
+++ wc -l
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ NUMPODS=0
+ echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
+ '[' 0 -lt 8 ']'
+ sleep 5
This happened after I performed these steps from a comment:
After Step 1 and Before Step #2 Change the following files
dep/tools/k8s/etc/infra.rc
INFRA_DOCKER_VERSION="19.03.6"
INFRA_HELM_VERSION="2.17.0"
INFRA_K8S_VERSION="1.15.9"
INFRA_CNI_VERSION="0.7.5"
dep/tools/k8s/heat/scripts/k8s_vm_install.sh
.
.
.
.
.
.
.
elif [[ ${UBUNTU_RELEASE} == 18.* ]]; then
  echo "Installing on Ubuntu $UBUNTU_RELEASE (Bionic Beaver)"
  if [ ! -z "${DOCKERV}" ]; then
    DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3"
  fi
.
.
dep/smo/bin/install
.
.
if [ "$1" == "initlocalrepo" ]; then
  echo && echo "===> Initialize local Helm repo"
  rm -rf ~/.helm #&& helm init -c # without this step helm serve may not work.
  helm init --stable-repo-url=https://charts.helm.sh/stable --client-only
  helm serve &
  helm repo add local http://127.0.0.1:8879
fi
.
.
If anyone has a solution, it would be very helpful for me.
Thanks
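The "connection to the server localhost:8080 was refused" symptom usually means kubectl has no kubeconfig for the new cluster. Assuming kubeadm was used (as the scripts in this thread do), copying the generated admin config into place is the usual fix; a sketch (`install_kubeconfig` is a hypothetical helper; run it with root privileges on the node):

```shell
# Hedged sketch: make the kubeadm-generated admin kubeconfig the default for
# kubectl. The default paths below are kubeadm's; adjust if yours differ.
install_kubeconfig() {
  local src="${1:-/etc/kubernetes/admin.conf}"
  local dest_dir="${2:-$HOME/.kube}"
  mkdir -p "$dest_dir"
  cp "$src" "$dest_dir/config"
}

# Example: install_kubeconfig   # then retry: kubectl get pods -n kube-system
```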
mhz
I am facing the same problem. Did you find a solution?
Anonymous
Me too
Aaron Chan
It's possible that either Docker was not properly installed from package 19.03.6-0ubuntu1~18.04.3, or the Docker services are running with errors.
Kamil Kociszewski
Hi,
I'm wondering: should SMO and the Near-RT RIC be on separate VMs, or doesn't it matter (or is the AUX VM dedicated for that)?
Apologies for such a basic question.
GeunHoe KIM
Kamil,
Yes, I used a separate machine.
When I used the same machine, I ran into some trouble.
I'm a beginner too. Good luck!
mhz
Hi all,
I have deployed SMO for the first time. The process was going well for quite a while, but it suddenly got stuck at the step below, showing no progress and no error.
[onap]
make[1]: Entering directory '/root/dep/smo/bin/smo-deploy/smo-oom/kubernetes'
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "local" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.
Saving 34 charts
Downloading aaf from repo http://127.0.0.1:8879
Downloading aai from repo http://127.0.0.1:8879
Downloading appc from repo http://127.0.0.1:8879
Downloading cassandra from repo http://127.0.0.1:8879
Downloading cds from repo http://127.0.0.1:8879
Downloading clamp from repo http://127.0.0.1:8879
Downloading cli from repo http://127.0.0.1:8879
Downloading common from repo http://127.0.0.1:8879
Downloading consul from repo http://127.0.0.1:8879
Downloading contrib from repo http://127.0.0.1:8879
Downloading dcaegen2 from repo http://127.0.0.1:8879
Downloading dcaemod from repo http://127.0.0.1:8879
Downloading dmaap from repo http://127.0.0.1:8879
Downloading esr from repo http://127.0.0.1:8879
Downloading log from repo http://127.0.0.1:8879
Downloading sniro-emulator from repo http://127.0.0.1:8879
Downloading mariadb-galera from repo http://127.0.0.1:8879
Downloading msb from repo http://127.0.0.1:8879
Downloading multicloud from repo http://127.0.0.1:8879
Downloading nbi from repo http://127.0.0.1:8879
Downloading pnda from repo http://127.0.0.1:8879
Downloading policy from repo http://127.0.0.1:8879
Downloading pomba from repo http://127.0.0.1:8879
Downloading portal from repo http://127.0.0.1:8879
Downloading oof from repo http://127.0.0.1:8879
Downloading robot from repo http://127.0.0.1:8879
Downloading sdc from repo http://127.0.0.1:8879
Downloading sdnc from repo http://127.0.0.1:8879
Downloading so from repo http://127.0.0.1:8879
Downloading uui from repo http://127.0.0.1:8879
Downloading vfc from repo http://127.0.0.1:8879
Downloading vid from repo http://127.0.0.1:8879
Downloading vnfsdk from repo http://127.0.0.1:8879
Downloading modeling from repo http://127.0.0.1:8879
Deleting outdated charts
And nothing is happening here. What could be the problem?
Thanks
Aleksey Suev
You just need to add an extra parameter (SKIP_LINT=TRUE) to the make command in the 'install' shell script:
make -e SKIP_LINT=TRUE
It will skip the Helm linting stage and allow the script to continue.
Aaron Chan
Hi,
Does anyone know the CNI configuration required for this exercise, as my kubelet services are probably not starting? Any help is much appreciated, thanks.
Run ...
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
$ systemctl status kubelet
kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Tue 2021-08-24 01:10:22 EDT; 2h 29min ago
Docs: https://kubernetes.io/docs/home/
Main PID: 10034 (kubelet)
Tasks: 21 (limit: 1112)
CGroup: /system.slice/kubelet.service
└─10034 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup
Aug 24 03:39:36 aaronchu-ubuntu1804-libvirt2 kubelet[10034]: E0824 03:39:36.824206 10034 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNo
Aug 24 03:39:40 aaronchu-ubuntu1804-libvirt2 kubelet[10034]: W0824 03:39:40.667054 10034 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Aug 24 03:39:41 aaronchu-ubuntu1804-libvirt2 kubelet[10034]: E0824 03:39:41.838700 10034 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNo
Aug 24 03:39:42 aaronchu-ubuntu1804-libvirt2 kubelet[10034]: E0824 03:39:42.096513 10034 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied na
Aug 24 03:39:45 aaronchu-ubuntu1804-libvirt2 kubelet[10034]: W0824 03:39:45.667522 10034 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
mhz
Hi all,
One of the pods is not running.
--------------
ricaux bronze-infra-kong-68657d8dfd-4n54w 1/2 ImagePullBackOff 3 46h
ricaux deployment-ricaux-ves-65db844758-p2rhp 1/1 Running 2 46h
------------
I tried the kubectl describe pod and it shows the following:
--------------
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 19m (x257 over 78m) kubelet, mhz-virtualbox Back-off pulling image "kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.7.0"
Warning Unhealthy 14m kubelet, mhz-virtualbox Liveness probe failed: Get http://10.244.0.114:9542/status: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Failed 8m47s (x290 over 78m) kubelet, mhz-virtualbox Error: ImagePullBackOff
Normal SandboxChanged 80s kubelet, mhz-virtualbox Pod sandbox changed, it will be killed and re-created.
Normal Created 72s kubelet, mhz-virtualbox Created container proxy
Normal Pulled 72s kubelet, mhz-virtualbox Container image "kong:1.4" already present on machine
Normal Started 68s kubelet, mhz-virtualbox Started container proxy
Warning Unhealthy 58s kubelet, mhz-virtualbox Readiness probe failed: Get http://10.244.0.137:9542/status: dial tcp 10.244.0.137:9542: connect: connection refused
Warning Unhealthy 54s kubelet, mhz-virtualbox Liveness probe failed: Get http://10.244.0.137:9542/status: dial tcp 10.244.0.137:9542: connect: connection refused
Warning Unhealthy 43s kubelet, mhz-virtualbox Readiness probe failed: Get http://10.244.0.137:9542/status: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Warning Failed 18s (x3 over 72s) kubelet, mhz-virtualbox Error: ErrImagePull
Warning Failed 18s (x3 over 72s) kubelet, mhz-virtualbox Failed to pull image "kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.7.0": rpc error: code = Unknown desc = Error response from daemon: error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>403 Forbidden</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>403 Forbidden</h1></center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
How do you think I can troubleshoot this ImagePullBackOff error?
Thank you
PRIYADHARSHINI G S
Hi,
To install the SMO Cherry release we use the command below.
How can the Dawn release of SMO be installed?
Is there a different repository for that installation?
Thank you
Anonymous
Did you find the Dawn version?
HARRY ZHANG
I am a newcomer. After running
./k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh
I noticed that the script does not seem to try to install Docker. By the time it needs Docker, the system throws an error. Could somebody tell me if I need to install Docker manually beforehand?
Thanks,
Harry
Aleksey Suev
You don't need to manually pre-install Docker. As a temporary workaround, you can leave the INFRA_DOCKER_VERSION parameter in infra.rc blank. This will allow you to pass through this step successfully.
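The workaround can be scripted; a sketch (`blank_docker_version` is a hypothetical helper, and the target file is the dep/tools/k8s/etc/infra.rc mentioned elsewhere in this thread):

```shell
# Hedged sketch: blank the Docker version pin in infra.rc so the install
# script skips the broken pinned-version install path.
blank_docker_version() {
  sed -i.bak 's/^INFRA_DOCKER_VERSION=.*/INFRA_DOCKER_VERSION=""/' "$1"
}

# Example: blank_docker_version dep/tools/k8s/etc/infra.rc
```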
HARRY ZHANG
Hi Aleksey,
Yes, you are right. It seems the pinned Docker version is what kept Docker from being installed. After leaving it blank, Docker is installed, but Kubernetes does not seem to create a networking container such as flannel, so the coredns containers failed to come up and Kubernetes only launched 7 containers instead of 8.
Regards,
Harry
HARRY ZHANG
Hi Aleksey,
Flannel failed to launch because my local firewall blocked the connection. Thank you for your help.
Regards,
Harry
HARRY ZHANG
Hi Everybody,
I am installing Bronze. It is stuck at:
0/1 A1Controller pods, 1/1 ControlPanel pods,
1/1 DB pods, 1/1 PolicyManagementService pods,
and 0/4 A1Sim pods running
Could anybody please help?
Thanks,
Anonymous
did you solve the problem, bro?
e.churikov
Anonymous
Not yet. I have been busy with other things. How about you? Have you installed it?
Anonymous
I'm trying to find a solution... So far, everything is sad hehe.
Stable:
"0/1 A1Controller pods, 1/1 ControlPanel pods,
1/1 DB pods, 1/1 PolicyManagementService pods,
and 0/4 A1Sim pods running"
=(
e.churikov
Federico Rossi
Please provide output of:
kubectl describe pod [a1controllerpod]
Same for A1 sim pods, thanks
Eduard Churikov
$ kubectl describe pod a1controllerpod
Error from server (NotFound): pods "a1controllerpod" not found
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5d4dd4b4db-tmq74 1/1 Running 1 94m
kube-system coredns-5d4dd4b4db-w7lmg 1/1 Running 1 94m
kube-system etcd-edikpc 1/1 Running 1 93m
kube-system kube-apiserver-edikpc 1/1 Running 1 93m
kube-system kube-controller-manager-edikpc 1/1 Running 1 93m
kube-system kube-flannel-ds-bjqpw 1/1 Running 1 94m
kube-system kube-proxy-fzkxl 1/1 Running 1 94m
kube-system kube-scheduler-edikpc 1/1 Running 1 93m
kube-system tiller-deploy-7c54c6988b-fl9fj 1/1 Running 1 92m
nonrtric controlpanel-7dc756b959-mcpc8 1/1 Running 0 4m39s
nonrtric enrichmentservice-0 1/1 Running 0 4m39s
nonrtric nonrtricgateway-7bcf7fdbf9-zsplx 1/1 Running 0 4m39s
nonrtric policymanagementservice-0 1/1 Running 0 4m39s
onap dev-consul-68d576d55c-kxvsw 1/1 Running 0 12m
onap dev-consul-server-0 1/1 Running 0 12m
onap dev-consul-server-1 1/1 Running 0 11m
onap dev-consul-server-2 1/1 Running 0 11m
onap dev-kube2msb-9fc58c48-zsfr6 1/1 Running 0 12m
onap dev-mariadb-galera-0 1/1 Running 0 12m
onap dev-mariadb-galera-1 1/1 Running 0 11m
onap dev-mariadb-galera-2 1/1 Running 0 10m
onap dev-message-router-0 1/1 Running 0 12m
onap dev-message-router-kafka-0 1/1 Running 1 12m
onap dev-message-router-kafka-1 1/1 Running 1 12m
onap dev-message-router-kafka-2 1/1 Running 1 12m
onap dev-message-router-zookeeper-0 1/1 Running 0 12m
onap dev-message-router-zookeeper-1 1/1 Running 0 12m
onap dev-message-router-zookeeper-2 1/1 Running 0 12m
onap dev-msb-consul-65b9697c8b-s4pkj 1/1 Running 0 12m
onap dev-msb-discovery-54b76c4898-ps7qh 2/2 Running 0 12m
onap dev-msb-eag-76d4b9b9d7-g7c69 2/2 Running 0 12m
onap dev-msb-iag-65c59cb86b-x2phm 2/2 Running 0 12m
onap dev-sdnc-0 2/2 Running 0 12m
onap dev-sdnc-db-0 1/1 Running 0 12m
onap dev-sdnc-dmaap-listener-5c77848759-5jqdf 1/1 Running 0 12m
onap dev-sdnc-sdnrdb-init-job-6rctk 0/1 Completed 0 12m
onap dev-sdnrdb-coordinating-only-9b9956fc-s7wsp 2/2 Running 0 12m
onap dev-sdnrdb-master-0 1/1 Running 0 12m
onap dev-sdnrdb-master-1 1/1 Running 0 11m
onap dev-sdnrdb-master-2 1/1 Running 0 10m
sorry if I misunderstood
Eduard Churikov
I was able to solve the problem!
You need to set A1controller & A1simulator to true in example_recipe.
Thanks Aleksey Suev for the tip!
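For anyone else wondering what "put true" refers to: the example recipe has per-component install flags. A hedged sketch of the relevant lines (the key names are assumptions based on this thread; verify against your own example_recipe.yaml before editing):

```shell
# Hedged sketch: the per-component enable flags in example_recipe.yaml that
# the tip above refers to. Key names are assumptions; check your file.
print_recipe_flags() {
  cat <<'EOF'
nonrtric:
  installA1controller: true
  installA1simulator: true
EOF
}
print_recipe_flags
```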
Bill Zalokostas
a1controller:
  a1controller:
    imagePullPolicy: Always
    image:
      registry: 'nexus3.onap.org:10002/onap'
      name: sdnc-image
      tag: 2.1.6
    replicaCount: 1
    service:
      allowHttp: true
      httpName: http
      internalPort1: 8282
      targetPort1: 8181
      httpsName: https
      internalPort2: 8383
      targetPort2: 8443
    liveness:
      initialDelaySeconds: 300
      periodSeconds: 10
    readiness:
      initialDelaySeconds: 60
      periodSeconds: 10
a1simulator:
  a1simulator:
    name: a1-sim
    imagePullPolicy: Always
    image:
      registry: 'nexus3.o-ran-sc.org:10002/o-ran-sc'
      name: a1-simulator
      tag: 2.2.0
    service:
      allowHttp: true
      httpName: http
      internalPort1: 8085
      targetPort1: 8085
      httpsName: https
      internalPort2: 8185
      targetPort2: 8185
    liveness:
      initialDelaySeconds: 20
      periodSeconds: 10
    readiness:
      initialDelaySeconds: 20
      periodSeconds: 10
    oscVersion:
      name: a1-sim-osc
      replicaCount: 2
    stdVersion:
      name: a1-sim-std
      replicaCount: 2
    stdVersion2:
      name: a1-sim-std2
      replicaCount: 2
Bill Zalokostas
What does it mean to set A1controller & A1simulator to true?
Kamil Kociszewski
Dear all,
I'm looking for a solution to change the default routes for the Control Panel.
In my case, the Control Panel tries to use the remote (VPN) address instead of the VM/container address:
"Http failure response for http://10.254.185.67:30091/a1-policy/v2/policy-types: 502 Bad Gateway"
→ I guess it should be one of these IPs
root@smo:~/dep/bin# kubectl get services -n ricaux
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cherry-infra-kong-proxy NodePort 10.108.153.144 <none> 80:32080/TCP,443:32443/TCP 2d22h
ric-entry ClusterIP 10.102.52.174 <none> 80/TCP,443/TCP 2d22h
service-ricaux-ves-http ClusterIP 10.98.92.60 <none> 8080/TCP,8443/TCP 2d22h
Federico Rossi
On the installation recipe you can control the ingress IPs:
# ricip should be the ingress controller listening IP for the platform cluster
# auxip should be the ingress controller listening IP for the AUX cluster
extsvcaux:
ricip: "10.0.0.1"
auxip: "10.0.0.1"
Kamil Kociszewski
Federico Rossi
I don't think I explained my problem deeply enough, sorry.
The problem is with the SMO Policy Control Panel web page.
I'm connecting to the remote server hosting the RIC/SMO over SSH, using the VM's floating IP.
On the browser host, I use the floating IP and the dedicated port to access the Control Panel: 10.254.185.67:30091.
But the Control Panel should use the local server addresses, 10.0.2.101 or the Kong address (I'm not sure which), to fetch the policy data,
instead of the address I use to connect through the browser.
"Http failure response for http://10.254.185.67:30091/a1-policy/v2/policy-types: 502 Bad Gateway" - error returned by the Control Panel web page.
So, to sum up, the Control Panel web page tries to use the VM's floating address to get the policy data instead of the local server addresses, and the result is a failure.
Federico Rossi
It's still not fully clear to me, but maybe this helps. Port 30091 maps to 8080, which is the nginx running on the control panel.
Look at the configuration: kubectl get cm controlpanel-configmap -n nonrtric -o yaml
The backend is actually forwarding the requests to the nonrtricgateway on port 9090, so the control panel is not using the Kong ingress in this case.
Now if you look at the nonrtricgateway config: kubectl get configmap nonrtricgateway-configmap -n nonrtric -o yaml
You can see how the requests are routed:
Your request http://10.254.185.67:30091/a1-policy/v2/policy-types is actually calling the policymanagementservice. You can try querying the policymanagementservice directly and see if it works; maybe the 502 error comes from there.
Not the answer you were looking for, but it should give you a little more detail. You can change the configmaps and restart the pods to apply a different config.
Francisco Javier Curieses Sanz
Many thanks Federico.
It works for me! I'm using the E release.
user-60c62
Hi,
I am trying to install SMO on a VM with Ubuntu 18.04.6 & 4 CPUs.
As per the comments of Javi G and Chirag Gandhi, I have updated my k8s... file to have:
INFRA_DOCKER_VERSION=""
INFRA_K8S_VERSION="1.15.9"
INFRA_CNI_VERSION="0.7.5"
INFRA_HELM_VERSION="2.17.0"
But while running the command "./k8s...",
the pods got up to 7/8 running, starting from 0.
Error:
+wget https://storage.googleapis.com/kubernetes-helm/helm-v2.17.0-linux-amd64.tar.gz
Resolving storage.googleapis.com
Connecting to storage.googleapis.com
HTTP request sent, awaiting response ... 403 Forbidden
+cd /root
+rm -rf Helm
+mkdir Helm
+cd Helm
tar: ../helm-v2.17.0-linux-amd64.tar.gz: Cannot Open: No such file or directory
tar: Error is not recoverable: exiting now
.
.
.
Failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden
I am new to this platform. I kindly request that you suggest a fix for these issues.
Francisco Javier Curieses Sanz
You can try with this:
INFRA_DOCKER_VERSION="18.09.7"
INFRA_K8S_VERSION="1.15.9"
INFRA_CNI_VERSION="0.7.5"
INFRA_HELM_VERSION="2.16.7"
It works for me on a VM with Ubuntu 18.04.6 & 4 CPUs.
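For convenience, those pins wrapped as a drop-in writer (a sketch; `write_infra_rc` is a hypothetical helper and the target file is dep/tools/k8s/etc/infra.rc):

```shell
# Hedged sketch: write the version pins reported working above into infra.rc.
write_infra_rc() {
  cat > "$1" <<'EOF'
INFRA_DOCKER_VERSION="18.09.7"
INFRA_K8S_VERSION="1.15.9"
INFRA_CNI_VERSION="0.7.5"
INFRA_HELM_VERSION="2.16.7"
EOF
}

# Example: write_infra_rc dep/tools/k8s/etc/infra.rc
```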
Anonymous
Hi guys,
In step 3 I get this error:
+ kubectl get pods --all-namespaces
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
(the "unable to recognize" line repeats six times)
+ wait_for_pods_running 8 kube-system
+ NS=kube-system
+ CMD='kubectl get pods --all-namespaces '
+ '[' kube-system '!=' all-namespaces ']'
+ CMD='kubectl get pods -n kube-system '
+ KEYWORD=Running
+ '[' 2 == 3 ']'
+ CMD2='kubectl get pods -n kube-system | grep "Running" | wc -l'
++ eval 'kubectl get pods -n kube-system | grep "Running" | wc -l'
+++ kubectl get pods -n kube-system
+++ grep Running
+++ wc -l
The connect
This part below keeps looping continuously:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ NUMPODS=0
+ echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
+ '[' 0 -lt 8 ']'
+ sleep 5
++ eval 'kubectl get pods -n kube-system | grep "Running" | wc -l'
+++ kubectl get pods -n kube-system
+++ grep Running
+++ wc -l
Any ideas? I have tried editing the infra.rc file based on the comments, but it isn't working.
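For context, kubectl falls back to localhost:8080 when it has no kubeconfig for the current user, so "connection refused" on that port usually means the kubeconfig was never set up, and the wait loop can then never count any pods. The loop itself boils down to the counting step below (a sketch fed canned output so it runs without a cluster; the SAMPLE text stands in for live `kubectl get pods -n kube-system` output):

```shell
# Sketch of the wait_for_pods_running counting step from the trace above.
# On a real node the script pipes `kubectl get pods -n kube-system`
# into the same grep | wc pipeline; here canned output is used instead.
SAMPLE='coredns-abc            1/1   Running   0   5m
kube-proxy-xyz         1/1   Running   0   5m
kube-flannel-ds-1      0/1   Pending   0   5m'
NUMPODS=$(printf '%s\n' "$SAMPLE" | grep -c "Running")
echo "> waiting for ${NUMPODS}/8 pods running in namespace [kube-system]"
```

If kubectl itself cannot reach the API server, this count stays at 0 forever, which matches the endless `0/8` loop reported above.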
allan martins
You might want to change the docker version to "" and k8s to 1.16.0 (I believe the one present there is 1.15.9). I'll also have to make some modifications to all the files in the dep folder (and subfolders) to fix the helm versions (they are not fixed by the cloud_init.sh script).
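For reference, the version variables live in dep/tools/k8s/etc/infra.rc; the edit suggested above would look roughly like this (values taken from this thread — treat them as an example, not as the only working combination):

```shell
# Example dep/tools/k8s/etc/infra.rc fragment per the suggestion above.
# An empty INFRA_DOCKER_VERSION means "use the docker already installed".
INFRA_DOCKER_VERSION=""
INFRA_K8S_VERSION="1.16.0"
INFRA_CNI_VERSION="0.7.5"
INFRA_HELM_VERSION="2.17.0"
```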
Tomas J. Cantador
Hello.
I am trying to do the SMO installation in a virtual machine but I cannot, because the following message appears: "Unsupported kubernetes version required".
I installed kubernetes 1.17.2 and uninstalled kubernetes 1.23.4, but it reinstalled 1.23.4 and I could not finish the k8s script. Do you know why? Thank you (I installed helm 2.17.0).
Francisco Javier Curieses Sanz
At what step of the installation?
Have you modified the infra.rc file (dep/tools/k8s/etc/infra.rc)?
Garazi Aranburu
Hello,
I am almost finishing the SMO Installation, but I am facing up with this problem. Can somebody figure it out?
root@oran-smo:~/dep/smo/bin# ./install
===> Starting at Fri Mar 4 11:42:18 UTC 2022
===> Cleaning up any previous deployment
======> Deleting all Helm deployments
Error: command 'delete' requires a release name
======> Clearing out all ONAP deployment resources
Error from server (NotFound): namespaces "onap" not found
No resources found
error: resource(s) were provided, but no name, label selector, or --all flag specified
error: resource(s) were provided, but no name, label selector, or --all flag specified
======> Clearing out all RICAUX deployment resources
Error from server (NotFound): namespaces "ricaux" not found
Error from server (NotFound): namespaces "ricinfra" not found
======> Clearing out all NONRTRIC deployment resources
Error from server (NotFound): namespaces "nonrtric" not found
======> Preparing for redeployment
node/oran-smo not labeled
node/oran-smo not labeled
node/oran-smo not labeled
======> Preparing working directory
===> Deploying OAM (ONAP Lite)
======> Deploying ONAP-lite
fetching local/onap
release "dev" deployed
release "dev-consul" deployed
release "dev-dmaap" deployed
release "dev-mariadb-galera" deployed
release "dev-msb" deployed
release "dev-sdnc" deployed
======> Waiting for ONAP-lite to reach operatoinal state
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
Roy Zhu
Hello,
Can anyone point me to the guidance on how to deploy SMO in the E-release? This page seems outdated.
Thanks.
Roy Zhu
I'll reply to the question myself. Probably someone already knew this, but I still put it here as a reference for newcomers. Any comments are welcome.
Below are my notes for a quick SMO deployment in the E-release.
Follow the steps in dep/smo-install/README.md, in the Quick Installation section. After quite a while, the following output appears
Below is the pods list,
And the services exposed,
Xi Yang
Thanks for the up to date instructions! It worked. My only problem is that the default `admin:Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U` credential does not work for the ONAP ODLUX portal.
In the SDNC container, I do see these are the correct username / password though.
bash-5.0$ env |grep ADMIN
ODL_ADMIN_PASSWORD=Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U
ODL_ADMIN_USERNAME=admin
Xi Yang
Looks like pod onap-aaf-sms-preload-1 failed to load secret onap-cps-core-app-user-creds. A simple fix:
echo -n 'admin' > ./login.txt
echo -n 'password' > ./password.txt
kubectl create secret -n onap generic onap-cps-core-app-user-creds \
--from-file=login=./login.txt \
--from-file=password=./password.txt
Then restart the onap-aaf-sms-preload-1, onap-aaf-sms, onap-sdnc-0 and onap-sdnc-web pods.
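To double-check that the secret was created with the intended values, note that Kubernetes stores secret data base64-encoded; reading it back would be something like `kubectl get secret -n onap onap-cps-core-app-user-creds -o jsonpath='{.data.login}' | base64 -d`. The encoding round trip itself looks like this:

```shell
# Kubernetes stores secret values base64-encoded; this is the round trip
# behind the 'login' key created above.
ENCODED=$(printf 'admin' | base64)
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
echo "$ENCODED"   # base64 form, as stored in the secret's data field
echo "$DECODED"   # decoded value, as the application sees it
```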
Anonymous
Hi experts,
I am trying out the SMO installation and I am running into what is possibly an infinite loop in step 3.
I must mention that I separately installed docker and left it blank in the infra.rc file, since I was getting an error otherwise.
Anonymous
Hi, I have the same problem.
I tried the solution with
INFRA_DOCKER_VERSION=""
INFRA_K8S_VERSION="1.15.9"
INFRA_CNI_VERSION="0.7.5"
INFRA_HELM_VERSION="2.17.0"
But nothing changed; it's always the same problem.
Anonymous
You need to install a specific version of Docker before starting the SMO installation. Follow the instructions in Install Docker Engine on Ubuntu | Docker Documentation.
santosh tendolkar
Hi All ,
I am trying to install the Cherry release of SMO. I managed to successfully complete the first 3 steps of the installation, but I am having the below issue with the 4th step: it remains in a loop forever. I have checked the pod states in the onap namespace; some pods remain in Init state and some frequently crash. Find the pod states below. Can anyone guide me on how to resolve this issue?
NAME READY STATUS RESTARTS AGE
dev-consul-68d576d55c-qspwq 1/1 Running 0 8h
dev-consul-server-0 1/1 Running 0 8h
dev-consul-server-1 1/1 Running 0 8h
dev-consul-server-2 1/1 Running 0 8h
dev-kube2msb-9b5dd678d-f5fg9 0/1 CrashLoopBackOff 95 8h
dev-mariadb-galera-0 0/1 CrashLoopBackOff 123 8h
dev-message-router-0 0/1 Init:0/1 48 8h
dev-message-router-kafka-0 0/1 CrashLoopBackOff 87 8h
dev-message-router-kafka-1 0/1 CrashLoopBackOff 87 8h
dev-message-router-kafka-2 0/1 CrashLoopBackOff 87 8h
dev-message-router-zookeeper-0 1/1 Running 0 8h
dev-message-router-zookeeper-1 1/1 Running 0 8h
dev-message-router-zookeeper-2 1/1 Running 0 8h
dev-msb-consul-77f4d5fd44-qfp8r 1/1 Running 0 8h
dev-msb-discovery-5db774ff9b-pslg7 2/2 Running 0 8h
dev-msb-eag-5b94b8d7d9-tlvvj 2/2 Running 0 8h
dev-msb-iag-79fd98f784-qnc8d 2/2 Running 0 8h
dev-sdnc-0 2/2 Running 0 8h
dev-sdnc-db-0 0/1 Running 127 8h
dev-sdnc-dmaap-listener-57977f7b9-62szx 0/1 Init:1/2 48 8h
dev-sdnc-sdnrdb-init-job-vk8m8 0/1 Completed 0 8h
dev-sdnrdb-coordinating-only-587798dccf-l652l 2/2 Running 0 8h
dev-sdnrdb-master-0 1/1 Running 0 8h
dev-sdnrdb-master-1 1/1 Running 0 8h
dev-sdnrdb-master-2 1/1 Running 0 8h
kubectl describe output for dev-mariadb-galera-0:
kubectl describe pods dev-mariadb-galera-0 -n onap
Name: dev-mariadb-galera-0
Namespace: onap
Priority: 0
Node: oran-smo/172.20.0.4
Start Time: Wed, 17 Aug 2022 19:35:30 +0000
Labels: app=dev-mariadb-galera
chart=mariadb-galera-6.0.0
controller-revision-hash=dev-mariadb-galera-7b9574778c
heritage=Tiller
release=dev
statefulset.kubernetes.io/pod-name=dev-mariadb-galera-0
Annotations: pod.alpha.kubernetes.io/initialized: true
Status: Running
IP: 10.244.0.237
Controlled By: StatefulSet/dev-mariadb-galera
Init Containers:
mariadb-galera-prepare:
Container ID: docker://ab472c0d8ee7479f36d7b22bfdcee717d8b4b225557bab5b2c8fbbdc02c3593d
Image: nexus3.onap.org:10001/busybox
Image ID: docker-pullable://busybox@sha256:ef320ff10026a50cf5f0213d35537ce0041ac1d96e9b7800bafd8bc9eff6c693
Port: <none>
Host Port: <none>
Command:
sh
-c
chown -R 27:27 /var/lib/mysql
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 17 Aug 2022 19:35:56 +0000
Finished: Wed, 17 Aug 2022 19:35:56 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/lib/mysql from dev-mariadb-galera-data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wk42p (ro)
Containers:
mariadb-galera:
Container ID: docker://16f9b922558b30a34f2ee4700379a2b16a9945a043daa44f1d1699cf5ca9dc74
Image: nexus3.onap.org:10001/adfinissygroup/k8s-mariadb-galera-centos:v002
Image ID: docker-pullable://nexus3.onap.org:10001/adfinissygroup/k8s-mariadb-galera-centos@sha256:fbcb842f30065ae94532cb1af9bb03cc6e2acaaf896d87d0ec38da7dd09a3dde
Ports: 3306/TCP, 4444/TCP, 4567/TCP, 4568/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Thu, 18 Aug 2022 03:43:09 +0000
Finished: Thu, 18 Aug 2022 03:44:38 +0000
Ready: False
Restart Count: 125
Liveness: exec [mysqladmin ping] delay=30s timeout=5s period=10s #success=1 #failure=3
Readiness: exec [/usr/share/container-scripts/mysql/readiness-probe.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
Environment:
POD_NAMESPACE: onap (v1:metadata.namespace)
MYSQL_USER: <set to the key 'login' in secret 'dev-mariadb-galera-db-user-credentials'> Optional: false
MYSQL_PASSWORD: <set to the key 'password' in secret 'dev-mariadb-galera-db-user-credentials'> Optional: false
MYSQL_DATABASE:
MYSQL_ROOT_PASSWORD: <set to the key 'password' in secret 'dev-mariadb-galera-db-root-password'> Optional: false
Mounts:
/etc/localtime from localtime (ro)
/usr/share/container-scripts/mysql/configure-mysql.sh from init-script (rw,path="configure-mysql.sh")
/var/lib/mysql from dev-mariadb-galera-data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wk42p (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
dev-mariadb-galera-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: dev-mariadb-galera-data-dev-mariadb-galera-0
ReadOnly: false
init-script:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: dev-mariadb-galera
Optional: false
localtime:
Type: HostPath (bare host directory volume)
Path: /etc/localtime
HostPathType:
default-token-wk42p:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wk42p
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 37m (x896 over 8h) kubelet, oran-smo Readiness probe failed: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2 "No such file or directory")
Warning Unhealthy 13m (x372 over 8h) kubelet, oran-smo Liveness probe failed: mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2 "No such file or directory")'
Check that mysqld is running and that the socket: '/var/lib/mysql/mysql.sock' exists!
Warning BackOff 3m1s (x1529 over 8h) kubelet, oran-smo Back-off restarting failed container
kubectl describe pods dev-kube2msb-9b5dd678d-f5fg9 -n onap
Name: dev-kube2msb-9b5dd678d-f5fg9
Namespace: onap
Priority: 0
Node: oran-smo/172.20.0.4
Start Time: Wed, 17 Aug 2022 19:35:29 +0000
Labels: app=kube2msb
pod-template-hash=9b5dd678d
release=dev
Annotations: sidecar.istio.io/inject: true
Status: Running
IP: 10.244.0.227
Controlled By: ReplicaSet/dev-kube2msb-9b5dd678d
Init Containers:
kube2msb-readiness:
Container ID: docker://2ebc7f96179b882fc317dd5e2d875a7b5067e1ecb8bbcd2f3f2af66ba8278715
Image: oomk8s/readiness-check:2.2.1
Image ID: docker-pullable://oomk8s/readiness-check@sha256:dc4d07abf8d936823f1968698f3ac9b9d719e82271fd8243aa142a5f1b92a833
Port: <none>
Host Port: <none>
Command:
/root/ready.py
Args:
--container-name
msb-discovery
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 17 Aug 2022 19:35:48 +0000
Finished: Wed, 17 Aug 2022 19:36:36 +0000
Ready: True
Restart Count: 0
Environment:
NAMESPACE: onap (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from msb-token-6wfxl (ro)
Containers:
kube2msb:
Container ID: docker://703758cb122b4c2b466503d4f285374b08b1359ad5a2657f0fd0d6f76714f553
Image: nexus3.onap.org:10001/onap/oom/kube2msb:1.2.6
Image ID: docker-pullable://nexus3.onap.org:10001/onap/oom/kube2msb@sha256:c0f690dbe43d38a3663b1f54fabd9bf75933235b4640ed3ef8688cea4461e351
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 18 Aug 2022 03:47:25 +0000
Finished: Thu, 18 Aug 2022 03:47:40 +0000
Ready: False
Restart Count: 97
Environment:
KUBE_MASTER_URL: https://kubernetes.default:443
MSB_URL: http://msb-discovery.onap:10081
Mounts:
/etc/localtime from localtime (ro)
/var/run/secrets/kubernetes.io/serviceaccount from msb-token-6wfxl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
localtime:
Type: HostPath (bare host directory volume)
Path: /etc/localtime
HostPathType:
msb-token-6wfxl:
Type: Secret (a volume populated by a Secret)
SecretName: msb-token-6wfxl
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 31m (x93 over 8h) kubelet, oran-smo Container image "nexus3.onap.org:10001/onap/oom/kube2msb:1.2.6" already present on machine
Warning BackOff 75s (x2189 over 8h) kubelet, oran-smo Back-off restarting failed container
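When a deployment produces a mix like the listing above, it can help to filter the listing down to just the unhealthy pods before running `kubectl describe` on each one. A small sketch (fed canned output here so it runs standalone; against the live cluster you would pipe `kubectl get pods -n onap` into it instead):

```shell
# Print only pods whose STATUS column is neither Running nor Completed.
unhealthy_pods() {
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { print $1, $3 }'
}
# Canned listing standing in for `kubectl get pods -n onap` output:
SAMPLE='NAME                        READY  STATUS            RESTARTS  AGE
dev-consul-68d576d55c-qspwq 1/1    Running           0         8h
dev-mariadb-galera-0        0/1    CrashLoopBackOff  123       8h
dev-message-router-0        0/1    Init:0/1          48        8h'
printf '%s\n' "$SAMPLE" | unhealthy_pods
```

For the pods this prints, `kubectl describe` (as done above) and the container logs are the usual next step in narrowing down the crash cause.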
Anonymous
Hi, I get the same issue while doing this.
Step 4: Installation of Kubernetes, Helm, Docker, etc.
Run ...
$ ./k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh
Here I received something like:
Waiting for 5/8 pods running in namespace.
Only 5 pods were created, and it stays there. Can anyone help with this?
br
ajeet
Anonymous
Hi all, appreciation and thanks to everyone for such a nice deployment process.
I am getting an issue here related to:
Checking Container Health
Check the health of the application manager platform component by querying it via the ingress controller using the following command.
Mahesh Jethanandani
Hi Ajeet,
It is not clear to me why a RIC platform issue is being filed under SMO installation. Can you explain what exactly it is that you are trying to do?
Cheers.
Anonymous
Hi, Mahesh thanks,
I am sorry, it may have been wrongly posted here. Thanks for letting me know; anyway, my RIC issue is resolved.
br
ajeet
Anonymous
Dears, is there a solution?
NUMPODS=5
+ echo '> waiting for 5/8 pods running in namespace [kube-system] with keyword [Running]'
> waiting for 5/8 pods running in namespace [kube-system] with keyword [Running]
+ '[' 5 -lt 8 ']'
+ sleep 5
^Z
Mahesh Jethanandani
Context helps. Could you point to what exactly you are trying to do?
Francisco Javier Curieses Sanz
Hi,
I had the same problem; the core-dns pods were always in "Pending" status. It's necessary to deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other.
Sometimes (I don't know why) these pods don't get created correctly, and you need to force them.
To solve this you can execute, while the others pods are deploying:
kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
Please let me know if this works for you.
BR,
Fran
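As a quick check of whether the CNI fix took effect, one can count how many core-dns pods are still Pending; the count should drop to 0 once the add-on is deployed (a sketch with canned output; on the cluster, pipe `kubectl get pods -n kube-system` into it instead):

```shell
# Count coredns pods stuck in Pending; this should reach 0 once a CNI
# add-on (calico/flannel) is deployed and the pods get networking.
pending_coredns() {
  grep 'coredns' | grep -c 'Pending'
}
# Canned listing standing in for `kubectl get pods -n kube-system`:
SAMPLE='coredns-5c98db65d4-abcde 0/1 Pending 0 10m
coredns-5c98db65d4-fghij 0/1 Pending 0 10m
etcd-oran-smo            1/1 Running 0 10m'
STUCK=$(printf '%s\n' "$SAMPLE" | pending_coredns)
echo "$STUCK coredns pods still Pending"
```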
Mahesh Jethanandani
Need to remind folks to check where the issue is being discussed. This page is dedicated to SMO installation, and this does not seem to be related to SMO installation.
santosh tendolkar
Mahesh Jethanandani Earlier there used to be a consolidated step-by-step guide for SMO installation (authored by Farheen Cefalu), up until the Bronze release. That link is no longer available. Could you please share the link for the Kubernetes-cluster-based installation of SMO, and the Gerrit links for the software downloads?
Regards
Santosh Tendolkar
Mahesh Jethanandani
Hi santhosh kumar, that process is no longer valid. SMO is now three different projects. Depending on what you are trying to do, you can look up docs.o-ran-sc.org for installation instructions.