- Created by Lusheng Ji, last modified on Jun 22, 2020
VM Minimum Requirements for RIC 22
NOTE: sudo access is required for installation. See the Getting Started PDF.
Step 1: Obtaining the Deployment Scripts and Charts
NOTE: cd to the directory where the installation will be, e.g. /home/user. Run ...
$ sudo -i
Step 2: Generation of cloud-init Script
This step will generate a script that sets up a one-node Kubernetes cluster for installing the SMO components. The resulting script can be run as the cloud-init script when launching the VM, or run at a root shell after the VM is launched. Note: because the O-RAN SC Bronze SMO consists of components from the ONAP Frankfurt release, the infrastructure software stack (Docker, Kubernetes, and Helm) versions are those required by the ONAP deployment. This is a different combination from what the Near RT RIC requires. Run ...
$ cd tools/k8s/etc
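The component versions the generated script installs are controlled by infra.rc in this directory. The defaults have aged; the combination below comes from commenters' reports later on this page, not an official recommendation, so verify it against your Ubuntu release:

```shell
# tools/k8s/etc/infra.rc -- version combination reported working in the
# comments below (not officially validated; adjust for your environment)
INFRA_DOCKER_VERSION=""          # empty lets apt pick the available docker.io
INFRA_HELM_VERSION="2.17.0"      # 2.16.x fails now: its stable repo URL is gone
INFRA_K8S_VERSION="1.15.9"
INFRA_CNI_VERSION="0.7.5"
```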
Step 3: Installation of Kubernetes, Helm, Docker, etc.
Run ...
$ ./k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh
NOTE: Be patient, as this takes some time to complete. Upon completion of this script, the VM will be rebooted. You will then need to log in once again:
$ sudo -i
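While it runs, the script waits for the kube-system pods by polling kubectl in a loop. The sketch below mirrors that loop but is fed canned output so it can run without a live cluster; the real script pipes `kubectl get pods -n kube-system` instead of the sample function:

```shell
#!/bin/sh
# Canned stand-in for `kubectl get pods -n kube-system`, used here so the
# sketch runs standalone; the real script queries the live cluster.
sample_output() {
cat <<'EOF'
coredns-5d4dd4b4db-972pr   1/1   Running   1   26m
etcd-smobronze             1/1   Running   1   25m
kube-apiserver-smobronze   1/1   Running   1   25m
EOF
}

# Count pods whose status line contains "Running"
NUMPODS=$(sample_output | grep -c "Running")
echo "> waiting for ${NUMPODS}/8 pods running in namespace [kube-system] with keyword [Running]"
if [ "$NUMPODS" -lt 8 ]; then
  echo "not ready yet; the real script sleeps 5 seconds and polls again"
fi
```

If the poll never advances past 0/8, check the output earlier in the run for a failed `kubeadm init` (commenters below diagnose several causes, such as enabled swap or an unavailable docker.io package version).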
Step 4: Deploy SMO
The O-RAN Bronze SMO consists of components from three separate groups: ONAP, Non RT RIC, and RIC-AUX. It is deployed by running a script under smo/bin. Run ...
$ cd smo/bin
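In outline, the install script cleans up any previous deployment and then deploys the three groups in order. The sketch below stubs each phase with an echo so it runs standalone; the first message matches the script's log output quoted in the comments below, while the other two messages are illustrative (the real phases drive helm and kubectl):

```shell
#!/bin/sh
# Deployment order of the SMO install script, stubbed so the sketch runs
# standalone; only the first log line is verbatim from the real script.
deploy_onap_lite() { echo "===> Deploying OAM (ONAP Lite)"; }
deploy_nonrtric()  { echo "===> Deploying NONRTRIC"; }
deploy_ricaux()    { echo "===> Deploying RIC-AUX"; }

deploy_onap_lite
deploy_nonrtric
deploy_ricaux
```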
Helpful Hints
Kubectl commands:
kubectl get pods -n namespace    # lists the Pods running in the namespace
kubectl logs -n namespace name_of_running_pod    # shows the logs of a running Pod
83 Comments
Zhengwei Gao
hello experts,
After ONAP deployment, I have a pod that is not ready but Completed. Is that normal? Thanks.
dev-sdnc-sdnrdb-init-job-6v525 0/1 Completed 0 18m 10.244.0.159 smobronze <none> <none>
and is it possible to set the pullPolicy to IfNotPresent for all the charts of onap?
Lusheng Ji
That is okay. You can see that it is actually a job, not a regular pod. You may view its status by:
or even:
Anonymous
Hi Lusheng,
Thanks for the quick reply. Now I have SMO fully deployed.
(10:57 dabs@smobronze bin) > sudo kubectl get jobs --all-namespaces
NAMESPACE NAME COMPLETIONS DURATION AGE
onap dev-sdnc-sdnrdb-init-job 1/1 34m 74m
Anonymous
Hi. I am new to O-RAN and trying to deploy SMO as well as the RIC. I was successfully able to install SMO with the given steps, but while deploying I keep on getting the messages below:
===> Deploying OAM (ONAP Lite)
======> Deploying ONAP-lite
Error: unknown command "deploy" for "helm"
Run 'helm --help' for usage.
======> Waiting for ONAP-lite to reach operatoinal state
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
None of these pods are getting up and running for hours; in fact, none of them are even present. Did I miss any step, or do I need to install any component of ONAP before this?
Zhengwei Gao
I had once encountered this problem:
make sure deploy/undeploy plugins exist in /root/.helm:
dabs@RICBronze:~/oran/dep/smo/bin/smo-deploy/smo-oom$sudo cp -R ./kubernetes/helm/plugins/ /root/.helm
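Both the deploy and undeploy plugins need to be present. A quick check along these lines can confirm it (the helper name is illustrative; /root/.helm is the helm home when installing via sudo -i):

```shell
#!/bin/sh
# Verify the ONAP helm deploy/undeploy plugins are installed under a given
# helm home directory (defaults to /root/.helm for root installs).
check_helm_plugins() {
  base="${1:-/root/.helm}"
  for p in deploy undeploy; do
    if [ -d "$base/plugins/$p" ]; then
      echo "$p plugin present"
    else
      echo "$p plugin missing: copy it from smo-oom/kubernetes/helm/plugins/"
    fi
  done
}

check_helm_plugins /root/.helm
```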
Chirag Gandhi
Thanks for prompt response.
Even after copying the plugins, I am getting same messages. But I am getting initial error message as :-
===> Deploying OAM (ONAP Lite)
======> Deploying ONAP-lite
fetching local/onap
Error: chart "onap" matching version "" not found in local index. (try 'helm repo update'). no chart name found
mv: cannot stat '/root/.helm/plugins/deploy/cache/onap/charts/*': No such file or directory
mv: cannot stat '/root/.helm/plugins/deploy/cache/onap-subcharts/common': No such file or directory
rm: cannot remove '/root/.helm/plugins/deploy/cache/onap/requirements.lock': No such file or directory
mv: cannot stat '/root/.helm/plugins/deploy/cache/onap/requirements.yaml': No such file or directory
Error: could not find a ready tiller pod
release "dev" deployed
Error: could not find a ready tiller pod
Error: could not find a ready tiller pod
======> Waiting for ONAP-lite to reach operatoinal state
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
Seems that the onap charts are not present. I even tried "helm repo update" but no luck. Can you share how to get them?
Lusheng Ji
Chirag,
Did steps 1 through 3 complete without error for your installation? It appears that your k8s cluster is not running correctly (i.e. the error msg complaining about the tiller pod not being in ready state).
Lusheng
Zhengwei Gao
Your tiller pod is not ready.
1st, make sure the tiller pod is in running state
2nd, make sure helm service is in running state:
sudo helm serve
3rd, make sure 'helm search onap' has 35 charts listed. If not, you can manually issue the 'make all' command:
dabs@smobronze:~/oran/dep/smo/bin/smo-deploy/smo-oom/kubernetes$ sudo make all
Anonymous
Zhengwei,
I executed the instruction 'sudo make all', but I got the following error.
Downloading common from repo http://127.0.0.1:8879/charts
Save error occurred: could not download http://127.0.0.1:8879/common-6.0.0.tgz: Get http://127.0.0.1:8879/common-6.0.0.tgz: dial tcp 127.0.0.1:8879: connect: connection refused
Deleting newly downloaded charts, restoring pre-update state
Error: could not download http://127.0.0.1:8879/common-6.0.0.tgz: Get http://127.0.0.1:8879/common-6.0.0.tgz: dial tcp 127.0.0.1:8879: connect: connection refused
Makefile:61: recipe for target 'dep-robot' failed
make[1]: *** [dep-robot] Error 1
make[1]: Leaving directory '/root/dep/smo/bin/smo-deploy/smo-oom/kubernetes'
Makefile:46: recipe for target 'robot' failed
make: *** [robot] Error 2
What can I do? Thank you.
Tim
Anonymous
Have you performed the 2nd step?
Anonymous
Yes. But what response should I expect?
When I executed the 2nd step, the terminal got stuck and showed this message.
Regenerating index. This may take a moment.
Now serving you on 127.0.0.1:8879
Should I keep waiting?
Thanks for help.
Anonymous
Yes. You can just leave 'sudo helm serve' running in one terminal (it is the helm server), and go on with 'sudo make all' in another terminal.
Anonymous
Wow, Thank you for your help
I think my problem has been resolved .
Anonymous
I did all the 3 steps: still, I am getting the same error
root@xxxx:~/dep/smo/bin# sudo ./install
===> Starting at Tue Sep 8 07:42:45 UTC 2020
===> Cleaning up any previous deployment
======> Deleting all Helm deployments
Error: command 'delete' requires a release name
======> Clearing out all ONAP deployment resources
Error from server (NotFound): namespaces "onap" not found
No resources found.
error: resource(s) were provided, but no name, label selector, or --all flag specified
error: resource(s) were provided, but no name, label selector, or --all flag specified
======> Clearing out all RICAUX deployment resources
Error from server (NotFound): namespaces "ricaux" not found
Error from server (NotFound): namespaces "ricinfra" not found
======> Clearing out all NONRTRIC deployment resources
Error from server (NotFound): namespaces "nonrtric" not found
======> Preparing for redeployment
node/ebtic07 not labeled
node/ebtic07 not labeled
node/ebtic07 not labeled
======> Preparing working directory
===> Deploying OAM (ONAP Lite)
======> Deploying ONAP-lite
Error: unknown command "deploy" for "helm"
Run 'helm --help' for usage.
======> Waiting for ONAP-lite to reach operatoinal state
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
Anonymous
Can anyone please help me here? The tiller pod is also up and running:
root@xxx07:~/.helm/plugins/deploy# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5d4dd4b4db-972pr 1/1 Running 1 26m
kube-system coredns-5d4dd4b4db-hrhkn 1/1 Running 1 26m
kube-system etcd-ebtic07 1/1 Running 1 25m
kube-system kube-apiserver-ebtic07 1/1 Running 1 25m
kube-system kube-controller-manager-ebtic07 1/1 Running 1 26m
kube-system kube-flannel-ds-2z2vx 1/1 Running 1 26m
kube-system kube-proxy-5shnx 1/1 Running 1 26m
kube-system kube-scheduler-ebtic07 1/1 Running 1 25m
kube-system tiller-deploy-666f9c57f4-rl2g9 1/1 Running 1 25m
Cedric Morin
I had a similar error when I tried to re-install SMO after first installation (~/dep/smo/bin/uninstall && ~/dep/smo/bin/install).
The only solution I found was to delete the whole dep/ folder and restart the installation process from scratch. It worked, but since the ONAP helm charts had to be generated again, it took hours.
Robbie Williamson
How did it work? The ~/dep/smo/bin/install script has an invalid helm command on line 185, from what I can tell:
'helm deploy' isn't a supported command.
Cedric Morin
Indeed, deploy is not a vanilla helm command, it is a plugin developed by ONAP (https://wiki.onap.org/display/DW/OOM+Helm+%28un%29Deploy+plugins).
Also, it seems that the re-installation problem may come from the fact that install operations are performed directly after uninstall ones, while some modules may not be fully uninstalled yet.
You may try to add a sleep 20 just before the helm deploy command in the install script.
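That suggestion can be expressed as a small wrapper (a sketch; the helper name is made up, and the commented-out invocation assumes the OOM deploy plugin syntax):

```shell
#!/bin/sh
# Give the previous uninstall time to settle, then run the deploy command.
redeploy_after_delay() {
  delay="$1"; shift
  echo "waiting ${delay}s for the previous release to finish terminating"
  sleep "$delay"
  "$@"
}

# Inside the install script this would wrap the helm deploy call, e.g.:
#   redeploy_after_delay 20 helm deploy dev local/onap --namespace onap
redeploy_after_delay 1 echo "deploying now"
```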
Lao Alex
1. the tiller pod is in running state:
kube-system coredns-5d4dd4b4db-pvq4m 1/1 Running 1 6d21h
kube-system coredns-5d4dd4b4db-wwdsb 1/1 Running 1 6d21h
kube-system etcd-smo 1/1 Running 1 6d21h
kube-system kube-apiserver-smo 1/1 Running 2 6d21h
kube-system kube-controller-manager-smo 1/1 Running 14 6d21h
kube-system kube-flannel-ds-vrgwh 1/1 Running 1 6d21h
kube-system kube-proxy-cjhdn 1/1 Running 1 6d21h
kube-system kube-scheduler-smo 1/1 Running 15 6d21h
kube-system tiller-deploy-6658594489-h6zwt 1/1 Running 1 6d21h
2. helm server is running:
tcp 0 0 127.0.0.1:8879 0.0.0.0:* LISTEN 32431/helm
3."helm search onap" has 35 charts listed:
root@smo:/home/hongqun/oran/dep/smo/bin/smo-deploy/smo-oom# helm search onap | wc -l
35
4. When I run the install script, I got the following errors:
======> Deploying ONAP-lite
fetching local/onap
Error: chart "onap" matching version "" not found in local index. (try 'helm repo update'). no chart name found
mv: cannot stat '/root/.helm/plugins/deploy/cache/onap/charts/*': No such file or directory
mv: cannot stat '/root/.helm/plugins/deploy/cache/onap-subcharts/common': No such file or directory
rm: cannot remove '/root/.helm/plugins/deploy/cache/onap/requirements.lock': No such file or directory
mv: cannot stat '/root/.helm/plugins/deploy/cache/onap/requirements.yaml': No such file or directory
Error: no Chart.yaml exists in directory "/root/.helm/plugins/deploy/cache/onap"
release "dev" deployed
Could you give me some advices? Thank you.
Naga chetan K M
Hi Alex,
Can you try the "helm deploy" command? You might get "deploy plugin not found". If so, you need to run "helm plugin install ~/.helm/plugins/deploy". If the deploy plugin files are not found, you can download them to ~/.helm/plugins/ from
https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/helm/plugins;hb=refs/heads/master
and then try the install command again.
Thanks,
Naga chetan
Lao Alex
Hi Naga chetan K M:
Thank you. But the deploy and undeploy plugins are found; it is the charts that are not found in the cache folder.
root@smo:~/.helm/plugins# tree .
.
├── deploy
│ ├── cache
│ │ ├── onap
│ │ │ ├── computed-overrides.yaml
│ │ │ └── logs
│ │ │ ├── dev.log
│ │ │ └── dev.log.log
│ │ └── onap-subcharts
│ ├── deploy.sh
│ └── plugin.yaml
└── undeploy
├── plugin.yaml
└── undeploy.sh
6 directories, 7 files
Thanks.
Naga chetan K M
Hi Lao,
If you are not seeing charts then you can try the below procedure, this is already listed in comments,
make sure 'helm search onap' has 35 charts listed. If not, you can manually issue the 'make all' command:
dabs@smobronze:~/oran/dep/smo/bin/smo-deploy/smo-oom/kubernetes$ sudo make all
Thanks,
Naga chetan
Javi G
Hi, I think SMO didn't deploy correctly the first time, so the corresponding folders were not copied to the .helm folder. Try deleting the smo-deploy folder and installing again; that way the installation will start from scratch. It can take several hours.
Makarand Kulkarni
What I found in my case is that the version of helm caused the issue. In step 2, I needed to set the version to 2.16.6 in infra.rc and things work fine. This is evident from the name of the script "k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh" (h_2_16 = helm 2.16.x).
Chirag Gandhi
Step 3 was going wrong. Helm was not getting initialized due to proxy server. Installed certificates and got it working.
Thanks for the help
Anonymous
I ran into this too. How did you solve it?
Anonymous
Hello experts ,
I followed it step by step but I stuck by step 3 .
When I Run the script, k8s-1node-cloud-init-k ... .sh, I received the following message.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I don't know if I did something wrong or missed something I need to pay attention to.
How can I fix it? Thank you.
Tim.
Anonymous
I met exactly the same problem. Appreciate you for the help!
Anonymous
can you pls share full debug out when running the script?
Anonymous
The whole situation as you can see below.
++ eval 'kubectl get pods -n kube-system | grep "Running" | wc -l'
+++ wc -l
+++ grep Running
+++ kubectl get pods -n kube-system
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ NUMPODS=0
+ echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
+ '[' 0 -lt 8 ']'
+ sleep 5
Tim.
Anonymous
Hi,
do you have following info in the outputs? The 'localhost:8080 refused' problem generally relates to the KUBECONFIG configuration.
Anonymous
But there is no admin.conf under /etc/kubernetes/
Anonymous
suggest re-run the script and make sure 'kubeadm init' is successful:
Anonymous
'kubeadm init' has the following error:
I0818 01:09:09.191921 32088 version.go:248] remote version is much newer: v1.18.8; falling back to: stable-1.15
[init] Using Kubernetes version: v1.15.12
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Anonymous
disable swap and re-run the script.
Jensen Lin
Thank you for kind help! But after disabling swap and re-running the script, 'kubeadm init' has the new error:
I0819 12:54:05.234698 10134 version.go:251] remote version is much newer: v1.18.8; falling back to: stable-1.16
[init] Using Kubernetes version: v1.16.14
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.6. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10251]: Port 10251 is in use
[ERROR Port-10252]: Port 10252 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Anonymous
it seems that you already have an incomplete k8s cluster, try:
turn off swap and re-run the script, 'sudo ./k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh', which will perform 'kubeadm reset' to clear any existing k8s cluster.
Anonymous
Hi Jensen,
I am facing the exact same issue. I know that this is after a very long time, but did you figure out the solution for it? your help would be much appreciated.
Javi G
Check for errors in the script before the localhost error, in my case it has a docker version error, i had to change the version in the k8s-1node-cloud-init-k_1_15-h_2_17-d_19_03.sh file.
I have this versions in the file:
Also for this docker version you have to change the ubuntu version to "18.04.02" in the file, if not it won't find the file in the repository:
Anonymous
Hello there,
In step-2, when I execute the script, "gen-cloud-init.sh" after editing the 'infra.rc' file, it throws "cannot execute binary file" output as follows:
./gen-cloud-init.sh: line 39: source: /home/airvana/dep/tools/k8s/bin/../etc/.infra.rc.swo: cannot execute binary file
./gen-cloud-init.sh: line 39: source: /home/airvana/dep/tools/k8s/bin/../etc/.infra.rc.swl: cannot execute binary file
./gen-cloud-init.sh: line 39: source: /home/airvana/dep/tools/k8s/bin/../etc/.infra.rc.swp: cannot execute binary file
./gen-cloud-init.sh: line 39: source: /home/airvana/dep/tools/k8s/bin/../etc/.infra.rc.swn: cannot execute binary file
./gen-cloud-init.sh: line 39: source: /home/airvana/dep/tools/k8s/bin/../etc/.infra.rc.swm: cannot execute binary file
If I assume that is normal and move on to step-3 to execute "k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh", It throws the following errors:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
And then following,
+++ kubectl get pods -n kube-system
+++ grep Running
+++ wc -l
The connection to the server localhost:8080 was refused - did you specify the right host or port?
+ NUMPODS=0
+ echo '> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]'
> waiting for 0/8 pods running in namespace [kube-system] with keyword [Running]
+ '[' 0 -lt 8 ']'
I am a novice in these technologies, your help would be much appreciated.
Thank you.
Anonymous
Hi! I have a similar problem,
error execution phase preflight: docker is required for container runtime: exec: "docker": executable file not found in $PATH
+ cd /root
+ rm -rf .kube
+ mkdir -p .kube
+ cp -i /etc/kubernetes/admin.conf /root/.kube/config
cp: no se puede efectuar `stat' sobre '/etc/kubernetes/admin.conf': No existe el archivo o el directorio
+ chown root:root /root/.kube/config
chown: no se puede acceder a '/root/.kube/config': No existe el archivo o el directorio
+ export KUBECONFIG=/root/.kube/config
+ KUBECONFIG=/root/.kube/config
+ echo KUBECONFIG=/root/.kube/config
+ kubectl get pods --all-namespaces
The connection to the server localhost:8080 was refused - did you specify the right host or port?
if someone can help me, it would be incredible!
thanks!
Chirag Gandhi
Seems like docker is not getting installed.. Recently, I am facing an issue installing docker.io 18.09.7 over ubuntu 18.04.4... try changing docker version to empty ("") instead of 18.09 in infra.rc file.. It should run fine but not sure if you will face any other issues later on, as it is not tested
Anonymous
Please make sure your VM has more than 2 CPUs, otherwise docker.io will not be installed at all. I have tried Ubuntu 18.04.1 to 18.04.4; the Ubuntu version is not the problem. Going with the latest docker can help you get the 35 charts, but then you will find that ONAP-lite cannot reach operational state. I feel that we need to wait for an update...
Javi G
Use helm version 2.17.0, it doesn't give this error anymore.
You can change it in the k8s... file in this line:
echo "2.17.0" > /opt/config/helm_version.txt
or in the "infra.rc" file in step 2 and repeat the steps to generate the new k8s file.
Chirag Gandhi
Hi,
After deploying SMO, how can I open O1 dashboard? I tried opening http://localhost:8181/odlux/index.html but receiving "connection refused". Can anyone please help?
Daniel Camps Mur
Dear all,
I am getting the following error:
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Error: error initializing: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden
It seems some URLs may not be valid anymore?
Any help would be appreciated.
BR
Daniel
Daniel Camps Mur
This error can be fixed by using helm 2.17.0 as indicated by Javi G below.
BR
Daniel
Anonymous
Hi
Can any of us normal users (outside this PL team) really get this up and running?!
Full of errors, and even more errors after checking and applying this community's "answers".
Very disappointed. Sorry, but this is true!!!
Javi G
Yeah a lot of errors but finally got it working... Which errors do you have?
Anonymous
It's: "waiting for 0/8 pods running in namespaces.....
... sleep 5"
I think it has been asleep for weeks, not 5 secs.
Anonymous
Good Updates: After >10 times, I finally came up with this combination, and it worked for me:
Use "19.03.6" for docker:
echo "19.03.6" > /opt/config/docker_version.txt
Use "2.17.0" for helm.
### (Note: I entered 18.04.3 even though my Ubuntu Desktop is 18.04.2)
DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3"
Thank you Javi G.
I hope newcomers can get this working by using the combination above.
Pavan Gupta
Hi,
I am following deployment steps for SMO. Upon running the k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh script, I come across the following error. Kindly share the solution if anyone has found it.
E: Version '18.09.7-0ubuntu1~18.04.4' for 'docker.io' was not found
+ cat
./k8s-1node-cloud-init-k_1_15-h_2_16-d_18_09.sh: line 156: /etc/docker/daemon.json: No such file or directory
+ mkdir -p /etc/systemd/system/docker.service.d
+ systemctl enable docker.service
Failed to enable unit: Unit file docker.service does not exist.
Javi G
Change this in the k8s... file:
echo "19.03.6" > /opt/config/docker_version.txt
Also for this docker version you have to change the ubuntu version to "18.04.02" in the file, if not it won't find the file in the repository:
elif [[ ${UBUNTU_RELEASE} == 18.* ]]; then
    echo "Installing on Ubuntu $UBUNTU_RELEASE (Bionic Beaver)"
    if [ ! -z "${DOCKERV}" ]; then
        DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3"   # THIS IS THE LINE I MODIFIED
    fi
Not every docker version can be found in the repository, so I found that version 19.03.6 for Ubuntu 18.04.3 is available and works well. The 18.04.2 worked before but throws another error now.
Pavan Gupta
Many thanks that worked.
Pavan Gupta
Hello,
I have come across the following error while installing SMO. if anyone has a solution for it, kindly share the same.
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Error: error initializing: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden
+ helm init -c
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Error: error initializing: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Failed to fetch https://kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden
Regards,
Pavan
Travis Machacek
I ran into this issue as well and this command solved it:
Pavan Gupta
Hi Travis, I have tried your command '
helm init --stable-repo-url=https://charts.helm.sh/stable --client-only' but face the same issue. Is it required to rerun everything from the start?
Travis Machacek
I believe so, yes. If that command still does not work, try another of the commands mentioned in the link I posted. I'm assuming this happened after running the k8s script?
Pavan Gupta
Yes, this happened after running the k8s script. I am trying the second command mentioned in the link, will update.
Pavan Gupta
Hi Travis, did you run this cmd separately, or was the k8s script modified? If so, kindly share the change made in the k8s file.
Travis Machacek
It was run separately.
Pavan Gupta
Using helm version 2.17.0 worked. Thank you for your help.
Javi G
Try using helm version 2.17.0.
You can change it in the k8s... file in this line:
echo "2.17.0" > /opt/config/helm_version.txt
or in the "infra.rc" file in step 2 and repeat the steps to generate the new k8s file.
Pavan Gupta
Thank you Javi, that worked.
Daniel Camps Mur
Dear all,
After following the indicated steps (using helm 2.17.0 to avoid an error), the deployment of dev-sdnc fails and I get stuck here:
======> Deploying ONAP-lite
fetching local/onap
release "dev" deployed
release "dev-consul" deployed
release "dev-dmaap" deployed
release "dev-mariadb-galera" deployed
release "dev-msb" deployed
release "dev-sdnc" deployed
dev-sdnc 1 Sat Jan 9 12:18:06 2021 FAILED sdnc-6.0.0 onap
======> Waiting for ONAP-lite to reach operatoinal state
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
0/7 SDNC-SDNR pods and 0/7 Message Router pods running
...
Any help would be appreciated.
BR
Daniel
Nagachetan Km
Hi Daniel,
Please can you check whether the docker image is getting pulled or not (kubectl describe po <pod name> -n onap). I faced the same issue and saw the docker image was not getting pulled, because only 100 pulls every 6 hours are allowed for anonymous Docker Hub users, and k8s sometimes exceeds that many tries.
Thanks,
Naga chetan
Daniel Camps Mur
Hi Naga,
There seems to be an error when pulling the image, in fact for many of the ONAP services.
I tried restarting the installation process from scratch and it failed again at the same place. How did you manage to solve it?
This is what I get when I list the pods.
onap dev-consul-68d576d55c-t4cp9 1/1 Running 0 124m
onap dev-consul-server-0 1/1 Running 0 124m
onap dev-consul-server-1 1/1 Running 0 123m
onap dev-consul-server-2 1/1 Running 0 122m
onap dev-kube2msb-9fc58c48-qlw5q 0/1 Init:ErrImagePull 0 124m
onap dev-mariadb-galera-0 0/1 Init:ImagePullBackOff 0 124m
onap dev-message-router-0 0/1 Init:ImagePullBackOff 0 124m
onap dev-message-router-kafka-0 0/1 Init:ErrImagePull 0 124m
onap dev-message-router-kafka-1 0/1 Init:ImagePullBackOff 0 124m
onap dev-message-router-kafka-2 0/1 Init:ImagePullBackOff 0 124m
onap dev-message-router-zookeeper-0 0/1 Init:ImagePullBackOff 0 124m
onap dev-message-router-zookeeper-1 0/1 ImagePullBackOff 0 124m
onap dev-message-router-zookeeper-2 0/1 ErrImagePull 0 124m
onap dev-msb-consul-65b9697c8b-ldpqp 1/1 Running 0 124m
onap dev-msb-discovery-54b76c4898-2z7vp 0/2 ErrImagePull 0 124m
onap dev-msb-eag-76d4b9b9d7-l5m75 0/2 Init:ErrImagePull 0 124m
onap dev-msb-iag-65c59cb86b-lsjkl 0/2 Init:ErrImagePull 0 124m
onap dev-sdnc-0 0/2 Init:ErrImagePull 0 124m
onap dev-sdnc-db-0 0/1 Init:ImagePullBackOff 0 124m
onap dev-sdnc-dmaap-listener-5c77848759-r2fnp 0/1 Init:ImagePullBackOff 0 124m
onap dev-sdnc-sdnrdb-init-job-dmq6r 0/1 Init:Error 0 124m
onap dev-sdnc-sdnrdb-init-job-wlvqp 0/1 Init:ImagePullBackOff 0 102m
onap dev-sdnrdb-coordinating-only-9b9956fc-fzwbh 0/2 Init:ImagePullBackOff 0 124m
onap dev-sdnrdb-master-0 0/1 Init:ErrImagePull 0 124m
Output of kubectl describe po dev-sdnc-0 -n onap:
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 23m (x59 over 104m) kubelet, vagrant Back-off pulling image "oomk8s/readiness-check:2.2 .1"
Warning Failed 15m (x12 over 104m) kubelet, vagrant Error: ErrImagePull
Warning Failed 13m (x78 over 104m) kubelet, vagrant Error: ImagePullBackOff
Warning Failed 4m56s (x13 over 104m) kubelet, vagrant Failed to pull image "oomk8s/readiness-check:2.2.1 ": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io: Temporary failure in name resolution
BR
Daniel
Chris Ko
I had the exact same logs. Then I found out those 'release "xxx" deployed' releases were not deployed at all. You might want to check the deploy logs, which are located in ~/.helm/plugins/deploy/cache/*/logs . In my case, it complained about 'no matches for kind "StatefulSet" in version'. It was solved after I switched the K8S version to 1.15.9.
This is infra.rc works for me:
INFRA_DOCKER_VERSION=""
INFRA_HELM_VERSION="2.17.0"
INFRA_K8S_VERSION="1.15.9"
INFRA_CNI_VERSION="0.7.5"
I left docker version blank so I don't have to edit the script as mentioned above. 19.03.6 for ubuntu 18.04.3 is installed at this time.
Daniel Camps Mur
Thanks Chris. I tried a fresh installation with your infra.rc configuration but still got the same error.
BR
Daniel
Naga chetan K M
Hi Daniel,
The last error says it has name resolution failure, you can fix this by adding "nameserver 8.8.8.8" in /etc/resolv.conf file.
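In other words, the node's resolver configuration should list a reachable nameserver (a config sketch; on systems where /etc/resolv.conf is managed by systemd-resolved or NetworkManager, direct edits may be overwritten):

```
# /etc/resolv.conf
nameserver 8.8.8.8
```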
Thanks,
Naga chetan
Nagachetan Km
Hi All,
Is there a slack group for discussions ?
Thanks,
Naga chetan
Pavan Gupta
Hello,
SMO installation was successful. I have run it on AWS instance. Any idea how one can check out SMO functioning? I mean any curl command or web page that can be tried?
Pavan
Javi G
Hi, I'm not sure about that. What I did was to deploy a RIC in another VM and configure it to work with the nonrtric, like that you can know that both nonrtric and ric are working...
Gaurav jain
Hi Pavan,
What do you want to check in SMO? Normally we use DMAAP and VES in SMO; those can be curled and checked.
Anonymous
After Step 1 and before Step 2, change the following files:
dep/tools/k8s/etc/infra.rc
INFRA_DOCKER_VERSION="19.03.6"
INFRA_HELM_VERSION="2.17.0"
INFRA_K8S_VERSION="1.15.9"
INFRA_CNI_VERSION="0.7.5"
dep/tools/k8s/heat/scripts/k8s_vm_install.sh
...
elif [[ ${UBUNTU_RELEASE} == 18.* ]]; then
echo "Installing on Ubuntu $UBUNTU_RELEASE (Bionic Beaver)"
if [ ! -z "${DOCKERV}" ]; then
DOCKERVERSION="${DOCKERV}-0ubuntu1~18.04.3"
fi
...
dep/smo/bin/install
...
if [ "$1" == "initlocalrepo" ]; then
echo && echo "===> Initialize local Helm repo"
rm -rf ~/.helm #&& helm init -c # without this step helm serve may not work.
helm init --stable-repo-url=https://charts.helm.sh/stable --client-only
helm serve &
helm repo add local http://127.0.0.1:8879
fi
...
Then run Step #2, Step #3 and Step #4 normally.
Anonymous
OS was 18.04.5 LTS (Bionic Beaver)
Anonymous
Can anyone merge these changes to github
Anonymous
Hi,
I need help running dcaemod using SMO Lite. Can I know which other ONAP components should be enabled for running dcaemod? Are there dependencies on other ONAP modules?
Anonymous
I've started to deploy SMO with Ubuntu 20.04 desktop, but an error occurs when I try to install SMO.
Is this latest Ubuntu version not allowed?
Pavan Gupta
I have tried with Ubuntu 18.04 and it had worked. Try switching to this version or share the error it reports.
Anonymous
Hi everyone,
I executed this instruction: kubectl get pods --all-namespaces,
but one entry looks strange; I have tagged it in red below.
Could anyone help me solve this problem?
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5d4dd4b4db-82gnh 1/1 Running 2 128m
kube-system coredns-5d4dd4b4db-8mrqh 1/1 Running 2 128m
kube-system etcd-smo-virtualbox 1/1 Running 2 127m
kube-system kube-apiserver-smo-virtualbox 1/1 Running 2 127m
kube-system kube-controller-manager-smo-virtualbox 1/1 Running 2 127m
kube-system kube-flannel-ds-9xr8x 1/1 Running 2 128m
kube-system kube-proxy-lwmnh 1/1 Running 2 128m
kube-system kube-scheduler-smo-virtualbox 1/1 Running 2 127m
kube-system tiller-deploy-7c54c6988b-jgrqg 1/1 Running 2 127m
nonrtric a1-sim-osc-0 1/1 Running 0 16m
nonrtric a1-sim-osc-1 1/1 Running 0 4m47s
nonrtric a1-sim-std-0 1/1 Running 0 16m
nonrtric a1-sim-std-1 1/1 Running 0 5m6s
nonrtric a1-sim-std2-0 1/1 Running 0 16m
nonrtric a1-sim-std2-1 1/1 Running 0 15m
nonrtric a1controller-64c4f59fb5-tdllp 1/1 Running 0 16m
nonrtric controlpanel-78f957844b-nb5qc 1/1 Running 0 16m
nonrtric db-549ff9b4d5-58rmc 1/1 Running 0 16m
nonrtric enrichmentservice-7559b45fd-2wb7p 1/1 Running 0 16m
nonrtric nonrtricgateway-6478f59b66-wjjkz 1/1 Running 0 16m
nonrtric policymanagementservice-79f8f76d8f-ch46v 1/1 Running 0 16m
nonrtric rappcatalogueservice-5945c4d84b-7w7cc 1/1 Running 0 16m
onap dev-consul-68d576d55c-l2n2q 1/1 Running 0 53m
onap dev-consul-server-0 1/1 Running 0 53m
onap dev-consul-server-1 1/1 Running 0 52m
onap dev-consul-server-2 1/1 Running 0 52m
onap dev-kube2msb-9fc58c48-s5g98 1/1 Running 0 52m
onap dev-mariadb-galera-0 1/1 Running 0 53m
onap dev-mariadb-galera-1 1/1 Running 0 46m
onap dev-mariadb-galera-2 1/1 Running 0 39m
onap dev-message-router-0 1/1 Running 0 53m
onap dev-message-router-kafka-0 1/1 Running 0 53m
onap dev-message-router-kafka-1 1/1 Running 0 53m
onap dev-message-router-kafka-2 1/1 Running 0 53m
onap dev-message-router-zookeeper-0 1/1 Running 0 53m
onap dev-message-router-zookeeper-1 1/1 Running 0 53m
onap dev-message-router-zookeeper-2 1/1 Running 0 53m
onap dev-msb-consul-65b9697c8b-6hhck 1/1 Running 0 52m
onap dev-msb-discovery-54b76c4898-xz7rq 2/2 Running 0 52m
onap dev-msb-eag-76d4b9b9d7-jfms9 2/2 Running 0 52m
onap dev-msb-iag-65c59cb86b-864jr 2/2 Running 0 52m
onap dev-sdnc-0 2/2 Running 0 52m
onap dev-sdnc-db-0 1/1 Running 0 52m
onap dev-sdnc-dmaap-listener-5c77848759-svmzg 1/1 Running 0 52m
onap dev-sdnc-sdnrdb-init-job-d6rjm 0/1 Completed 0 41m
onap dev-sdnc-sdnrdb-init-job-qkdl8 0/1 Init:Error 0 52m
onap dev-sdnrdb-coordinating-only-9b9956fc-zjqkg 2/2 Running 0 52m
onap dev-sdnrdb-master-0 1/1 Running 0 52m
onap dev-sdnrdb-master-1 1/1 Running 0 40m
onap dev-sdnrdb-master-2 1/1 Running 0 38m
ricaux bronze-infra-kong-68657d8dfd-d5bb4 2/2 Running 1 4m57s
ricaux deployment-ricaux-ves-65db844758-td2zc 1/1 Running 0 4m47s
BR,
Tim
Anonymous
There seems to be a mismatch in the VM minimum requirements on this page and the PDF linked in that section. The RAM required is mentioned as 32 GB and the CPU cores as 8; however, the "Getting Started PDF" mentions the required RAM as 16 GB and CPU cores as 4. So how much minimum RAM and CPU cores does the SMO VM actually require? Also, if these are the minimum requirements, what are the standard or recommended VM requirements? I suspect that the Getting Started PDF for Near Realtime RIC installation has been mistakenly linked here. Could someone confirm this?
Thanks,
Vikas Krishnan