Created by Ken Kristiansen, last modified by Lusheng Ji on Jun 22, 2020
VM Minimum Requirements for RIC
NOTE: sudo access is required for installation. (See also: the Getting Started PDF.)
Step 1: Obtaining the Deployment Scripts and Charts
Run:
$ sudo -i
$ git clone http://gerrit.o-ran-sc.org/r/it/dep -b bronze
$ cd dep
Step 2: Generation of cloud-init Script
Run:
$ cd tools/k8s/bin
NOTE: The generated script will be used for preparing the Kubernetes cluster for RIC deployment (k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh).
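The generation command itself is not shown above; a minimal sketch, assuming the generator in tools/k8s/bin is the gen-cloud-init.sh script shipped in the it/dep repository:
$ ./gen-cloud-init.sh
# the generated script, k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh, is written to the current directory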
Step 3: Installation of Kubernetes, Helm, Docker, etc.
Run:
$ ./k8s-1node-cloud-init-k_1_16-h_2_12-d_cur.sh
NOTE: Be patient, as this takes some time to complete. Upon completion of this script, the VM will be rebooted. You will then need to log in to the VM and run sudo once again:
$ sudo -i
$ kubectl get pods --all-namespaces # There should be 9 pods running in the kube-system namespace.
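To count the kube-system pods rather than eyeball the listing, a quick check using standard kubectl flags:
$ kubectl get pods -n kube-system --no-headers | wc -l # expect 9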
Step 4: Deploy RIC using Recipe
Run:
$ cd dep/bin
$ ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml
$ kubectl get pods -n ricplt # There should be ~16 pods running in the ricplt namespace.
Step 5: Onboarding a Test xApp (HelloWorld xApp)
NOTE: If using an Ubuntu version earlier than 18.04, this section will fail!
Run:
$ cd dep
# Start the on-boarding process...
$ curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"
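The curl command above expects an onboard.hw.url descriptor file in dep; a sketch of creating it first, assuming the HelloWorld config-file URL used by the Bronze documentation:
$ echo '{"config-file.json_url": "https://gerrit.o-ran-sc.org/r/gitweb?p=ric-app/hw.git;a=blob_plain;f=init/config-file.json;hb=HEAD"}' > onboard.hw.url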
Step 6: Deploy Test xApp (HelloWorld xApp)
Run:
# Verify xApp is not running... This may take a minute, so refresh the command below.
$ kubectl get pods -n ricxapp
# Call xApp Manager to deploy HelloWorld xApp...
$ curl --location --request POST "http://$(hostname):32080/appmgr/ric/v1/xapps" --header 'Content-Type: application/json' --data-raw '{"xappName": "hwxapp"}'
# Verify xApp is running...
$ kubectl get pods -n ricxapp
# View logs...
$ kubectl logs -n ricxapp <name of POD retrieved from statement above>
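To avoid copying the pod name by hand when viewing logs, a small sketch (the hwxapp name prefix matches the pod names shown in the comments below):
$ POD=$(kubectl get pods -n ricxapp --no-headers | grep hwxapp | awk '{print $1}')
$ kubectl logs -n ricxapp $POD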
Helpful Hints
Kubectl commands:
$ kubectl get pods -n <namespace> # gets a list of pods running in the namespace
$ kubectl logs -n <namespace> <name_of_running_pod> # gets the logs of a running pod
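When a pod will not come up, these standard kubectl commands usually pinpoint the cause:
$ kubectl describe pod -n <namespace> <name_of_pod> # shows events, e.g. image pull failures
$ kubectl logs -n <namespace> <name_of_pod> -f # follows the pod's log output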
32 Comments
Gopalasingham Aravinthan
Thanks for the guidelines. It works well.
Zhengwei Gao
Hi experts,
after deployment, I have only 15 pods in the ricplt namespace; is it normal?
The "Step 4" says: "kubectl get pods -n ricplt # There should be ~16 pods running in the ricplt namespace.".
(18:11 dabs@ricpltbronze bin) > sudo kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5644d7b6d9-clvtf 1/1 Running 5 32h
kube-system coredns-5644d7b6d9-jwxnm 1/1 Running 5 32h
kube-system etcd-ricpltbronze 1/1 Running 11 32h
kube-system kube-apiserver-ricpltbronze 1/1 Running 28 32h
kube-system kube-controller-manager-ricpltbronze 1/1 Running 9 32h
kube-system kube-flannel-ds-amd64-mrwn2 1/1 Running 16 32h
kube-system kube-proxy-zrtl8 1/1 Running 6 32h
kube-system kube-scheduler-ricpltbronze 1/1 Running 8 32h
kube-system tiller-deploy-68bf6dff8f-wbmwl 1/1 Running 4 32h
ricinfra deployment-tiller-ricxapp-d4f98ff65-6h4n4 1/1 Running 0 3h13m
ricinfra tiller-secret-generator-tgkzf 0/1 Completed 0 132m
ricinfra tiller-secret-generator-zcx72 0/1 Error 0 3h13m
ricplt deployment-ricplt-a1mediator-66fcf76c66-h6rp2 1/1 Running 0 40m
ricplt deployment-ricplt-alarmadapter-64d559f769-glb5z 1/1 Running 0 30m
ricplt deployment-ricplt-appmgr-6fd6664755-2mxjb 1/1 Running 0 42m
ricplt deployment-ricplt-e2mgr-8479fb5ff8-9zqbp 1/1 Running 0 41m
ricplt deployment-ricplt-e2term-alpha-bcb457df4-4dz62 1/1 Running 0 40m
ricplt deployment-ricplt-jaegeradapter-84558d855b-tmqqb 1/1 Running 0 39m
ricplt deployment-ricplt-o1mediator-d8b9fcdf-f4sgm 1/1 Running 0 34m
ricplt deployment-ricplt-rtmgr-9d4847788-kf6r4 1/1 Running 10 41m
ricplt deployment-ricplt-submgr-65dc9f4995-gt5kb 1/1 Running 0 40m
ricplt deployment-ricplt-vespamgr-7458d9b5d-klh9l 1/1 Running 0 39m
ricplt deployment-ricplt-xapp-onboarder-546b86b5c4-xkcpt 2/2 Running 0 42m
ricplt r4-infrastructure-kong-6c7f6db759-7xjqm 2/2 Running 21 3h13m
ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-jfkdg 2/2 Running 2 3h13m
ricplt r4-infrastructure-prometheus-server-5fd7695-pprg2 1/1 Running 2 3h13m
ricplt statefulset-ricplt-dbaas-server-0 1/1 Running 0 43m
Anonymous
After deployment I'm also getting only 15 pods running. Is it normal or do I need to worry about that?
Zhengwei Gao
It's normal, and you are good to go. Just try policy management.
Chirag Gandhi
Hi,
Even though the RIC platform deployment did not give any errors, most of the ricplt and ricinfra pods remain in an error state.
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5644d7b6d9-dz774 1/1 Running 1 10h
kube-system coredns-5644d7b6d9-fp586 1/1 Running 1 10h
kube-system etcd-ggnlabvm-bng35 1/1 Running 1 10h
kube-system kube-apiserver-ggnlabvm-bng35 1/1 Running 1 10h
kube-system kube-controller-manager-ggnlabvm-bng35 1/1 Running 1 10h
kube-system kube-flannel-ds-amd64-b4l97 1/1 Running 1 10h
kube-system kube-proxy-fxfrk 1/1 Running 1 10h
kube-system kube-scheduler-ggnlabvm-bng35 1/1 Running 1 10h
kube-system tiller-deploy-68bf6dff8f-jvtk7 1/1 Running 1 10h
ricinfra deployment-tiller-ricxapp-d4f98ff65-hcnqf 0/1 ContainerCreating 0 9h
ricinfra tiller-secret-generator-d7kmk 0/1 ImagePullBackOff 0 9h
ricplt deployment-ricplt-a1mediator-66fcf76c66-l2z7m 0/1 ImagePullBackOff 0 9h
ricplt deployment-ricplt-alarmadapter-64d559f769-7d8fq 0/1 ImagePullBackOff 0 9h
ricplt deployment-ricplt-appmgr-6fd6664755-7lp8q 0/1 Init:ImagePullBackOff 0 9h
ricplt deployment-ricplt-e2mgr-8479fb5ff8-ggnx8 0/1 ErrImagePull 0 9h
ricplt deployment-ricplt-e2term-alpha-bcb457df4-dkdbc 0/1 ImagePullBackOff 0 9h
ricplt deployment-ricplt-jaegeradapter-84558d855b-bpzcv 1/1 Running 0 9h
ricplt deployment-ricplt-o1mediator-d8b9fcdf-5ptcs 0/1 ImagePullBackOff 0 9h
ricplt deployment-ricplt-rtmgr-9d4847788-rvnrx 0/1 ImagePullBackOff 0 9h
ricplt deployment-ricplt-submgr-65dc9f4995-cbhvc 0/1 ImagePullBackOff 0 9h
ricplt deployment-ricplt-vespamgr-7458d9b5d-bkzpg 0/1 ImagePullBackOff 0 9h
ricplt deployment-ricplt-xapp-onboarder-546b86b5c4-g2dnt 1/2 ImagePullBackOff 0 9h
ricplt r4-infrastructure-kong-6c7f6db759-4czbj 2/2 Running 2 9h
ricplt r4-infrastructure-prometheus-alertmanager-75dff54776-gwxzz 2/2 Running 0 9h
ricplt r4-infrastructure-prometheus-server-5fd7695-phcqs 1/1 Running 0 9h
ricplt statefulset-ricplt-dbaas-server-0 0/1 ImagePullBackOff 0 9h
any clue?
Anonymous
The ImagePullBackOff and ErrImagePull errors are all for container images built from O-RAN SC code. It appears that there is a problem with the docker engine in your installation fetching images from the O-RAN SC docker registry. Oftentimes this is due to a local firewall blocking such connections.
You may want to try: docker pull nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1:2.1.9 to see if your docker engine can retrieve this image.
Chirag Gandhi
Thanks for the information.
I think you are right; even docker pull nexus3.o-ran-sc.org:10002/o-ran-sc/ric-plt-a1:2.1.9 is giving a connection refused error.
Although I am asking about disabling the corporate firewall, the strange thing is that docker pull hello-world and other such pulls work fine.
Lusheng Ji
"Connection refused" error does suggest network connection problem.
These O-RAN SC docker registries use port 10001 ~ 10004, in particular 10002 for all released docker images. They are not the default docker registry of 5000. It is possible that your local firewall has a rule allowing outgoing connections into port 5000, but not these ports.
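A quick reachability test for the release registry port, assuming curl is available (any HTTP response, even a 401, means the port is open; a timeout or refusal points at the firewall):
$ curl -v https://nexus3.o-ran-sc.org:10002/v2/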
Chirag Gandhi
Thanks for the explanation. You are right, these particular ports were being blocked by the firewall; I got it working now.
Currently I am stuck on opening the O1 dashboard, trying it on the SMO machine.
Anonymous
Hi all,
after executing the below POST command, I am not getting any response.
# Start on-boarding process...
$ curl --location --request POST "http://$(hostname):32080/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"
Any clues/suggestions?
Zhengwei Gao
It's probably because port 32080 is already occupied by kube-proxy.
You can either try using the direct IP address, or use port-forwarding as a workaround. Please refer to my post: https://blog.csdn.net/jeffyko/article/details/107426626
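A minimal sketch of the port-forwarding workaround (the kong pod name will differ per deployment; keep the command running while you issue the curl requests, as noted in a later comment):
$ kubectl port-forward -n ricplt <r4-infrastructure-kong-proxy-pod> 32088:32080
$ curl --location --request POST "http://localhost:32088/onboard/api/v1/onboard/download" --header 'Content-Type: application/json' --data-binary "@./onboard.hw.url"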
Anonymous
Thanks a lot, that resolved the issue and we are able to proceed further.
After executing the below command we are seeing the below logs. Is this the correct behavior? If this is an error, how can we resolve it?
$ kubectl logs -n ricxapp <name of POD retrieved from statement above>
Error from server (BadRequest): container "hwxapp" in pod "ricxapp-hwxapp-684d8d675b-xz99d" is waiting to start: ContainerCreating
Zhengwei Gao
Make sure the hwxapp pod is running. It needs to pull the 'hwxapp' docker image, which may take some time.
kubectl get pods -n ricxapp | grep hwxapp
Anonymous
Hi Zhengwei, I have tried the solution in your blog, but I still receive the same error: AttributeError: 'ConnectionError' object has no attribute 'message'. Do you have any clues? Thanks.
Luca Lodigiani
EDIT: Never mind, silly me. For anyone wondering, the port-forwarding command needs to keep running for the forwarding to stay active. So I just run it in a screen tab, keep it running there, and run the curl commands in a different tab.
Hi, I've tried this workaround but the port-forwarding command just hangs and never completes. Anyone experiencing the same issue? The kube cluster and pods seem healthy:
root@ubuntu1804:~# kubectl -v=4 port-forward r4-infrastructure-kong-6c7f6db759-kkjtt 32088:32080 -n ricplt
Forwarding from 127.0.0.1:32088 -> 32080
(hangs after the previous message)
Anonymous
In my case, the script only works with Helm 2.x. How about others?
Sandeep Shah
THANK You! This is great!
Anonymous
Hi all,
I used the source code to build the hw_unit_test container, and when I execute hw_unit_test it stops and returns "SHAREDDATALAYER_ABORT, file src/redis/asynchirediscommanddispatcher.cpp, line 206: Required Redis module extension commands not available.
Aborted (core dumped)".
Thus, I can't test my hwxapp.
Any clues/suggestions? Thanks!
Anonymous
Hi all,
There is a pod crash every time. When I run the following command, I get one crashed pod. Did you meet the same case? Thanks so much for any suggestions.
root@chuanhao-HP-EliteDesk-800-G4-SFF:~/dep/bin# kubectl get pods -n ricplt
NAME READY STATUS RESTARTS AGE
deployment-ricplt-a1mediator-66fcf76c66-f6kbh 1/1 Running 1 2m16s
deployment-ricplt-alarmadapter-64d559f769-twfk7 1/1 Running 0 46s
deployment-ricplt-appmgr-6fd6664755-7rs4g 1/1 Running 0 3m49s
deployment-ricplt-e2mgr-8479fb5ff8-j9nzf 1/1 Running 0 3m
deployment-ricplt-e2term-alpha-bcb457df4-r22nb 1/1 Running 0 2m39s
deployment-ricplt-jaegeradapter-84558d855b-xfgd5 1/1 Running 0 78s
deployment-ricplt-o1mediator-d8b9fcdf-tpz7v 1/1 Running 0 64s
deployment-ricplt-rtmgr-9d4847788-scrxf 1/1 Running 1 3m26s
deployment-ricplt-submgr-65dc9f4995-knzjd 1/1 Running 0 113s
deployment-ricplt-vespamgr-7458d9b5d-mdmjx 1/1 Running 0 96s
deployment-ricplt-xapp-onboarder-546b86b5c4-z2qd6 2/2 Running 0 4m16s
r4-infrastructure-kong-6c7f6db759-44wdx 1/2 CrashLoopBackOff 6 4m52s
r4-infrastructure-prometheus-alertmanager-75dff54776-qlp4g 2/2 Running 0 4m52s
r4-infrastructure-prometheus-server-5fd7695-lr6z7 1/1 Running 0 4m52s
statefulset-ricplt-dbaas-server-0 1/1 Running 0 4m33s
Anand Kumar
Hi All,
I am trying to deploy RIC, and at step 4, while running the command "./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml", I am getting the below error:
root@ubuntu:~/dep/bin# ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml
Error: unknown command "home" for "helm"
Run 'helm --help' for usage.
Successfully packaged chart and saved it to: /tmp/ric-common-3.3.2.tgz
cp: cannot create regular file '/repository/local/': No such file or directory
Successfully packaged chart and saved it to: /tmp/aux-common-3.0.0.tgz
cp: cannot create regular file '/repository/local/': No such file or directory
Successfully packaged chart and saved it to: /tmp/nonrtric-common-2.0.0.tgz
cp: cannot create regular file '/repository/local/': No such file or directory
Error: open /repository/local/index.yaml822716896: no such file or directory
Error: no repositories configured
Error: looks like "http://127.0.0.1:8879/charts" is not a valid chart repository or cannot be reached: Get "http://127.0.0.1:8879/charts/index.yaml": dial tcp 127.0.0.1:8879: connect: connection refused.
I am unable to resolve this issue. Could anyone please help to resolve it?
Thanks,
Javi G
I had a different error, but maybe this helps: I updated the helm version to 2.17.0 in the example_recipe.yaml file.
The error I had was: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached:
sudipta singha
Try this, it might help:
$ helm init --client-only --skip-refresh
$ helm repo rm stable
$ helm repo add stable https://charts.helm.sh/stable
Anonymous
Problem still exists
Max Erren
Hey all,
I am trying to deploy the RIC in its Cherry release version,
but I am facing some issues:
a)
The r4-alarmadapter isn't getting released.
b)
The pods aren't reaching their Running state. This might be connected to issue a), but I am not sure.
kubectl get pods --all-namespaces:
Anand Kumar
Hi All,
I am trying to install the rmr_nng library as a requirement of the ric-app-kpimon xapp.
https://github.com/o-ran-sc/ric-plt-lib-rmr
https://github.com/nanomsg/nng
but I am getting the below error (snippet):
-- Installing: /root/ric-plt-lib-rmr/.build/lib/cmake/nng/nng-config-version.cmake
-- Installing: /root/ric-plt-lib-rmr/.build/bin/nngcat
-- Set runtime path of "/root/ric-plt-lib-rmr/.build/bin/nngcat" to "/root/ric-plt-lib-rmr/.build/lib"
[ 40%] No test step for 'ext_nng'
[ 41%] Completed 'ext_nng'
[ 41%] Built target ext_nng
Scanning dependencies of target nng_objects
[ 43%] Building C object src/rmr/nng/CMakeFiles/nng_objects.dir/src/rmr_nng.c.o
In file included from /root/ric-plt-lib-rmr/src/rmr/nng/src/rmr_nng.c:70:0:
/root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c: In function ‘roll_tables’:
/root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c:406:28: error: ‘uta_ctx_t {aka struct uta_ctx}’ has no member named ‘rtgate’; did you mean ‘rtable’?
pthread_mutex_lock( ctx->rtgate ); // must hold lock to move to active
^~~~~~
rtable
/root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c:409:30: error: ‘uta_ctx_t {aka struct uta_ctx}’ has no member named ‘rtgate’; did you mean ‘rtable’?
pthread_mutex_unlock( ctx->rtgate );
^~~~~~
rtable
/root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c: In function ‘parse_rt_rec’:
/root/ric-plt-lib-rmr/src/rmr/nng/../common/src/rt_generic_static.c:858:12: error: ‘uta_ctx_t {aka struct uta_ctx}’ has no member named ‘rtable_ready’; did you mean ‘rmr_ready’?
ctx->rtable_ready = 1; // route based sends can now happen
^~~~~~~~~~~~
rmr_ready
...
...
I am unable to resolve this issue. Could anyone please help to resolve it?
Thanks,
Luca Lodigiani
Hi All,
I am trying to get this up and running (first time ever).
For anyone interested, just wanted to highlight that if Step 4 fails with the following:
You just need to initialize Helm repositories with the following (disregard the suggestion in the above error output, as it has been deprecated since Nov 2020):
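Presumably, based on the repo-initialization fix quoted in an earlier comment, something like:
$ helm init --client-only --skip-refresh
$ helm repo add stable https://charts.helm.sh/stable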
and then run Step 4.
Anonymous
Hi,
We are trying to compile the pods on Arm and test them on an Arm-based platform. We are failing for xapp-onboarder, ric-plt-e, ric-plt-alarmadapter. Has anyone tried running the O-RAN RIC on Arm? Could someone point us to the recipe to build the images from source?
Kiel Friedt
Hi,
I'm in the process of porting the near-realtime RIC to ARM. The process has been straightforward up until now. Does anyone have instructions on how all the individual components are built, or where they can be sourced outside of the prebuilt images? I have been able to build many items needed to complete the build, but am still having issues.
Please keep in mind I'm not sure if I built these all correctly, since I have not found any instructions. The latest outputs of my effort can be seen here:
Any help would be great. My end goal is to push these changes upstream and to provide CI/CD for ARM images.
Kiel Friedt
I have been able to get all services up except for e2term-alpha; I am getting an error:
I'm not sure what to do next.
Daniel Camps Mur
Hi all,
I deployed the RIC and the SMO in two different VMs and I cannot manage to establish a connection between them to validate the A1 workflows. I suspect the reason is that the RIC VM is not exposing its services to the outside world. Could anyone help me address this issue? For example, if you have a setup like mine, I would appreciate it if you could share the "example_recipe.yaml" you used to deploy the RIC platform.
I paste here the output of the services running in the ricplt namespace, where all services say EXTERNAL-IP = <none>. Please let me know if this is the expected behavior.
# kubectl get services -n ricplt
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
aux-entry ClusterIP 10.99.120.87 <none> 80/TCP,443/TCP 3d6h
r4-infrastructure-kong-proxy NodePort 10.100.225.53 <none> 32080:32080/TCP,32443:32443/TCP 3d6h
r4-infrastructure-prometheus-alertmanager ClusterIP 10.105.181.117 <none> 80/TCP 3d6h
r4-infrastructure-prometheus-server ClusterIP 10.108.80.71 <none> 80/TCP 3d6h
service-ricplt-a1mediator-http ClusterIP 10.111.176.180 <none> 10000/TCP 3d6h
service-ricplt-a1mediator-rmr ClusterIP 10.105.1.73 <none> 4561/TCP,4562/TCP 3d6h
service-ricplt-alarmadapter-http ClusterIP 10.104.93.39 <none> 8080/TCP 3d6h
service-ricplt-alarmadapter-rmr ClusterIP 10.99.249.253 <none> 4560/TCP,4561/TCP 3d6h
service-ricplt-appmgr-http ClusterIP 10.101.144.152 <none> 8080/TCP 3d6h
service-ricplt-appmgr-rmr ClusterIP 10.110.71.188 <none> 4561/TCP,4560/TCP 3d6h
service-ricplt-dbaas-tcp ClusterIP None <none> 6379/TCP 3d6h
service-ricplt-e2mgr-http ClusterIP 10.103.232.14 <none> 3800/TCP 3d6h
service-ricplt-e2mgr-rmr ClusterIP 10.105.41.119 <none> 4561/TCP,3801/TCP 3d6h
service-ricplt-e2term-rmr-alpha ClusterIP 10.110.201.91 <none> 4561/TCP,38000/TCP 3d6h
service-ricplt-e2term-sctp-alpha NodePort 10.106.243.219 <none> 36422:32222/SCTP 3d6h
service-ricplt-jaegeradapter-agent ClusterIP 10.102.5.160 <none> 5775/UDP,6831/UDP,6832/UDP 3d6h
service-ricplt-jaegeradapter-collector ClusterIP 10.99.78.42 <none> 14267/TCP,14268/TCP,9411/TCP 3d6h
service-ricplt-jaegeradapter-query ClusterIP 10.100.166.185 <none> 16686/TCP 3d6h
service-ricplt-o1mediator-http ClusterIP 10.101.145.78 <none> 9001/TCP,8080/TCP,3000/TCP 3d6h
service-ricplt-o1mediator-tcp-netconf NodePort 10.110.230.66 <none> 830:30830/TCP 3d6h
service-ricplt-rtmgr-http ClusterIP 10.104.100.69 <none> 3800/TCP 3d6h
service-ricplt-rtmgr-rmr ClusterIP 10.98.40.180 <none> 4561/TCP,4560/TCP 3d6h
service-ricplt-submgr-http ClusterIP None <none> 3800/TCP 3d6h
service-ricplt-submgr-rmr ClusterIP None <none> 4560/TCP,4561/TCP 3d6h
service-ricplt-vespamgr-http ClusterIP 10.102.12.213 <none> 8080/TCP,9095/TCP 3d6h
service-ricplt-xapp-onboarder-http ClusterIP 10.107.167.33 <none> 8888/TCP,8080/TCP 3d6h
This is the detail of the kong service in the RIC VM:
# kubectl describe service r4-infrastructure-kong-proxy -n ricplt
Name: r4-infrastructure-kong-proxy
Namespace: ricplt
Labels: app.kubernetes.io/instance=r4-infrastructure
app.kubernetes.io/managed-by=Tiller
app.kubernetes.io/name=kong
app.kubernetes.io/version=1.4
helm.sh/chart=kong-0.36.6
Annotations: <none>
Selector: app.kubernetes.io/component=app,app.kubernetes.io/instance=r4-infrastructure,app.kubernetes.io/name=kong
Type: NodePort
IP: 10.100.225.53
Port: kong-proxy 32080/TCP
TargetPort: 32080/TCP
NodePort: kong-proxy 32080/TCP
Endpoints: 10.244.0.26:32080
Port: kong-proxy-tls 32443/TCP
TargetPort: 32443/TCP
NodePort: kong-proxy-tls 32443/TCP
Endpoints: 10.244.0.26:32443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
BR
Daniel
James Li
Deploy RIC using Recipe failed:
When running './deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe.yaml' it failed with the following message:
Successfully packaged chart and saved it to: /tmp/ric-common-3.3.2.tgz
Successfully packaged chart and saved it to: /tmp/aux-common-3.0.0.tgz
Successfully packaged chart and saved it to: /tmp/nonrtric-common-2.0.0.tgz
Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
Error: Couldn't load repositories file (/root/.helm/repository/repositories.yaml).
You might need to run `helm init` (or `helm init --client-only` if tiller is already installed)
****************************************************************************************************************
ERROR
****************************************************************************************************************
Can't locate the ric-common helm package in the local repo. Please make sure that it is properly installed.
****************************************************************************************************************
Running `helm init` as suggested doesn't help. Any idea?