...
Once the kubernetes-participant is set up, the tosca template can be commissioned. After that, the control loop can be instantiated using the steps described in the sub-section Commission/Instantiate control loop via GUI. Once the control loop is in the RUNNING state, check that all three micro-services have been created in the nonrtric namespace.
...
- The first step is to clone the nonrtric repo and start the DmaaP message-router. Then, two topics are created in the message-router: POLICY-CLRUNTIME-PARTICIPANT (used by the controlloop-runtime component of policy/clamp) and unauthenticated.SEC_FAULT_OUTPUT (for handling fault notification events).
...
```bash
cd nonrtric/docker-compose/docker-compose-policy-framework
docker-compose down
cd nonrtric/test/usecases/oruclosedlooprecovery/apexpolicyversion/LinkMonitor/docker-compose-controlloop
docker-compose down
docker stop sdnr-sim
docker rm sdnr-sim
docker volume rm docker-compose-policy-framework_db-vol
```
b) Control loop for script version
This sub-section describes the steps for running the control loop for the script version of the usecase using docker. This version of the control loop brings up four micro-services in the nonrtric namespace: oru-app (running the actual logic of the usecase), message-generator (sending the LinkFailure messages at random intervals), sdnr-simulator (receiving the REST calls made by oru-app), and dmaap-mr (a message-router stub to which the LinkFailure messages are sent).
NOTE: The instructions below bring up the micro-services in a minikube cluster on the host machine; it is assumed that minikube is already up and running. Adapt the instructions accordingly when using a different environment.
- The first step is to clone the nonrtric repo and start the DmaaP message-router. Then, a topic named POLICY-CLRUNTIME-PARTICIPANT is created in the message-router (to be used by controlloop-runtime component of policy/clamp).
```bash
git clone "https://gerrit.o-ran-sc.org/r/nonrtric"
cd nonrtric
git checkout e-release --track origin/e-release
cd test/auto-test
./startMR.sh remote docker --env-file ../common/test_env-oran-e-release.sh
docker rename message-router onap-dmaap
curl -X POST -H "Content-Type: application/json" -d '{"topicName": "POLICY-CLRUNTIME-PARTICIPANT"}' http://localhost:3904/events/POLICY-CLRUNTIME-PARTICIPANT
```
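Shell quoting is easy to get wrong in the topic-creation call: the JSON body must reach the message-router intact, so the payload should be single-quoted. A quick local check of the body (no message-router needed):

```shell
# Single quotes preserve the inner double quotes; this is the body the
# -d option of the curl call above should carry.
PAYLOAD='{"topicName": "POLICY-CLRUNTIME-PARTICIPANT"}'
echo "$PAYLOAD"
# The key must still be double-quoted after the shell has processed it:
case "$PAYLOAD" in
  *'"topicName"'*) echo "payload-ok" ;;
  *) echo "payload-broken" ;;
esac
```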
- Build a docker image for each of the four micro-services and make it available for use inside the minikube. Open a new terminal window (keep it separate and do not run any other commands except the ones given below) and run the following commands:
```bash
eval $(minikube docker-env)
cd nonrtric/test/usecases/oruclosedlooprecovery/scriptversion/app
docker build -t oru-app .
cd ../simulators
docker build -f Dockerfile-sdnr-sim -t sdnr-simulator .
docker build -f Dockerfile-message-generator -t message-generator:v2 .
cd ../../../../mrstub
docker build -t mrstub .
docker images
```
Make sure that all four docker images have been successfully created.
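As a sketch of that check, the image list is captured into a variable here so the loop can be tried anywhere; against the real daemon you would feed it `docker images --format '{{.Repository}}'` directly:

```shell
# Hypothetical captured output of: docker images --format '{{.Repository}}'
IMAGES="oru-app
sdnr-simulator
message-generator
mrstub"
# Verify each expected image name appears as an exact line in the list
for img in oru-app sdnr-simulator message-generator mrstub; do
  if echo "$IMAGES" | grep -qx "$img"; then
    echo "$img: ok"
  else
    echo "$img: MISSING"
  fi
done
```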
- The next step is to prepare the kube config file of minikube for mounting inside the k8s-participant component of policy/clamp. First, copy the kube config file into the config directory used by the docker-compose file that runs k8s-participant.
```bash
cd nonrtric/test/usecases/oruclosedlooprecovery/scriptversion/docker-compose-controlloop
cp ~/.kube/config ./config/kube-config
```
Open the copied kube-config file (for example with "vi ./config/kube-config") and make the following changes:
- replace everything under "cluster" with the two lines below, substituting <PORT> with the API-server port found in the original kube-config file:
  server: https://host.docker.internal:<PORT>
  insecure-skip-tls-verify: true
- replace the last two lines in the file with:
  client-certificate: /home/policy/.minikube/profiles/minikube/client.crt
  client-key: /home/policy/.minikube/profiles/minikube/client.key
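To find the value for <PORT>, look at the server: line in the original kube-config. A small sketch on a sample fragment (the IP, port, and paths here are placeholders; your file will differ):

```shell
# Hypothetical kube-config fragment, written to a temp file for illustration
cat > /tmp/kube-config-sample <<'EOF'
clusters:
- cluster:
    certificate-authority: /home/user/.minikube/ca.crt
    server: https://192.168.49.2:8443
  name: minikube
EOF
# The API-server port is the last colon-separated field of the server URL
PORT=$(grep 'server:' /tmp/kube-config-sample | sed 's/.*://')
echo "the <PORT> placeholder should become: $PORT"
```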
- Open the docker-compose file by running "vi docker-compose.yml" and replace the last line under volumes of k8s-participant with these two lines:
- ./config/kube-config:/home/policy/.kube/config:ro
- ~/.minikube/profiles/minikube:/home/policy/.minikube/profiles/minikube
- Start all the components using this docker-compose file:
```bash
docker-compose up -d
```
Check the logs of k8s-participant using the command "docker logs -f k8s-participant" and wait until these messages start appearing in the logs:
"com.att.nsa.apiClient.http.HttpClient : --> HTTP/1.1 200 OK"
- Once all the components are up and running, the control loop can be commissioned and instantiated. This is done by making REST calls to the controlloop-runtime component of policy/clamp. The tosca template for commissioning and the instantiation payload are provided in this directory of the nonrtric repo:
```bash
cd nonrtric/test/usecases/oruclosedlooprecovery/scriptversion/controlloop-rest-payloads
```
Commission the tosca template using this REST call:
```bash
curl -X POST -k -u 'healthcheck:zb!XztG34' -H Content-Type:application/yaml https://localhost:6969/onap/controlloop/v2/commission/ --data-binary @commission.yaml
```
It should give the following response:
```json
{"errorDetails":null,"affectedControlLoopDefinitions":[{"name":"org.onap.domain.linkmonitor.LinkMonitorControlLoopDefinition1","version":"1.2.3"},{"name":"org.onap.k8s.controlloop.K8SControlLoopParticipant","version":"2.3.4"},{"name":"org.onap.domain.linkmonitor.OruAppK8SMicroserviceControlLoopElement","version":"1.2.3"},{"name":"org.onap.domain.linkmonitor.MessageGeneratorK8SMicroserviceControlLoopElement","version":"1.2.3"},{"name":"org.onap.domain.linkmonitor.SdnrSimulatorK8SMicroserviceControlLoopElement","version":"1.2.3"},{"name":"org.onap.domain.linkmonitor.DmaapMrK8SMicroserviceControlLoopElement","version":"1.2.3"}]}
```
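A quick scripted sanity check of such a response (shown here on an abridged sample; `grep` stands in for a proper JSON tool and assumes the response is on one line):

```shell
# Abridged sample of a commission response; in practice capture the real one:
#   RESP=$(curl -s -k -u '...' -X POST ... --data-binary @commission.yaml)
RESP='{"errorDetails":null,"affectedControlLoopDefinitions":[{"name":"org.onap.domain.linkmonitor.LinkMonitorControlLoopDefinition1","version":"1.2.3"}]}'
# Success means errorDetails is null and at least one definition is listed
if echo "$RESP" | grep -q '"errorDetails":null' \
   && echo "$RESP" | grep -q '"affectedControlLoopDefinitions":\['; then
  echo "commission ok"
else
  echo "commission FAILED"
fi
```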
Make the following REST call to instantiate the control loop:
```bash
curl -X POST -k -u 'healthcheck:zb!XztG34' -H Content-Type:application/json https://localhost:6969/onap/controlloop/v2/instantiation/ --data-binary @instantiation.json
```
It should give the following response:
```json
{"errorDetails":null,"affectedControlLoops":[{"name":"LinkMonitorInstance1","version":"1.0.1"}]}
```
Change the control loop from default UNINITIALISED state to PASSIVE using the following REST call:
```bash
curl -X PUT -k -u 'healthcheck:zb!XztG34' -H Content-Type:application/json https://localhost:6969/onap/controlloop/v2/instantiation/command/ --data-binary @instantiation-command.json
```
It should give the same response as above.
The next step is to change the control loop from PASSIVE to the RUNNING state. Edit the "instantiation-command.json" file, replace PASSIVE with RUNNING, and make the above REST call once again to move the control loop to the RUNNING state.
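The edit can also be scripted. A minimal sketch, assuming the state is carried in an orderedState field (check the actual contents of your instantiation-command.json; shown here on a throw-away copy):

```shell
# Hypothetical minimal command file; the real instantiation-command.json in
# the repo carries additional fields alongside the state.
cat > /tmp/instantiation-command.json <<'EOF'
{"orderedState": "PASSIVE"}
EOF
# Flip the ordered state from PASSIVE to RUNNING in place (GNU sed syntax)
sed -i 's/"PASSIVE"/"RUNNING"/' /tmp/instantiation-command.json
cat /tmp/instantiation-command.json
```

After saving the edited file, re-issue the PUT call above with the updated payload.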
Once the control loop is in RUNNING state, check that all four micro-services have been created in the nonrtric namespace.
```bash
kubectl -n nonrtric get pod
```
To verify that the usecase works correctly, check the logs of each of the four components. There should be messages flowing in this order:
message-generator → dmaap-mr → oru-app → sdnr-simulator
- In order to stop the docker containers and free up resources on the host machine, use the following commands:
```bash
cd nonrtric/test/usecases/oruclosedlooprecovery/scriptversion/docker-compose-controlloop
docker-compose down
docker volume rm docker-compose-controlloop_db-vol
```