The instructions below are no longer accurate. Download the most recent version of the master branch from https://gerrit.o-ran-sc.org/r/admin/repos/smo/ves. Installation instructions are available at https://docs.o-ran-sc.org/projects/o-ran-sc-smo-ves/en/latest/installation-guide.html

ves.zip


This zip file supports the VES collector interface in O-RAN. It makes use
of three containers: the ves-collector container, which collects VES events posted by other parts of the O-RAN solution; Grafana, which acts as a dashboard and displays
Performance Measurement (PM) data posted by other entities; and
InfluxDB, which persists the data received by the collector.

PREREQUISITES:

Docker must be running on the machine where you want to run these containers.

BUILD:

To build the solution, run the following in the collector folder.

% make

RUN:

There are two scripts in the collector folder. The ves-start.sh script starts the VES collector and the other containers. The ves-stop.sh script can be used to stop them.

Note, the VES collector runs on port 9999 of the machine where the script is launched. The URL for sending a POST of an event points to:

http://<IP address of the host>:9999/eventListener/v7/events
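
For example, once the collector is up, an event can be POSTed to it with curl. This is only a sketch: the JSON body below is an illustrative skeleton, not a schema-complete VES 7.x event.

# Illustrative only: POST an event to a collector running on this host
curl -i -X POST \
  --header "Content-Type: application/json" \
  -d '{"event": {"commonEventHeader": {"domain": "heartbeat", "eventName": "heartbeat"}}}' \
  http://127.0.0.1:9999/eventListener/v7/events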


59 Comments

  1. Hi 

    I followed the same steps given above to install the VES Collector.

    The installation completed and the containers were up initially.

    The influxdb and grafana containers are healthy.

    After a few minutes, the VES Collector container alone exits.

    VES Collector container logs

    Why does the VES Collector container exit?

  2. Hi 

    Tried debugging it and found an error in monitor.log

    Traceback (most recent call last):
    File "/opt/ves/evel-test-collector/code/collector/monitor.py", line 39, in <module>
    import tzlocal
    ImportError: No module named tzlocal


    The latest tzlocal package has no support for Python 2.


    In the Dockerfile, while installing tzlocal, I specified the version as 2.1:

    RUN pip install requests pytz tzlocal==2.1


    This sorted out the issue

    1. Thanks Priya. Let me forward it to the VES team.

  3. PRIYADHARSHINI G S, we are using Python 2.7 and Ubuntu 20.x. I was wondering what versions of the OS and Python you are using.

    1. Hi Santanu

      I used Python 2.7.17 and Ubuntu 18.04.5

  4. Isn't this a container, where we specify which version of Ubuntu should be used?

    1. Yes, this error was observed within the container.
      The version of Ubuntu specified for the container in the Dockerfile is ubuntu:xenial (16.04).

      1. Hmm! I wonder why it is not focal (20.04). santanu de?

  5. Hi PRIYADHARSHINI G S and Preethika Prathaban, it turns out that the Git repository has a more up-to-date image for the VES Collector. Can you check out the latest version of smo/ves and try running with that version?

    1. Hi Mahesh Jethanandani, with the latest master from the Git repository, ves-collector runs healthy without any issues.
      It looks like the issue observed above pertains to the dawn branch alone.

  6. rsd1@rsd1-VirtualBox:~/ORAN-SMO/ves$ sudo make
    cd agent; make
    make[1]: Entering directory '/home/rsd1/ORAN-SMO/ves/agent'
    docker build -t ves-agent .
    Sending build context to Docker daemon 59.9kB
    Step 1/12 : FROM ubuntu:focal
    ---> 54c9d81cbb44
    Step 2/12 : RUN mkdir /opt/ves
    ---> Using cache
    ---> c9bfae43581d
    Step 3/12 : RUN apt-get update && apt-get -y upgrade
    ---> Using cache
    ---> e8b14cf1da6f
    Step 4/12 : RUN apt-get install -y tzdata
    ---> Using cache
    ---> 888f22976973
    Step 5/12 : RUN apt-get install -y netcat
    ---> Using cache
    ---> 6937971fb6ee
    Step 6/12 : RUN apt-get install -y default-jre zookeeperd python3 python3-pip pkg-config git build-essential libpthread-stubs0-dev libssl-dev libsasl2-dev liblz4-dev libz-dev
    ---> Using cache
    ---> 9d0c705d0140
    Step 7/12 : RUN pip3 install kafka-python pyaml
    ---> Using cache
    ---> 82ba515cfd53
    Step 8/12 : RUN pip3 install --upgrade certifi
    ---> Using cache
    ---> 1a8c585c1eb6
    Step 9/12 : RUN mkdir /opt/ves/barometer
    ---> Using cache
    ---> 057ea05fde2e
    Step 10/12 : ADD barometer /opt/ves/barometer
    ---> Using cache
    ---> 524f113f3284
    Step 11/12 : COPY start.sh /opt/ves/start.sh
    ---> Using cache
    ---> 8896fc2f6783
    Step 12/12 : ENTRYPOINT ["/bin/bash", "/opt/ves/start.sh"]
    ---> Using cache
    ---> 7f52eb0ed4f5
    Successfully built 7f52eb0ed4f5
    Successfully tagged ves-agent:latest
    make[1]: Leaving directory '/home/rsd1/ORAN-SMO/ves/agent'
    cd collector; make
    make[1]: Entering directory '/home/rsd1/ORAN-SMO/ves/collector'
    docker build -t ves-collector .
    Sending build context to Docker daemon 213kB
    Step 1/10 : FROM ubuntu:focal
    ---> 54c9d81cbb44
    Step 2/10 : RUN apt-get update && apt-get -y upgrade
    ---> Using cache
    ---> fd645a706dea
    Step 3/10 : RUN apt-get install -y git curl python3 python3-pip
    ---> Using cache
    ---> 98121b904240
    Step 4/10 : RUN pip3 install requests jsonschema elasticsearch kafka-python gevent
    ---> Using cache
    ---> 63876d8cbd04
    Step 5/10 : RUN mkdir -p /opt/ves/certs
    ---> Using cache
    ---> a93f826c2286
    Step 6/10 : RUN mkdir /opt/ves/evel-test-collector
    ---> Using cache
    ---> c69155d4947c
    Step 7/10 : ADD evel-test-collector /opt/ves/evel-test-collector
    ---> Using cache
    ---> 2acc4175b7b3
    Step 8/10 : COPY Dashboard.json /opt/ves/Dashboard.json
    ---> Using cache
    ---> a95dc1f1220b
    Step 9/10 : COPY start.sh /opt/ves/start.sh
    ---> Using cache
    ---> cfdcba7af463
    Step 10/10 : ENTRYPOINT ["/bin/bash", "/opt/ves/start.sh"]
    ---> Using cache
    ---> f14d3306feb6
    Successfully built f14d3306feb6
    Successfully tagged ves-collector:latest
    make[1]: Leaving directory '/home/rsd1/ORAN-SMO/ves/collector'
    cd kafka; make
    make[1]: Entering directory '/home/rsd1/ORAN-SMO/ves/kafka'
    docker build -t ves-kafka .
    Sending build context to Docker daemon 6.656kB
    Step 1/8 : FROM ubuntu:xenial
    ---> b6f507652425
    Step 2/8 : RUN apt-get update && apt-get -y upgrade
    ---> Using cache
    ---> 76444e36efbd
    Step 3/8 : RUN apt-get install -y default-jre python-pip wget
    ---> Using cache
    ---> ce376f30c9e5
    Step 4/8 : RUN pip install kafka-python
    ---> Using cache
    ---> 69906b8b2150
    Step 5/8 : RUN mkdir /opt/ves
    ---> Using cache
    ---> 6b7a5876290f
    Step 6/8 : RUN cd /opt/ves; wget https://archive.apache.org/dist/kafka/0.11.0.2/kafka_2.11-0.11.0.2.tgz; tar -xvzf kafka_2.11-0.11.0.2.tgz; sed -i -- 's/#delete.topic.enable=true/delete.topic.enable=true/' kafka_2.11-0.11.0.2/config/server.properties
    ---> Using cache
    ---> 3469f7f2ecb7
    Step 7/8 : COPY start.sh /opt/ves/start.sh
    ---> Using cache
    ---> fd2cb3d5ebb8
    Step 8/8 : ENTRYPOINT ["/bin/bash", "/opt/ves/start.sh"]
    ---> Using cache
    ---> e72255c36018
    Successfully built e72255c36018
    Successfully tagged ves-kafka:latest
    make[1]: Leaving directory '/home/rsd1/ORAN-SMO/ves/kafka'
    cd dmaapadapter; make
    make[1]: Entering directory '/home/rsd1/ORAN-SMO/ves/dmaapadapter'
    docker build -t ves-dmaap-adapter .
    Sending build context to Docker daemon 26.62kB
    Step 1/9 : FROM ubuntu:focal
    ---> 54c9d81cbb44
    Step 2/9 : RUN apt-get update && apt-get -y upgrade
    ---> Using cache
    ---> fd645a706dea
    Step 3/9 : RUN apt-get install -y git curl python3 python3-pip
    ---> Using cache
    ---> 98121b904240
    Step 4/9 : RUN pip3 install requests jsonschema kafka-python flask confluent-kafka
    ---> Using cache
    ---> ec69c0f5e76f
    Step 5/9 : RUN mkdir /opt/ves
    ---> Using cache
    ---> 96fa2c702426
    Step 6/9 : RUN mkdir /opt/ves/adapter
    ---> Using cache
    ---> f36fa454fac0
    Step 7/9 : ADD adapter /opt/ves/adapter
    ---> Using cache
    ---> ea1fd673d346
    Step 8/9 : COPY start.sh /opt/ves/start.sh
    ---> Using cache
    ---> 1829119d9365
    Step 9/9 : ENTRYPOINT ["/bin/bash", "/opt/ves/start.sh"]
    ---> Using cache
    ---> ad26726ec353
    Successfully built ad26726ec353
    Successfully tagged ves-dmaap-adapter:latest
    make[1]: Leaving directory '/home/rsd1/ORAN-SMO/ves/dmaapadapter'
    cd influxdb-connector; make
    make[1]: Entering directory '/home/rsd1/ORAN-SMO/ves/influxdb-connector'
    docker build -t smo-influxdb-connector .
    Sending build context to Docker daemon 30.72kB
    Step 1/8 : FROM ubuntu:focal
    ---> 54c9d81cbb44
    Step 2/8 : RUN apt-get update && apt-get -y upgrade
    ---> Using cache
    ---> fd645a706dea
    Step 3/8 : RUN apt-get install -y git curl python3 python3-pip
    ---> Using cache
    ---> 98121b904240
    Step 4/8 : RUN pip3 install requests confluent-kafka
    ---> Using cache
    ---> 800a8204dc02
    Step 5/8 : RUN mkdir -p /opt/ves/influxdb-connector
    ---> Using cache
    ---> bcce4976afea
    Step 6/8 : ADD influxdb-connector /opt/ves/influxdb-connector
    ---> Using cache
    ---> 06f0cb1807ab
    Step 7/8 : COPY start.sh /opt/ves/start.sh
    ---> Using cache
    ---> 70847646e15f
    Step 8/8 : ENTRYPOINT ["/bin/bash", "/opt/ves/start.sh"]
    ---> Using cache
    ---> dea8ce0f2cd8
    Successfully built dea8ce0f2cd8
    Successfully tagged smo-influxdb-connector:latest
    make[1]: Leaving directory '/home/rsd1/ORAN-SMO/ves/influxdb-connector'
    cd postconfig; make
    make[1]: Entering directory '/home/rsd1/ORAN-SMO/ves/postconfig'
    docker build -t smo-post-config .
    Sending build context to Docker daemon 38.91kB
    Step 1/8 : FROM ubuntu:focal
    ---> 54c9d81cbb44
    Step 2/8 : RUN apt-get update --fix-missing && apt-get -y upgrade
    ---> Using cache
    ---> 76ac0568e225
    Step 3/8 : RUN apt-get install -y git curl
    ---> Using cache
    ---> c8e4660e6f56
    Step 4/8 : RUN mkdir /opt/ves
    ---> Using cache
    ---> 0df434d0ec7d
    Step 5/8 : RUN mkdir /opt/ves/grafana
    ---> Using cache
    ---> 9687a3f45bb2
    Step 6/8 : ADD grafana /opt/ves/grafana
    ---> Using cache
    ---> 6b08d0c00554
    Step 7/8 : COPY start.sh /opt/ves/start.sh
    ---> Using cache
    ---> 8ce38ab59a46
    Step 8/8 : ENTRYPOINT ["/bin/bash", "/opt/ves/start.sh"]
    ---> Using cache
    ---> ab64626a0c76
    Successfully built ab64626a0c76
    Successfully tagged smo-post-config:latest
    make[1]: Leaving directory '/home/rsd1/ORAN-SMO/ves/postconfig'


    make succeeds, but no ves-start.sh or ves-stop.sh is generated, and there is no ves folder in the /opt/ directory.


    rsd1@rsd1-VirtualBox:~/ORAN-SMO/ves/collector$ pwd
    /home/rsd1/ORAN-SMO/ves/collector
    rsd1@rsd1-VirtualBox:~/ORAN-SMO/ves/collector$ sudo ./start.sh
    [sudo] password for rsd1:
    ./start.sh: line 19: cd: /opt/ves: No such file or directory



    1. Adding santanu de and Kiran Ambardekar to this thread, who can probably help.

      1. Thanks. Does the make need to be done online, or is offline fine?

    2. ves-start.sh and ves-stop.sh are located under the ".../ves/collector" folder.

      Execute the ves-start.sh file to start all containers and the ves-stop.sh file to stop all containers.
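
      For reference, a typical build/run session looks like this (paths assume the unzipped ves folder):

      cd ves/collector
      make             # build the container images
      ./ves-start.sh   # start the VES collector and supporting containers
      ./ves-stop.sh    # stop all running containers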


  7. The system needs internet access to be able to download the pre-built containers. But I did not see any errors in the build process. Did you?

    1. Yes, there were no errors in the build, but the run failed. Can you help figure out what the reason might be?

  8. Hi 

    We are using Ubuntu 20.04.1 to install the E release smo & ves.

    After I installed the O1 interface successfully, I tested adding network elements and it worked. The connection status for the device was Connected in the GUI.

    Then I tried to install the VES interface. Just as Brand had mentioned, there was no error in the build, but the run failed. In this case, I tried to add network elements again from the GUI, but it failed; the connection status showed "undefined" or "Unmounted".

    Here are some questions:

    1. Will the failure of VES affect adding network elements for SMO?
    2. We are using Ubuntu 20.04.1, which only has the default python3. Is that OK for the E release installation? (I ask because I see some discussion above about the Ubuntu and Python versions.)
    1. I am not clear on what you mean by "tried to add network elements again from the GUI". Which network elements? What GUI? Are you trying to add network elements over the O1 or the O1/VES interface? Note, the two are different, and there is no GUI to add network elements on the O1/VES interface. You just need to point the network elements to the IP address of the O1/VES collector.

      Also, it is OK to use Ubuntu 20.04 with python3.

      1. Sorry, I meant adding network elements over O1.

      2. Hello
        I have the same problem. Installing only the O1 interface works fine, but when I install VES, I cannot access the O1 interface GUI or the port allocated to O1.
        I use Ubuntu 20.04 with python3.

        1. Hi Ftmzhra Safaeipour, before we try anything, I want to understand exactly what it is that you are trying to do. Are you trying to use the O1 interface to talk to one of the network functions, e.g. CU or DU? If so, for now, do not worry about starting VES. Just start the O1 container by itself.

          1. Mahesh Jethanandani

            Thank you for your response.

            I'm trying to install different parts of SMO to have all of its functionality as a whole. Therefore, it's important to have both VES and O1. 

            Is there any conflict between VES and O1 in the D release that causes this issue?

            1. Hi Ftmzhra Safaeipour, I am not saying that you should not try the whole functionality. What I am trying to do is help you debug the problem. That can happen only if we isolate the problem.

              So start by bringing up the O1 interface and trying that only. And if that succeeds, try bringing up the VES interface and so on. At every stage make sure the functionality is there.

              1. Mahesh Jethanandani

                Thank you. That's right. I've already done the first step: bringing up only the O1 interface, and it works fine.

                The problem arises when I'm trying to bring up VES. After running "ves-start.sh", the O1 pods exit.


                These are the last lines of my terminal, after running "ves-start.sh"

                1. Hi Ftmzhra Safaeipour, there are a couple of things you are getting mixed up. The first and second screenshots belong to the output of the O1 (NETCONF) interface, while the last screenshot belongs to the O1/VES (Monitoring) interface. The two interfaces are very different. Please do not mix them up.

                  To focus on just the O1 interface (the first two screenshots): all you have done is bring up the O1 interface. You have not yet connected to the GUI (hint: read the README file in the client folder) to verify that the connection works. Next you need to "mount" a server (hint: read the README in the root folder). Once you have verified that you are able to do that, your O1 interface is functional.

                  We can talk about O1/VES interface after you are successful with the above steps.

                2. Hi Ftmzhra Safaeipour,

                  When we execute ves-start.sh, it internally invokes ves-stop.sh to kill all running containers before moving ahead. Our thought process was that the VES Collector project wouldn't run alongside other containers.

                  Now we have 3 choices in this scenario.

                  A. Start the VES-Collector project first, then run the other projects.

                  B. Edit the ves-start.sh file of the VES collector and remove the reference to the ves-stop.sh file. It's a single-line change (see the sketch after this list). In this case, ves-start.sh won't terminate any running containers.

                  C. Recently we moved to docker-compose. In that case you won't face such a problem. Our latest code is available at https://gerrit.o-ran-sc.org/r/admin/repos/smo/ves . Please read the README file about how to run the whole stack.
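
                  A minimal sketch of option B, assuming ves-start.sh invokes ves-stop.sh by name (the matched line is an assumption about the script's contents, so verify it first):

                  grep -n 'ves-stop.sh' ves-start.sh            # locate the invocation
                  sed -i '/ves-stop\.sh/s/^/# /' ves-start.sh   # comment it out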


                  1. Hi Ftmzhra Safaeipour ,

                    Does the above information solve your problem?

                    1. Hello santanu de

                      Thank you for the information. I was checking whether I could resolve further problems.

                      Methods A and B worked. However, in the new repo, https://gerrit.o-ran-sc.org/r/admin/repos/smo/ves, there isn't any "ves-start.sh" file. I assumed I could use the "ves-start.sh" from the previous version. Is that OK?

                      There is another problem: I cannot (or don't know how to) access the "Ves Collector" on port 9999. Is there a dashboard on this URL? Can I reach this URL using my browser?

                      Thank you.


                      1. Hi Ftmzhra Safaeipour ,

                        You won't see ves-start.sh/ves-stop.sh in approach C. We moved to docker-compose in the latest release. Therefore you have to use the docker-compose build, docker-compose up -d, and docker-compose down commands to build, start, and stop the stack.

                        We don't have any dashboard running on the 9999 port. You can browse the http://[Host IP]:9999/eventListener/v7/events URL to check that the collector is up and running.

                        On the same note, we have a Grafana dashboard running on the 3000 port, i.e. http://[host ip]:3000 (log in as admin/admin), where you can visualize measurement metrics.
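
                        Putting those together, a quick sketch of the approach C workflow (the host IP is a placeholder):

                        docker-compose build   # build all images
                        docker-compose up -d   # start the stack in the background
                        docker-compose down    # stop and remove the containers

                        # quick checks once the stack is up
                        curl http://[Host IP]:9999/eventListener/v7/events   # collector liveness
                        # Grafana dashboard: http://[Host IP]:3000 (admin/admin)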

                        1. Hi santanu de,

                          Thank you for your response.

                          I can access Grafana, but the "http://[Host IP]:9999/eventListener/v7/event" is not available. 



                          Are there other requirements in addition to the mentioned ones?

                          Thank you.

                          1. Your URL link says 0.0.0.0:9999/eventListener/v7/event, not the host address where the collector/kafka bus is running.

                          2. Hi Ftmzhra Safaeipour ,

                            Attached screenshots indicate that you have switched to approach C.

                            In that case you have to do one more step, i.e. install the VES certificates on your host. We recently moved from http to https.

                            Please visit the README.md file and look for the "Following steps are required for self-signed certificate" section.

                            Once done, restart the whole stack. The collector will then be accessible through https://[Host IP]:9999/eventListener/v7/event.

                            Please make sure that whatever consumes the collector (either an application or a simulator) uses https://...
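
                            The exact steps are in the README; purely as an illustration of what a self-signed setup can look like (the file names and subject below are assumptions, not the README's actual commands):

                            # hypothetical sketch only -- follow the README's
                            # "Following steps are required for self-signed certificate" section
                            mkdir -p ~/ves-certificate && cd ~/ves-certificate
                            openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
                              -keyout vescertificate.key -out vescertificate.crt \
                              -subj "/CN=ves-collector"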

                            1. Hello santanu de,

                              Thank you for your guidance. Just one more question: where should I put the certificate folder?

                              Should it be in the home directory, the same directory as the other certificates, or some other place?

                              I've completed all the steps in the README.md file, but I could not access https://[Host IP]:9999/eventListener/v7/event.


                              Thank you for your time.

                              1. Hi Ftmzhra Safaeipour ,

                                You can put the "ves-certificate" folder outside the "ves" folder. After that, create the self-signed certificates as per the README file. Let me know if you face any challenges.

                                1. Hi santanu de,

                                  I put the "ves-certificate" folder outside and next to the "ves" folder, and created the self-signed certificate. Yet, I could not reach the URL.

                                  Here is the result in firefox and chrome.


                                  Thank you.

                                  1. Hi Ftmzhra Safaeipour ,


                                    Have you rebuilt and started the ves-collector project after creating the self-signed certificates?


                                    1. Hi santanu de,

                                      I had rebuilt the project, and after your message I rebuilt it again. Doing that didn't resolve the issue.

                                      Do I need to specify the "ves-certificate" directory anywhere in the VES project?

                                      Thank you

                                      1. Hi Ftmzhra Safaeipour ,

                                        Could you attach to the smo-collector container using the "docker exec -it smo-collector /bin/bash" command?

                                        Next, go to the "/opt/smo/certs" folder. You should see 3 certificate files inside the "certs" folder.

                                        Are you able to see those files? docker-compose copies those 3 files from the "ves-certificate" folder into the smo-collector container.
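
                                        That is, something like:

                                        # attach to the running collector container...
                                        docker exec -it smo-collector /bin/bash
                                        # ...then, inside the container, look for the 3 certificate files
                                        ls -l /opt/smo/certs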

                                        1. Hi santanu de,

                                          I checked the "/opt/smo/certs" directory and didn't see the certificates. 

                                          I tried to add them manually but I got the internal server error (500) when I ran the tests (_example.sh and sendVesHeartbeat.py in the OAM project), and after re-building the VES project those manually added certificated were removed.

                                          In the docker-compose file, I see this line for the "smo-collector" container:

                                          volumes:
                                          - ~/ves-certificate:/opt/ves/certs

                                          I changed this line to "- ~/ves-certificate:/opt/smo/certs", and again I got the internal server error (500). But now I see the certificates in the "/opt/smo/certs" directory.

                                          I got this message in the browser:


                                          Thank you.



                                          1. Hi Ftmzhra Safaeipour ,

                                            It looks like when you pulled the smo/ves project, the latest changes were missing from the docker-compose file; that's why the certificates were not copied to the desired path inside the collector container.

                                            As per the screenshot, the ves-collector is now running on https, and you can integrate it with the OAM project.


                                            1. Hi santanu de,

                                              Thank you for the information. 

                                              When I'm trying to integrate VES with OAM, I get an "Internal Server Error (500)".

                                              I've pulled this repo:

                                              "https://gerrit.o-ran-sc.org/r/admin/repos/oam" 

                                              Is it the most up-to-date repo?

                                              Thank you.


                                              1. Hi Ftmzhra Safaeipour ,

                                                Yes we are using https://gerrit.o-ran-sc.org/r/admin/repos/oam repo.

                                                Have you edited the config file of the oam repo? You have to edit the two lines below in the config file:

                                                urlVes=https://localhost:9999/eventListener/v7/events
                                                basicAuthVes=user:password

                                                  1. Hi santanu de,

                                                    Thank you for your support.

                                                    Yes, I had updated that file, but I still get the following error:

                                                    HTTP/1.1 500 Internal Server Error
                                                    Content-type: application/json
                                                    Transfer-Encoding: chunked

                                                    Moreover, the "_example.sh" file uses the "sendHeartbeat.sh", which is not available in the directory.

                                                    There is a "sendVesHeartbeat.py" file though.

                                                    When I run it I get this output log:

                                                    ################################################################################
                                                    # send SDN-Controller heartbeat
                                                    /usr/lib/python3/dist-packages/urllib3/connectionpool.py:999: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
                                                    warnings.warn(
                                                    500
                                                    Sending VES "stndDefined" message template failed with code 500.


                                                    Thank you.



                                                    1. Hi Ftmzhra Safaeipour ,

                                                      In that case you have to configure the config.yml file and modify the three lines below:

                                                      url: https://localhost:9999/eventListener/v7/events
                                                      username: user
                                                      password: password

                                                      1. Hi santanu de

                                                        I had updated the config and config.yaml files, changed "http" to "https", and set the user/password.

                                                        Are there other files to be edited?

                                                        Thank you. 

                                                        1. Hi Ftmzhra Safaeipour,


                                                          That's it. Make sure the URL of the VES collector is correct.


  9. ves-start.sh and ves-stop.sh are located under the ".../ves/collector" folder.

    Execute the ves-start.sh file to start all containers and the ves-stop.sh file to stop all containers.

    I am re-posting my comments here to keep them at the top.
    1. In the .../ves/collector folder, there is only start.sh; there are no ves-start.sh or ves-stop.sh files.

      1. I am assuming that you downloaded the attached ves.zip file. Once you unzip it, you will see the below-mentioned files under the "collector" folder. I am talking about the highlighted files to start and stop the collector.

        Here are the steps about how to build and run the collector.

        ------------------------------------------------------------

        BUILD:
        
        To build the solution, you need to do the following in the collector
        folder.
        
        % make
        
        RUN:
        
        There are two scripts in the collector folder. A ves-start.sh script
        which starts the VES collector and other parts. A ves-stop.sh script
        can be used to stop the collector.
        
        Note, the VES collector runs on port 9999 of the machine where this
        script is launched. The URL for sending a POST of the event would
        point to:
        
        http://<IP address of the host>:9999/eventListener/v7/events

        1. Hi Brand Lu  ,

          I hope the above information solves your problem.

  10. Hi,

    This is the last part of the output after triggering ./ves-start.sh:

    Binding VES Services to local ip address

    --------------------------------------------------------------------

    Starting influxdb container on Local Port Number 3330. Please wait..

    a0181c551050b179c7e7b9280da6c511f0d38c1d3b3ae9d5000b6140d4d146c7
    Done.

    --------------------------------------------------------------------

    Starting Grafana cotainer on Local port number 8880. Please wait..

    580bffe26db91598f4676baed6da974cbf91a622175632c1abc4197a75c31c8a
    Done.

    --------------------------------------------------------------------


    --------------------------------------------------------------------Starting ves collector container on Local port number 9999. Please wait

    a48514b5cd0e384a668ae1476516899d4363ba98bba08f6f7047177fff529dc8
    Done.


    ves stack summary

    ===================================================================================================================


    ves collector listner port: 9999

    Grafana port: 8880

    To access grafana dashboard paste url http://:8880 in web browser.
    Grafana username/password is admin/admin *** DO NOT CHANGE THE ADMIN PASSWORD, CLICK SKIP OPTION ***


    ===================================================================================================================


    So everything looks good at first glance. Then what I want to do is trigger:

    curl -i -X POST -d @ok_fault.json --header "Content-Type: application/json" http://127.0.0.1:9999/eventListener/v7/events -k
    curl: (7) Failed to connect to 127.0.0.1 port 9999: Connection refused

    But I can't; docker container ls shows me:

    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

    580bffe26db9 grafana/grafana "/run.sh" 7 minutes ago Up 7 minutes 0.0.0.0:8880->3000/tcp, :::8880->3000/tcp xenodochial_galois

    c57b3d35b160 redis:latest "docker-entrypoint.s…" 7 minutes ago Up 7 minutes 6379/tcp redis-service.1.9kaimhtrqrtn06ombh89kwaav

    b88af81d9493 nats:latest "/nats-server --conf…" 7 minutes ago Up 7 minutes 4222/tcp, 6222/tcp, 8222/tcp nats-service.1.r9gzvhbfczt17ic5jx5fz7nvr

    a0181c551050 influxdb:1.8.5 "/entrypoint.sh infl…" 7 minutes ago Up 7 minutes 0.0.0.0:3330->8086/tcp, :::3330->8086/tcp strange_sutherland


    I guess I should also have a ves-collector container here? Should I do something to start it (besides invoking the ves-start.sh script)?

    1. Hi Bartłomiej Błachut,

      Could you execute ves-stop.sh (to stop all running containers) and ves-start.sh to restart ves containers again? 

      Note: ves-stop.sh will kill all running containers. 

  11. santanu de Yeah, I did that, but it didn't help. Actually, ves-stop.sh is executed at the beginning of ves-start.sh.

  12. Bartłomiej Błachut True. Can you share the logs of the crashed ves-collector container?


  13. santanu de OK, where should they be? In /opt/ves? If so, I don't have any logs there.

  14. Bartłomiej Błachut Strange. There should be log files inside the collector container's /opt/ves folder.


    Anyway, the latest solution is available on Gerrit: https://gerrit.o-ran-sc.org/r/admin/repos/smo/ves,general. Use the master branch. We are using docker-compose to build and spin up those containers.

    This page will guide you through installing and using the application: https://docs.o-ran-sc.org/projects/o-ran-sc-smo-ves/en/latest/overview.html
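
    If the container has already exited, its logs can usually still be pulled from the host; a generic sketch (the container name/ID will differ on your setup):

    docker ps -a                 # find the exited ves-collector container and note its ID
    docker logs <container-id>   # dump its logs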

  15. Hi, 

    I just wonder how to install the ves-agent. Also, is there any guide for its installation or configuration?