
Result / Proof point (column A: enter Met/Unmet; column B: enter relevant URLs/comments)


O-DU High
Criteria | Result / Proof point

Working build system

If the software produced by the project requires building for use, the project MUST provide a working build system that can automatically rebuild the software from source code.
Met: Docker images can be found at https://nexus3.o-ran-sc.org

It is SUGGESTED that common tools be used for building the software.
Met: Docker images can be found at https://nexus3.o-ran-sc.org

The project SHOULD be buildable using only FLOSS tools.
Met: Docker images can be found at https://nexus3.o-ran-sc.org
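The "buildable using only FLOSS tools" claim above can be spot-checked with a small pre-build script. This is a sketch only: the tool names passed in are illustrative, since the actual O-DU High build prerequisites are not enumerated on this page.

```shell
# Sketch: check that the FLOSS tools a from-source build would need are on
# PATH. The tool list a caller passes is illustrative, not the project's
# documented prerequisite list.
missing_tools() {
    missing=""
    for t in "$@"; do
        command -v "$t" >/dev/null 2>&1 || missing="$missing $t"
    done
    # prints nothing when every tool was found
    echo "$missing"
}

# e.g. missing_tools gcc make
```

A check like this keeps the FLOSS-only build claim testable on a fresh machine rather than assumed.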

Automated test suite

The project MUST use at least one automated test suite that is publicly released as FLOSS (this test suite may be maintained as a separate FLOSS project).
Met: One automated test suite is available with the UE Attach call flow.

A test suite SHOULD be invocable in a standard way for that language. For example, "make check", "mvn test", or "rake test" (Ruby).
Met: The automated test suite is invocable through a shell script.

It is SUGGESTED that the test suite cover most (or ideally all) the code branches, input fields, and functionality.
Met: Covers the UE Attach call flow.

It is SUGGESTED that the project implement continuous integration (where new or changed code is frequently integrated into a central code repository and automated tests are run on the result).
Partially met: A new Jenkins job for the automation test suite is to be added.
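The "standard invocation" and continuous-integration points above could both be served by a thin wrapper that turns the suite's exit status into a pass/fail signal a Jenkins job can consume. The entry-point name below is a hypothetical stand-in; the page only states that the suite is run via a shell script.

```shell
#!/bin/sh
# Hypothetical CI gate: run the test-suite entry point and fail the job on a
# nonzero exit, so Jenkins (or any CI system) can consume the result directly.
run_suite() {
    # Placeholder for the real UE Attach call-flow script invocation.
    "$@"
}

ci_gate() {
    if run_suite "$@"; then
        echo "PASS"
    else
        echo "FAIL"
        return 1
    fi
}

# e.g. ci_gate ./run_ue_attach_tests.sh   (script name is hypothetical)
```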

New functionality testing

The project MUST have a general policy (formal or not) that as major new functionality is added to the software produced by the project, tests of that functionality should be added to an automated test suite.
As long as a policy is in place, even by word of mouth, that says developers should add tests to the automated test suite for major new functionality, select "Met".
Partially met: O-DU High code is currently tested using test stubs which trigger various scenarios in sequence.

The project MUST have evidence that the test_policy for adding tests has been adhered to in the most recent major changes to the software produced by the project.
Major functionality would typically be mentioned in the release notes. Perfection is not required, merely evidence that tests are typically being added in practice to the automated test suite when new major functionality is added to the software produced by the project.
Met: Stub-based testing for the UE Attach call flow; can be extended to test further features.

It is SUGGESTED that this policy on adding tests (see test_policy) be documented in the instructions for change proposals. However, even an informal rule is acceptable as long as the tests are being added in practice.
Met: Steps to run the test stub are available in the file "l2/docs/README".

Warning flags

The project MUST enable one or more compiler warning flags, a "safe" language mode, or use a separate "linter" tool to look for code quality errors or common simple mistakes, if there is at least one FLOSS tool that can implement this criterion in the selected language.
Partially met: Unable to enable as a compiler flag, since all existing warnings come from ASN-tool-generated code.

The project MUST address warnings.
Met: The only warnings seen are from code generated using the ASN tool.

It is SUGGESTED that projects be maximally strict with warnings in the software produced by the project, where practical. Some warnings cannot be effectively enabled on some projects. What is needed is evidence that the project is striving to enable warning flags where it can, so that errors are detected early.
Met: The only warnings seen are from code generated using the ASN tool.
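One way to reconcile "maximally strict" with the ASN-generated warnings noted above is to scope warning flags per source file: strict flags for hand-written code, relaxed flags only for generated sources. This is a sketch; the *_asn1_gen.c naming pattern is hypothetical, not the project's real layout.

```shell
# Sketch: strict warnings project-wide, suppressed only for ASN.1-generated
# sources. The *_asn1_gen.c pattern is a made-up example.
STRICT_CFLAGS="-Wall -Wextra -Werror"

compile_cmd() {
    case "$1" in
        *_asn1_gen.c) echo "gcc -w -c $1" ;;              # generated: suppress warnings
        *)            echo "gcc $STRICT_CFLAGS -c $1" ;;  # hand-written: strict
    esac
}
```

Scoping the suppression this way keeps -Werror enforceable on the code the project actually maintains.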

Security (16 Points) 

Result / Proof point (column A: enter Met/Unmet; column B: enter relevant URLs/comments)



O-DU High

Criteria | Result / Proof point

Static code analysis

At least one static code analysis tool (beyond compiler warnings and "safe" language modes) MUST be applied to any proposed major production release of the software before its release, if there is at least one FLOSS tool that implements this criterion in the selected language.
Unmet

It is SUGGESTED that at least one of the static analysis tools used for the static_analysis criterion include rules or approaches to look for common vulnerabilities in the analyzed language or environment.
Unmet

All medium and higher severity exploitable vulnerabilities discovered with static code analysis MUST be fixed in a timely way after they are confirmed.
Unmet

It is SUGGESTED that static source code analysis occur on every commit or at least daily.
Unmet
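Since the static-analysis criteria are currently unmet, a minimal FLOSS starting point for a C codebase could be cppcheck run as a per-commit CI step, with findings turned into a failing exit status. This is a sketch only; "src/" is a placeholder path, not the project's real source root.

```shell
# Sketch: compose a cppcheck invocation where any finding fails the build
# (--error-exitcode=1), suitable as a per-commit CI step. "src/" is a
# placeholder path.
static_scan_cmd() {
    echo "cppcheck --enable=warning,performance --error-exitcode=1 $1"
}

# e.g. static_scan_cmd src/
```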

Dynamic code analysis

It is SUGGESTED that at least one dynamic analysis tool be applied to any proposed major production release of the software before its release.
Met: Valgrind is being used by developers.

It is SUGGESTED that if the software produced by the project includes software written using a memory-unsafe language (e.g., C or C++), then at least one dynamic tool (e.g., a fuzzer or web application scanner) be routinely used in combination with a mechanism to detect memory safety problems such as buffer overwrites. If the project does not produce software written in a memory-unsafe language, choose "not applicable" (N/A).
Met: Dynamic code analysis through the Valgrind analyzer.

It is SUGGESTED that the software produced by the project include many run-time assertions that are checked during dynamic analysis.
Met: Valgrind is being used by developers.

All medium and higher severity exploitable vulnerabilities discovered with dynamic code analysis MUST be fixed in a timely way after they are confirmed.
Met: Tests using Valgrind.
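The Valgrind usage noted above can gate releases if memory errors are turned into a failing exit status via --error-exitcode. This is a sketch; "./odu_bin" is a placeholder, since the real executable name is not stated on this page.

```shell
# Sketch: compose a Valgrind invocation where leaks and invalid accesses fail
# the run (--error-exitcode=1), so dynamic analysis can block a release.
# "./odu_bin" is a placeholder, not the real executable name.
memcheck_cmd() {
    echo "valgrind --leak-check=full --show-leak-kinds=all --error-exitcode=1 $1"
}

# e.g. memcheck_cmd ./odu_bin
```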