
This page provides an overview of the processing involved for this use case.

Training Process Overview

The high-level training process can be seen in the next three figures.

RAN data will be collected and stored in a data repository for training purposes. Training an ML model requires a significant amount of data, and prior to extensive 5G rollout, E2 data will be lacking. Thus, in the Release B timeframe we will use 4G X2 data for training. See section XXXX for the X2 data equivalents to be used in training.

Once a data lake sufficient to the task has been collected, the data will be divided into two sets: one used for training the model, the other used for testing the trained model.

Figure 1 – Data Collection and Prep for Initial Training
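As a point of reference, the split described above can be done with standard tooling. The following is a minimal sketch, assuming the collected X2 measurements have been exported to a CSV file; the file names and the 80/20 split ratio are illustrative assumptions, not requirements of the use case.

```python
# Sketch: divide the collected RAN data into a training set and a test set.
# File names and the 80/20 ratio are assumptions for illustration only.
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("x2_measurements.csv")   # hypothetical export of collected X2 data

# Hold out 20% of the samples for evaluating the trained model later.
train_set, test_set = train_test_split(data, test_size=0.2, random_state=42)

train_set.to_csv("qoe_training_set.csv", index=False)
test_set.to_csv("qoe_test_set.csv", index=False)
```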

Once the data has been divided, training can commence. A Training Actor will feed into the QoE Predictor Model the data inputs (UeRf)Actual, Serving, (CellPerf)Actual, Serving and (UePerf)Actual, Serving (specifically, the PDCP_throughput reported for the UE connected to the serving cell). The QoE Predictor Model in “learning mode” will use those inputs to learn how to predict the third quantity given the first two.

Figure 2 – Release B Prediction Training
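A minimal sketch of this “learning mode” step is shown below. It assumes the UeRf and CellPerf measurements have been flattened into numeric feature columns and that the UE PDCP_throughput is the label column; the column names and the choice of a gradient-boosting regressor are assumptions of this sketch, not mandated by the use case.

```python
# Sketch: fit a regressor that maps serving-cell UeRf and CellPerf measurements
# to the UE's reported PDCP throughput. Column names and the model family are
# assumptions for illustration only.
import joblib
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

train_set = pd.read_csv("qoe_training_set.csv")

feature_cols = ["ue_rsrp", "ue_rsrq", "ue_sinr",        # (UeRf)Actual, Serving
                "cell_prb_usage", "cell_active_ues"]    # (CellPerf)Actual, Serving
label_col = "ue_pdcp_throughput"                        # (UePerf)Actual, Serving

model = GradientBoostingRegressor()
model.fit(train_set[feature_cols], train_set[label_col])

joblib.dump(model, "qoe_predictor_model.joblib")
```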

After training has taken place, the test data set can be used to evaluate the QoE Predictor Model. The Training Actor will exercise the Model as it will be exercised at runtime, feeding into the QoE Predictor Model in “execution mode” only (UeRf)Actual, Serving and (CellPerf)Actual, Serving, and receiving in return a prediction for (UePerf)Current, Serving (specifically, the PDCP_throughput for the UE connected to the serving cell). The Training Actor will compare this predicted value to the actual (UePerf)Actual, Serving UE PDCP_throughput value to evaluate and score the Model.

Figure 3 – Release B Prediction Evaluation
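A minimal sketch of this evaluation step follows. The scoring metrics chosen here (mean absolute error and R²) are assumptions; any suitable regression metric could serve as the reported score.

```python
# Sketch: exercise the trained model in "execution mode" on the held-out test
# set and score its predictions against the actual UE PDCP throughput.
# Column names and the chosen metrics are assumptions for illustration only.
import joblib
import pandas as pd
from sklearn.metrics import mean_absolute_error, r2_score

model = joblib.load("qoe_predictor_model.joblib")
test_set = pd.read_csv("qoe_test_set.csv")

feature_cols = ["ue_rsrp", "ue_rsrq", "ue_sinr",
                "cell_prb_usage", "cell_active_ues"]
label_col = "ue_pdcp_throughput"

predicted = model.predict(test_set[feature_cols])
actual = test_set[label_col]

# Report the score so a human can decide whether the model is good enough to deploy.
print("MAE:", mean_absolute_error(actual, predicted))
print("R^2:", r2_score(actual, predicted))
```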

The Training Actor will report this score in such a way that a human can determine whether the Model is sufficiently performant to be put into the execution environment.

As described earlier, in Release C a second dimension of prediction will be added: temporal prediction. With this dimension, additional training will be needed. Specifically, the QoE Predictor Model will need to be modified to return a prediction for (UePerf)Future, Serving (specifically, the PDCP_throughput for the UE connected to the serving cell, time-shifted into the future by some window, such as 10 seconds) based on the current values (i.e., the values 10 seconds earlier) of (UeRf)Actual, Serving and (CellPerf)Actual, Serving.

Figure 4 – Release C Prediction Training
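Preparing the training data for this temporal dimension amounts to pairing each sample's current features with the throughput the same UE reports one prediction window later. The sketch below assumes a 10-second window, a 1-second reporting period, and per-UE timestamped rows; all column names are illustrative.

```python
# Sketch: build time-shifted labels for Release C temporal prediction.
# The 10-second window, 1-second reporting period, and column names are assumptions.
import pandas as pd

data = pd.read_csv("qoe_training_set.csv")
PREDICTION_WINDOW_S = 10   # how far into the future to predict
REPORT_PERIOD_S = 1        # assumed measurement reporting period
shift_rows = PREDICTION_WINDOW_S // REPORT_PERIOD_S

# For each UE, the label becomes the throughput reported shift_rows samples later.
data = data.sort_values(["ue_id", "timestamp"])
data["ue_pdcp_throughput_future"] = (
    data.groupby("ue_id")["ue_pdcp_throughput"].shift(-shift_rows)
)

# Drop trailing samples that have no future measurement to learn from.
data = data.dropna(subset=["ue_pdcp_throughput_future"])
data.to_csv("qoe_training_set_release_c.csv", index=False)
```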

The Training Actor will be extended to evaluate this second dimension of prediction, comparing the prediction of UePerf for the serving cell at the end of the prediction window with the actual UePerf measurement for that serving cell at the same time.

Figure 5 – Release C Prediction Evaluation
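The Release C evaluation then mirrors the Release B one, except that the prediction is scored against the throughput actually measured at the end of the prediction window. A minimal sketch, assuming the test set has been prepared with the same future-label construction shown above and that the Release C model has been saved under an illustrative file name:

```python
# Sketch: score the Release C temporal prediction against the throughput
# actually measured at the end of the prediction window.
# File names, column names, and the metric are assumptions for illustration only.
import joblib
import pandas as pd
from sklearn.metrics import mean_absolute_error

model = joblib.load("qoe_predictor_model_release_c.joblib")
test_set = pd.read_csv("qoe_test_set_release_c.csv")

feature_cols = ["ue_rsrp", "ue_rsrq", "ue_sinr",
                "cell_prb_usage", "cell_active_ues"]

predicted_future = model.predict(test_set[feature_cols])
actual_future = test_set["ue_pdcp_throughput_future"]

print("Release C MAE:", mean_absolute_error(actual_future, predicted_future))
```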

Execution Time Processing Overview

The following diagram shows the post-deployment processing of the xApps and their interactions with the Near-RT RIC Platform.

Figure 6 – Data Movement Between Functional Components

Note that this organization of functionality into xApps also allows for future extensions. For example, if another xApp exists that can predict the travel path of a given UE, then the QP xApp could use that travel path information to produce a better predicted (QoE)Future. The TS xApp could also use the travel path information to determine a “cost” for each neighbor cell as part of intervention handling, so as to minimize future handovers.
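To make the runtime role of the QP xApp concrete, the following sketch shows a prediction step that takes the latest UeRf/CellPerf measurements for the serving cell and candidate neighbor cells and returns a predicted throughput per cell. This is an illustration only: the real QP xApp exchanges these inputs and outputs over RMR with the TS xApp and the Near-RT RIC Platform, which is not shown here, and the function, file, and column names are assumptions.

```python
# Sketch: QP-style runtime prediction over the serving cell and candidate
# neighbor cells. Names and measurement values are assumptions for illustration.
import joblib
import pandas as pd

model = joblib.load("qoe_predictor_model.joblib")

def predict_qoe(cell_measurements: dict) -> dict:
    """Map each cell_id to a predicted UE PDCP throughput for that cell."""
    predictions = {}
    for cell_id, features in cell_measurements.items():
        frame = pd.DataFrame([features])          # one row of UeRf/CellPerf features
        predictions[cell_id] = float(model.predict(frame)[0])
    return predictions

# Example: compare the serving cell against one neighbor cell.
sample = {
    "serving_cell": {"ue_rsrp": -95, "ue_rsrq": -11, "ue_sinr": 12,
                     "cell_prb_usage": 0.60, "cell_active_ues": 40},
    "neighbor_cell": {"ue_rsrp": -100, "ue_rsrq": -13, "ue_sinr": 9,
                      "cell_prb_usage": 0.30, "cell_active_ues": 15},
}
print(predict_qoe(sample))
```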
