# 6 July 2020 - Daily Report
## Summary
----
### Expected Outcome :
* Find information about ML model deployment in the RIC
* Study the O-RAN AI/ML workflow
___
### Outcome :
* Got a better understanding of AI/ML utilization in O-RAN
* Got a better understanding of the Acumos AI platform
### Further plan:
* Read Acumos documentation
___
### Daily Log
#### 1. Study the AI/ML workflow description and requirements resource from Kevin <mark> 09:00 </mark>
* Write the ML model deployment scenario
* Write about the ML model lifecycle implementation
#### 2. Read the Acumos paper from Kevin <mark> 14:10 </mark>
---
## Study Notes
### AI/ML workflow description and requirements
::: info
Reference :
* https://www.o-ran.org/s/ORAN-WG2AIMLv0100.pdf
:::

#### Types of Machine Learning Algorithm
##### Supervised learning
In supervised learning, the input data is labeled so that each sample has its corresponding correct output value. Supervised learning is a machine learning task that aims to learn a mapping from the input to the output from a given labeled data set. Some supervised learning algorithms are:
1. Regression
2. Instance-based algorithm
3. Decision Tree Algorithm
4. Support Vector Machines
5. Bayesian Algorithm
6. Ensemble Algorithm
Supervised learning can be further grouped into regression and classification problems; a minimal example of both is sketched below.

The figure above shows the training and host locations within the RIC. The ML training host and the ML model host/actor can be part of either the Non-RT RIC or the Near-RT RIC.
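As a quick illustration of the supervised setting (not taken from the O-RAN document), the sketch below fits one regression model and one classification model on synthetic labeled data using scikit-learn:

```python
# Minimal supervised-learning sketch with synthetic labeled data (illustrative only).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # input features
y_reg = X @ np.array([1.0, -2.0, 0.5])        # continuous target  -> regression
y_cls = (y_reg > 0).astype(int)               # binary label       -> classification

reg = LinearRegression().fit(X, y_reg)        # regression problem
clf = DecisionTreeClassifier().fit(X, y_cls)  # classification problem

print(reg.predict(X[:2]))                     # continuous predictions
print(clf.predict(X[:2]))                     # predicted class labels
```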
##### Unsupervised learning
Unsupervised learning does not use labels in its input data. It is a machine learning task that aims to learn a function describing a hidden structure in unlabeled data. Examples of unsupervised learning are k-means clustering and principal component analysis (PCA).

The figure above shows the host locations in the RIC. As before, the ML training host and the ML model host/actor can be part of either the Non-RT RIC or the Near-RT RIC.
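For contrast, here is a minimal unsupervised sketch (again illustrative, not from the document) that applies k-means clustering and PCA to unlabeled data:

```python
# Minimal unsupervised-learning sketch: no labels are used (illustrative only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                               # unlabeled data

clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X)   # k-means clustering
X_2d = PCA(n_components=2).fit_transform(X)                 # PCA to 2 components

print(clusters[:10])   # cluster index assigned to each sample
print(X_2d.shape)      # (200, 2)
```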
##### Reinforcement learning
Reinforcement learning is goal-oriented learning based on interaction with an environment. It aims to optimize a long-term objective by interacting with the environment through a trial-and-error process. There are several RL algorithms, such as:
1. Q-learning
2. Multi-armed bandit learning
3. Deep RL

The figure above shows the training and host locations within the RIC. We notice a difference here: the Non-RT RIC and the Near-RT RIC are connected using both the O1 and A1 interfaces, instead of only the A1 interface as in the previous learning types. The ML training host and ML model host/actor shall be co-located, either in the Non-RT RIC or in the Near-RT RIC.
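To make the trial-and-error idea concrete, here is a toy tabular Q-learning sketch on a 5-state chain; the environment, rewards, and hyperparameters are made up for illustration and are not from the O-RAN document:

```python
# Tabular Q-learning sketch on a toy 5-state chain (illustrative only).
# The agent learns by trial and error to walk right and reach the goal state.
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

for _ in range(2000):               # episodes of interaction with the environment
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0          # reward only at the goal
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # learned greedy policy; expect "right" (1) in non-terminal states
```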
#### AI/ML functionalities into O-RAN control loops

The figure above shows the three different control loops in O-RAN that can be assisted by AI/ML. The time scale of an O-RAN control loop depends on what is being controlled, i.e. system parameters, resources, or radio resource management (RRM) algorithm parameters. The figure also shows each control loop with its corresponding time scale.
The location of the ML model training and the ML model inference for a use case depends on:
* computation complexity
* availability and quantity of data to be exchanged
* response time requirements
* type of ML model
The ML model training and the ML model inference can be located in three places:
1. Non-RT RIC
2. Near-RT RIC
3. O-DU
**In the first phase of O-RAN, ML model training will be considered in the Non-RT RIC and ML model inference will be considered in loops 2 and 3.** For loop 2, the ML inference runs in the Near-RT RIC; for loop 3, it runs in the Non-RT RIC. (In loop 1, ML inference would run in an O-DU.)
### AI/ML general procedure and interface framework

Three deployment scenarios are considered for the ML architecture/framework in the O-RAN architecture:
1. Scenario 1.1: Non-RT RIC acts as both the ML training and inference host
2. Scenario 1.2: Non-RT RIC acts as the ML training host and the Near-RT RIC as the ML inference host
3. Scenario 1.3: Non-RT RIC acts as the ML training host and the O-CU/O-DU as the ML inference host
If the model is reinforcement learning based, the ML training host and the ML inference host should be co-located, as summarised in the sketch below.
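The three scenarios and the co-location constraint for reinforcement learning can be restated as a small lookup; the sketch below only encodes the list above (dictionary keys and the function name are illustrative):

```python
# Hypothetical summary of the three deployment scenarios described above.
# Host names are plain strings used only for illustration.
SCENARIOS = {
    "1.1": {"training_host": "Non-RT RIC", "inference_host": "Non-RT RIC"},
    "1.2": {"training_host": "Non-RT RIC", "inference_host": "Near-RT RIC"},
    "1.3": {"training_host": "Non-RT RIC", "inference_host": "O-CU/O-DU"},
}

def allowed_for_reinforcement_learning(scenario: str) -> bool:
    """RL-based models require training and inference hosts to be co-located."""
    s = SCENARIOS[scenario]
    return s["training_host"] == s["inference_host"]

print({name: allowed_for_reinforcement_learning(name) for name in SCENARIOS})
# {'1.1': True, '1.2': False, '1.3': False}
```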

### ML Model lifecycle Implementation example

1. The ML modeler uses a designer environment, along with ML toolkits, to create the initial ML model
2. The initial model is sent to the training host for training
3. The appropriate data sets are collected from the Near-RT RIC, O-CU and O-DU into a data lake and passed to the ML training host
4. The trained model/sub-models are uploaded to the ML designer catalog (such as Acumos), where the final ML model is composed
5. The ML model is published to the Non-RT RIC along with the associated license and metadata
6. The Non-RT RIC creates a containerized ML application containing the necessary model artifacts (when using Acumos AI, the ML model's container is created in the Acumos catalog itself); a minimal sketch of this step follows the list
7. The Non-RT RIC deploys the ML application to the Near-RT RIC, O-DU and O-RU using the O1 interface. Policies are also set using the A1 interface.
8. PM data is sent back to the ML training host from the Near-RT RIC, O-DU and O-RU for retraining.
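As a rough sketch of step 6 only, assuming the model artifact is a pickled scikit-learn model exposed over REST with Flask (the file name, route, and port are hypothetical; in an Acumos-based flow this wrapping is generated by the platform):

```python
# serve_model.py - hypothetical microservice wrapping a trained model artifact.
# In practice this script plus the model file would be baked into a container image.
import pickle
from flask import Flask, jsonify, request

app = Flask(__name__)
with open("model.pkl", "rb") as f:     # placeholder artifact produced by the training host
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]       # e.g. [[0.1, 0.2, 0.3]]
    prediction = model.predict(features).tolist()   # run inference on the request payload
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```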
### Deployment Scenarios


### Packaging and Sharing Machine Learning Models via the Acumos AI Open Platform
:::info
References :
**Packaging and Sharing Machine Learning Models via the Acumos AI Open Platform**
by Shuai Zhao, Manoop Talasila, Guy Jacobson, Cristian Borcea, Syed Anwar Aftab and John F Murray.
:::
Machine learning has been growing rapidly in the past few years and has shown its effectiveness in solving a variety of practical problems such as disease detection, language translation, etc. However, in practice it is challenging to integrate ML models into application development environments. Many companies and groups develop machine learning models for their own specific purposes, which leads to duplicated effort and makes reuse impossible.
Acumos is an open platform capable of packaging ML models into portable containerized microservices, which can be easily shared via the platform's catalog and integrated into various business applications. The Acumos platform reduces the technical burden on application developers when applying machine learning models to their business applications. It also allows the reuse of readily available ML microservices in various business domains.
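As a hedged sketch of what on-boarding a model looks like with the Acumos Python client (the `acumos` package): the wrapped function, model name, and on-boarding URL below are placeholders, and the exact endpoints and authentication flow depend on the target Acumos instance and client version.

```python
# Hypothetical on-boarding of a simple model with the Acumos Python client.
from acumos.modeling import Model, List
from acumos.session import AcumosSession

def classify(features: List[float]) -> int:
    """Toy stand-in for a trained model's predict function."""
    return int(sum(features) > 0)

model = Model(classify=classify)            # wraps the function with typed inputs/outputs
session = AcumosSession(
    push_api="https://acumos.example.org/onboarding-app/v2/models",  # placeholder URL
)
session.push(model, "toy-classifier")       # packages and uploads the model to the catalog
# session.dump(model, "toy-classifier", ".")  # or dump the packaged model locally instead
```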
#### Acumos Platform Advantages
1. Acumos offers a one-stop convenient deployment service
On Acumos, modelers and data scientists can freely design and test their models using readily available data. The models can then be stored in the repository as microservices for further applications. Microservices can be chained together to create a complete usable model, which can then be placed in the catalog. Models listed in the catalog are available for the public to use.
2. Acumos offers model-level isolation
In real practice, multiple models often need to be trained over a single dataset for various tasks. For example, given a set of images, there may be multiple tasks, such as face detection, landmark detection, and mode detection. With Acumos, teams can work **independently** on different problems. Model-level isolation also facilitates the reuse and sharing of models with other similar applications without breaching model privacy.
3. Acumos can help distribute robust and runnable models from model experts to common end users
In Acumos, we can treat ML models as black boxes that take well-defined inputs and generate outputs. End users do not require special knowledge of machine learning.
**Comparison between different platforms**

#### Acumos Design and Process

The main flow is divided into three stages:
1. Uploading
This stage is where modelers upload their pre-trained models to the platform. Modelers can build their models using various languages and toolkits.
2. Publishing
When models are uploaded, they are stored in a private area where only the contributor can access them. The contributor then chooses when and how to share the model. A model needs metadata when published; the metadata contains the function description, the input and output formats, and the model category.
3. Predicting
The Acumos platform packs the uploaded model as a microservice in a Docker image, which is ready to be deployed and to perform its function. Docker provides container virtualization and is faster, more agile, and more portable than virtual machines. Consumers can download and directly deploy the Dockerized service to the cloud or to any local hardware that supports Docker. Once deployed, users can send input to the running microservice and receive the output via its RESTful API (a small example follows below).
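Here is a small, hypothetical example of the predicting step, assuming the Dockerized microservice is already running locally and accepts JSON; the endpoint URL and payload shape are placeholders, since a real Acumos microservice defines its own input/output schema:

```python
# Hypothetical client call to a deployed model microservice over its REST API.
import requests

payload = {"features": [[0.1, 0.2, 0.3]]}               # input format is model-specific
resp = requests.post("http://localhost:8080/predict",   # placeholder endpoint
                     json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())   # e.g. {"prediction": [1]}
```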
___
## Conclusion
* In the first phase of O-RAN, ML model training will be considered in the Non-RT RIC, and ML model inference will be considered in the Near-RT RIC or the Non-RT RIC (control loops 2 and 3)
* There are several AI/ML model deployment scenarios in O-RAN
* Acumos is an open platform that helps modelers distribute their models or obtain training data sets
* Acumos also helps users obtain trained models that are ready to be deployed
###### tags: `Daily Report`