# <center>O-RAN Near-Real Time RIC Installation Guide</center>
# Pre-requisites
## System Requirements
- OS: Ubuntu 20.04 LTS (Focal Fossa)
- CPU(s): 2-4 vCPUs
- RAM: 6-16 GB
- Storage: 20-160 GB
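Before proceeding, you can verify that the host or VM meets these requirements with standard Linux utilities (exact output varies between releases):
```bash
# Check the OS release, vCPU count, RAM, and free disk space
lsb_release -d   # expect Ubuntu 20.04 LTS
nproc            # number of available vCPUs
free -h          # total RAM
df -h /          # free space on the root filesystem
```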
# Near-Real Time RIC Installation Procedure
## Step 1: Install and configure an Ubuntu Host Machine/Virtual Machine (VM)
The near-real time RIC can run on a physical host or a VM, whichever you prefer (a VM is recommended if your system is powerful enough to support multiple VMs).
This instruction set assumes the VM or Linux host is already configured with the system requirements specified above.
## Step 2: Install Kubernetes, Docker, and Helm
### Near-Real Time RIC
To use Open AI Cellular, first clone the oaic repository:
```bash
git clone https://github.com/openaicellular/oaic.git
```
`oaic` is the root directory for all purposes. The different OAIC components, such as the near-real time RIC, srsRAN, and the xApps, are linked as submodules in this directory. Updating the submodules provides all the files required to run OAIC.
Check out the latest version of every dependent submodule within the “oaic” repository.
```bash
cd oaic
git submodule update --init --recursive --remote
```
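To confirm that all submodules were fetched, you can list their checked-out commits; this is a quick sanity check using standard git commands:
```bash
# Each submodule should show a commit hash and a path;
# a leading '-' means the submodule was not initialized
git submodule status --recursive
```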
### RIC Kubernetes Cluster Installation
The RIC-Deployment directory contains the deployment scripts and pre-generated Helm charts for each of the RIC components. This repository also contains some "demo" scripts which can be run after the installation is complete.
```bash
cd RIC-Deployment/tools/k8s/bin
```
Run the script that generates the Kubernetes stack installation script:
```bash
./gen-cloud-init.sh
```
Executing this command outputs a shell script called `k8s-1node-cloud-init-k_1_16-h_2_17-d_cur.sh`. The file name indicates that we are installing Kubernetes v1.16 (`k_1_16`), Helm v2.17 (`h_2_17`), and the current version of Docker (`d_cur`).
Executing the generated script `k8s-1node-cloud-init-k_1_16-h_2_17-d_cur.sh` installs Kubernetes, Docker, and Helm with the versions specified in `k8s/etc/infra.rc`. It also installs some pods that support cluster creation, service creation, and internetworking between services. Running this script will replace any existing installation of Docker, Kubernetes, and Helm on the VM. The script reboots the machine upon successful completion; this takes approximately 15-20 minutes.
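If you want to confirm which versions the script will install before running it, you can inspect the infrastructure configuration file (the relative path below is assumed from the current `tools/k8s/bin` directory):
```bash
# Kubernetes, Helm, and Docker version pins (path assumed from the repository layout)
cat ../etc/infra.rc
```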
```bash
sudo ./k8s-1node-cloud-init-k_1_16-h_2_17-d_cur.sh
```
Once the machine is back up, check that all the pods in the newly installed Kubernetes cluster are in the `Running` state using:
```bash
sudo kubectl get pods -A
```
There should be a total of 9 pods up and running in the cluster.
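Pods can take a few minutes to reach the Running state after the reboot. A simple way to watch for stragglers (a small sketch using standard kubectl and grep) is:
```bash
# List pods that are not yet Running; only the header line should remain once the cluster is ready
sudo kubectl get pods -A | grep -v Running
```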
### One-time setup for InfluxDB
Once the Kubernetes setup is done, we have to create a PersistentVolume through a storage class for the InfluxDB database. The following one-time procedure should be performed before deploying InfluxDB in the `ricplt` namespace.
**Persistent Volume:**
First, we need to check if the `ricinfra` namespace exists:
```bash
sudo kubectl get ns ricinfra
```
If the namespace doesn’t exist, then create it using:
```bash
sudo kubectl create ns ricinfra
```
The next three commands install the NFS server provisioner through Helm in the `ricinfra` namespace, mark the resulting `nfs` storage class as the cluster default, and install the `nfs-common` package on the host system:
```bash
sudo helm install stable/nfs-server-provisioner --namespace ricinfra --name nfs-release-1
sudo kubectl patch storageclass nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
sudo apt install nfs-common
```
The `nfs-common` package provides the NFS client utilities that allow file sharing between systems on a local area network.
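To verify that the patch took effect, you can list the storage classes; assuming the chart created a storage class named `nfs`, it should be marked as the default:
```bash
# The nfs storage class should show "(default)" next to its name
sudo kubectl get storageclass
```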
> **Note:** When the RIC platform is undeployed, the `ricinfra` namespace is also removed, so you will need to run this one-time setup procedure again before re-deploying the RIC.
## Step 3: Build Modified E2 Docker Image
### Pre-requisites
You need a local Docker registry to host Docker images. You can create one (super-user permissions are required) using:
```bash
sudo docker run -d -p 5001:5000 --restart=always --name ric registry:2
```
Now you can push or pull images using `docker push localhost:5001/<image_name>:<image_tag>` or `docker pull localhost:5001/<image_name>:<image_tag>`.
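To check that the registry container is reachable, you can query its catalog endpoint (part of the Docker Registry HTTP API v2); the repository list will be empty until an image is pushed:
```bash
# List the repositories currently stored in the local registry
curl http://localhost:5001/v2/_catalog
```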
### Creating Docker image
Navigate to the `ric-plt-e2` directory.
```bash
cd ../../../..
cd ric-plt-e2
```
The code in this repository needs to be packaged as a Docker container. We use the existing Dockerfile in the `RIC-E2-TERMINATION` directory to do this. Execute the following commands in order:
```bash
cd RIC-E2-TERMINATION
sudo docker build -f Dockerfile -t localhost:5001/ric-plt-e2:5.5.0 .
sudo docker push localhost:5001/ric-plt-e2:5.5.0
cd ../../
```
This image can be used when deploying the near-real time RIC Kubernetes Cluster in the next step.
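You can confirm that the image was pushed successfully by querying the local registry for its tags and listing the local images (a quick sanity check):
```bash
# The 5.5.0 tag should be listed for the pushed image
curl http://localhost:5001/v2/ric-plt-e2/tags/list
# The image should also appear locally
sudo docker images localhost:5001/ric-plt-e2
```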
### Commands related to E2 Termination
- View E2 Termination logs: `kubectl logs -f -n ricplt -l app=ricplt-e2term-alpha`
- View E2 Manager logs: `kubectl logs -f -n ricplt -l app=ricplt-e2mgr`
- Get the IP of *service-ricplt-e2term-sctp-alpha*: `kubectl get svc -n ricplt --field-selector metadata.name=service-ricplt-e2term-sctp-alpha -o jsonpath='{.items[0].spec.clusterIP}'`
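The SCTP service IP is needed later when pointing a base station at the RIC. A convenient pattern (a sketch; the variable name is illustrative) is to capture it in a shell variable:
```bash
# Store the E2 Termination SCTP ClusterIP for later use (e.g. when configuring the eNB/gNB)
export E2TERM_SCTP_IP=$(sudo kubectl get svc -n ricplt --field-selector metadata.name=service-ricplt-e2term-sctp-alpha -o jsonpath='{.items[0].spec.clusterIP}')
echo $E2TERM_SCTP_IP
```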
## Step 4: Deploy the Near-Real Time RIC
Once the Kubernetes cluster is up and running, it is time to deploy the near-real time RIC platform.
```bash
cd RIC-Deployment/bin
sudo ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe_oran_e_release_modified_e2.yaml
```
This command deploys the near-real time RIC according to the recipe stored in the `RIC-Deployment/RECIPE_EXAMPLE/PLATFORM/` directory.
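Deployment takes several minutes. Once it completes, you can check that the RIC platform pods have come up in the `ricplt` namespace (pod names and counts may differ slightly between releases):
```bash
# All RIC platform pods should eventually reach the Running state
sudo kubectl get pods -n ricplt
```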
### Structure of the “RIC-Deployment” Directory
The scripts in the `./bin` directory are one-click RIC deployment and undeployment scripts; they call the corresponding deployment/undeployment scripts in each submodule directory. Within each submodule directory, `./bin` contains the binary and script files and `./helm` contains the Helm charts. For the remaining non-submodule directories, please refer to their README.md files for more details.
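As a quick orientation, you can list the one-click scripts and the platform recipes yourself (a sketch; run it from the `oaic` root directory):
```bash
# Deployment/undeployment entry points
ls RIC-Deployment/bin
# Platform recipes used by the deploy script
ls RIC-Deployment/RECIPE_EXAMPLE/PLATFORM
```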