# OAIC with srsRAN and O-RAN OSC near Real-Time RIC
Please refer to [OAIC web page](https://openaicellular.github.io/oaic/index.html).
OAIC (Open AI Cellular) currently uses the O-RAN OSC near-Real-Time RIC Release E.
In this document, we only install the "ricplt" part of the OSC near-RT RIC. If you want to install "ricaux" as well, please refer to the corresponding documentation.
## Getting started
OAIC is an open-source effort led by a consortium of academic institutions to provide a fully open-source software architecture, library, and toolset that encompass both the AI controllers (OAIC-C) and an AI testing framework (OAIC-T). This project will develop a software infrastructure that spurs research and development on AI-enabled cellular radio networks. We leverage existing open-source 5G software efforts to create a framework which integrates AI controllers into 5G processing blocks and extends the scope of the Open Radio Access Network (O-RAN) framework, the industry standard for future RANs.
To start using Open AI Cellular, first clone the oaic repository:
```
git clone https://github.com/openaicellular/oaic.git
```
"oaic" is the root directory for all purposes. Different components of OAIC like the Near-real time RIC, srsRAN and xApps are linked as submodules in this directory. Performing an update of the submodules will provide all the files required to run OAIC.
Check out the latest version of every dependent submodule within the “oaic” repository.
```
cd oaic
git submodule update --init --recursive --remote
```
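To verify that the submodules were fetched correctly, you can list their status; each entry should show a commit hash and a path with no warnings:
```
git submodule status
```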
## Install Kubernetes, Docker, and Helm
### RIC Kubernetes Cluster Installation
The RIC-Deployment directory contains the deployment scripts and pre-generated helm charts for each of the RIC components. This repository also contains some “demo” scripts which can be run after complete installation.
```
cd RIC-Deployment/tools/k8s/bin
```
This directory contains tools for generating a simple script that sets up a one-node Kubernetes cluster (OSC also supports a three-node master/worker Kubernetes configuration, but we do not cover that here).
The scripts automatically read in parameters (version specifications, settings for private containers/registries) from the following files:
- "k8s/etc/infra.rc": specifies the Docker host, Kubernetes, and Kubernetes CNI (Container Network Interface) versions. If left unspecified, the default version is installed.
- "k8s/etc/env.rc": normally no change is needed in this file. It can specify special/custom Kubernetes cluster components, such as a private Docker registry with self-signed certificates, or hostnames that can only be resolved via private /etc/hosts entries.
- "etc/openstack.rc": (relevant only for OpenStack VMs) if the Kubernetes cluster is deployed on OpenStack VMs, this file specifies parameters for accessing the APIs of the OpenStack installation.
For a simple installation there is no need to modify any of the above files; they simply give us the flexibility to define a custom Kubernetes environment if we ever need to. Run the script below to generate the Kubernetes stack install script. It will output a shell script called k8s-1node-cloud-init-k_1_16-h_2_17-d_cur.sh; the file name indicates that we are installing Kubernetes v1.16 (k_1_16), Helm v2.17 (h_2_17), and the latest version of Docker (d_cur).
```
./gen-cloud-init.sh
```
Executing the generated script k8s-1node-cloud-init-k_1_16-h_2_17-d_cur.sh will install Kubernetes, Docker, and Helm with the versions specified in k8s/etc/infra.rc. It also installs some pods that support cluster creation, service creation, and internetworking between services. Running this script will replace any existing installation of the Docker host, Kubernetes, and Helm on the VM. The script will reboot the machine upon successful completion. This will take some time (approx. 15-20 mins).
```
sudo ./k8s-1node-cloud-init-k_1_16-h_2_17-d_cur.sh
```
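After the machine reboots, a quick sanity check is to confirm that the node is Ready and that the system pods came up:
```
sudo kubectl get nodes
sudo kubectl get pods --all-namespaces
```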
### One-time setup for InfluxDB
Once the Kubernetes setup is done, we have to create a PersistentVolume through a storage class for the InfluxDB database. The following one-time process should be completed before deploying InfluxDB in the "ricplt" namespace.
Persistent Volume:
First we need to check if the “ricinfra” namespace exists.
```
sudo kubectl get ns ricinfra
```
If the namespace doesn’t exist, then create it using:
```
sudo kubectl create ns ricinfra
```
The next three commands install the NFS server provisioner through Helm in the "ricinfra" namespace, mark the resulting "nfs" storage class as the default, and install the nfs-common package on the host system:
```
sudo helm install stable/nfs-server-provisioner --namespace ricinfra --name nfs-release-1
sudo kubectl patch storageclass nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
sudo apt install nfs-common
```
The nfs-common package basically allows file sharing between systems residing on a local area network.
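After these steps, the "nfs" storage class should exist and be marked as the default. A quick check:
```
sudo kubectl get storageclass
```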
### Build Modified E2 docker Image
Pre-requisites:
A local Docker registry to host Docker images. You can create one with the following command (super-user permissions are required):
```
sudo docker run -d -p 5001:5000 --restart=always --name ric registry:2
```
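To confirm the registry is running, you can query the standard Docker Registry v2 API; the catalog will be empty until an image is pushed:
```
curl http://localhost:5001/v2/_catalog
```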
#### Creating Docker image
Navigate to the ric-plt-e2 directory.
```
cd ../../../..
cd ric-plt-e2
```
The code in this repo needs to be packaged as a Docker container. We make use of the existing Dockerfile in RIC-E2-TERMINATION to do this. Execute the following commands in the given order:
```
cd RIC-E2-TERMINATION
sudo docker build -f Dockerfile -t localhost:5001/ric-plt-e2:5.5.0 .
sudo docker push localhost:5001/ric-plt-e2:5.5.0
cd ../../
```
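If the build and push succeeded, the local registry (assumed to be listening on port 5001 as set up above) should now list the image tag:
```
curl http://localhost:5001/v2/ric-plt-e2/tags/list
```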
This image can be used when deploying the near-real time RIC Kubernetes Cluster in the next step.
When the RIC platform is deployed, you will have the modified E2 Termination running on the Kubernetes cluster. The pod will be called deployment-ricplt-e2term-alpha, and three services related to E2 Termination will be created:
- "service-ricplt-e2term-prometheus-alpha": communicates with the VES-Prometheus Adapter (VESPA) pod to send data to the SMO.
- "service-ricplt-e2term-rmr-alpha": RMR service that manages the exchange of messages between E2 Termination and other components in the near-real time RIC.
- "service-ricplt-e2term-sctp-alpha": accepts SCTP connections from the RAN and exchanges E2 messages with the RAN. Note that this service is configured as a NodePort (accepts connections from outside the cluster) while the other two are configured as ClusterIP (networking only within the cluster).
### Deploy the near-Real Time RIC
Once the Kubernetes cluster is deployed, it is time for us to deploy the near-real time RIC platform.
```
cd RIC-Deployment/bin
sudo ./deploy-ric-platform -f ../RECIPE_EXAMPLE/PLATFORM/example_recipe_oran_e_release_modified_e2.yaml
```
This command deploys the near-real time RIC according to the RECIPE stored in RIC-Deployment/RECIPE_EXAMPLE/PLATFORM/ directory.
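Deployment takes a few minutes. Before moving on, it is worth checking that all pods in the "ricplt" namespace (including deployment-ricplt-e2term-alpha) reach the Running state and that the three E2 Termination services described earlier are present:
```
sudo kubectl get pods -n ricplt
sudo kubectl get svc -n ricplt
```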
## srsRAN with E2 Agent Installation Guide
srsRAN is a 4G/5G software radio suite developed by SRS. The version used here is a modified srsRAN 21.10 that incorporates POWDER's E2-agent-enabled srsLTE work.
See the srsRAN project pages for information, guides, and project news.
The srsRAN suite includes:
- srsUE: a full-stack SDR 4G/5G-NSA UE application (5G-SA coming soon)
- srsENB: a full-stack SDR 4G/5G-NSA eNodeB application (5G-SA coming soon)
- srsEPC: a light-weight 4G core network implementation with MME, HSS and S/P-GW
### Dependencies Installation
```
sudo apt-get install build-essential cmake libfftw3-dev libmbedtls-dev libboost-program-options-dev libconfig++-dev libsctp-dev libtool autoconf libboost-system-dev libboost-test-dev libboost-thread-dev libqwt-qt5-dev qtbase5-dev
```
### ZeroMQ Installation
The srsRAN software suite includes virtual radios that use the ZeroMQ networking library to transfer radio samples between applications. This approach is very useful for development, testing, debugging, CI/CD, and for teaching and demonstrations. Natively, ZeroMQ with srsRAN supports only a single eNB/gNB and a single UE, but this can be extended to support multiple UEs using GNU Radio.
Package Installation:
```
sudo apt-get install libzmq3-dev
```
### UHD 4.1 Installation
Using package manager:
```
sudo add-apt-repository ppa:ettusresearch/uhd
sudo apt-get update
sudo apt-get install libuhd-dev libuhd4.1.0 uhd-host
```
### asn1c Compiler Installation
We will be using the modified asn1c compiler (for RAN and CN) hosted by Open Air Interface (OAI):
```
cd ../..
sudo apt install libtool autoconf
git clone https://gitlab.eurecom.fr/oai/asn1c.git
cd asn1c
git checkout velichkov_s1ap_plus_option_group
autoreconf -iv
./configure
make -j`nproc`
sudo make install
sudo ldconfig
cd ..
```
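A simple way to confirm that the compiler was installed and is on your PATH:
```
which asn1c
```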
## srsRAN with E2 agent Installation
Installation from Source:
```
cd srsRAN-e2
mkdir build
export SRS=`realpath .`
cd build
cmake ../ -DCMAKE_BUILD_TYPE=RelWithDebInfo \
-DRIC_GENERATED_E2AP_BINDING_DIR=${SRS}/e2_bindings/E2AP-v01.01 \
-DRIC_GENERATED_E2SM_KPM_BINDING_DIR=${SRS}/e2_bindings/E2SM-KPM \
-DRIC_GENERATED_E2SM_GNB_NRT_BINDING_DIR=${SRS}/e2_bindings/E2SM-GNB-NRT
make -j`nproc`
sudo make install
sudo ldconfig
sudo ./srsran_install_configs.sh user
```
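If the build and install completed without errors, the srsRAN binaries should be on your PATH, and the user configuration files should have been copied to ~/.config/srsran (the default location used by srsran_install_configs.sh; adjust the path if your setup differs):
```
which srsepc srsenb srsue
ls ~/.config/srsran
```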
## Setup your own 5G Network
You need to run each component in a separate terminal.
### Running the EPC
Before we start the EPC, we need to create a separate network namespace for the UE since all components are running on the same machine.
```
sudo ip netns add ue1
sudo ip netns list
```
Now, open a new terminal window and run the srsRAN EPC:
```
sudo srsepc
```
### en-gNB and UE in ZeroMQ Mode
Before we proceed further it would be worthwhile to open the logs of E2 Manager, E2 Termination, Subscription Manager and Application Manager to trace the flow of messages.
Open a new terminal window to run the srsRAN en-gNB.
Before we start the en-gNB, we need to get the current machine's IP address and the IP address of the E2 Termination service at the near-RT RIC:
```
export E2NODE_IP=`hostname -I | cut -f1 -d' '`
export E2NODE_PORT=5006
export E2TERM_IP=`sudo kubectl get svc -n ricplt --field-selector metadata.name=service-ricplt-e2term-sctp-alpha -o jsonpath='{.items[0].spec.clusterIP}'`
```
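Before launching the en-gNB, confirm that the variables were populated; an empty E2TERM_IP usually means the E2 Termination service is not up yet:
```
echo "E2NODE_IP=${E2NODE_IP} E2NODE_PORT=${E2NODE_PORT} E2TERM_IP=${E2TERM_IP}"
```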
If you get the error message "srsenb: error while loading shared libraries: libsrslte_rf.so: cannot open shared object file: No such file or directory", run sudo ldconfig again after sudo make install.
#### srsENB in ZeroMQ mode
```
sudo srsenb --enb.n_prb=50 --enb.name=enb1 --enb.enb_id=0x19B \
--rf.device_name=zmq --rf.device_args="fail_on_disconnect=true,tx_port0=tcp://*:2000,rx_port0=tcp://localhost:2001,tx_port1=tcp://*:2100,rx_port1=tcp://localhost:2101,id=enb,base_srate=23.04e6" \
--ric.agent.remote_ipv4_addr=${E2TERM_IP} --log.all_level=warn --ric.agent.log_level=debug --log.filename=stdout --ric.agent.local_ipv4_addr=${E2NODE_IP} --ric.agent.local_port=${E2NODE_PORT}
```
Once the en-gNB is up and successfully connected to the near-RT RIC, you will see E2 Setup and E2 Setup Response messages on the console. You will also see "RIC Connection Initialized" and "RIC state established" messages.
#### srsUE in ZeroMQ mode
In a new terminal on the same machine, run the srsUE (this command uses the default config file). The message "RRC NR reconfiguration successful" confirms that the UE has connected to the NR cell, which will be used for the data link, while the LTE cell is used for control messaging:
```
sudo srsue --gw.netns=ue1
```
Once the UE connects successfully to the network, the UE will be assigned an IP.
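You can confirm the assigned address from within the UE's network namespace (the exact TUN interface name depends on the srsUE configuration):
```
sudo ip netns exec ue1 ip addr
```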
## Exchanging Traffic
We outline testing the network through ping and iperf.
### PING
This is the simplest way to test the network; it checks whether or not the UE and the core can successfully communicate. (Run in another terminal window.)
#### Uplink
When using ZeroMQ, the ping command should be executed in a new terminal from within the UE's network namespace:
```
sudo ip netns exec ue1 ping 172.16.0.1
```
#### Downlink
For downlink, run the following in a new terminal:
```
sudo ping 172.16.0.2
```
### iPerf3
In this scenario, the client runs on the UE side and the server on the network side (core). UDP traffic will be generated at 10 Mbps for 60 seconds. It is important to start the server first, and then the client. You may need to install iperf3 in advance.
#### Network Side
```
iperf3 -s -i 1
```
#### UE-Side
Again, since we are using ZeroMQ, the iperf client should be run from within the UE's network namespace.
```
sudo ip netns exec ue1 iperf3 -c 172.16.0.1 -b 10M -i 1 -t 60
```
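If you also want to measure the downlink, the same client command can be run with iperf3's reverse flag (-R), which makes the server transmit and the client receive:
```
sudo ip netns exec ue1 iperf3 -c 172.16.0.1 -b 10M -i 1 -t 60 -R
```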