# Near RT RIC Installation Steps (D-Release)
###### tags: `Construction`
## Prerequisites
Official guideline suggestion:
* OS: Ubuntu 18.04 LTS (Bionic Beaver)
* CPU(s): 8
* RAM: 32 GB
* Storage: 160 GB

Actual setup:
* OS: Ubuntu 18.04 LTS (Bionic Beaver)
* CPU(s): 12
* RAM: 32 GB
* Storage: 160 GB
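The host resources can be checked quickly with standard Linux tools; this is an optional sketch, not part of the official guide:

```shell=
# Optional sketch: check that the host meets the suggested minimums
cpus=$(nproc)
ram_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
ram_gb=$(( ram_kb / 1024 / 1024 ))
echo "CPUs: ${cpus}, RAM: ${ram_gb} GB"
```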
The Near RT RIC cluster installation consists of two main parts:
1. Near RT RIC Platform
2. Near RT RIC Applications
## Near RT RIC Platform
Install the required packages
```shell=
# Update the package lists
$ sudo apt update
# Install git, vim, net-tools, and screen
$ sudo apt install git vim net-tools screen

# Install helm 2.17.0
# Download the install script from Helm's GitHub repository
$ cd /tmp
$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > install-helm.sh
# Make the script executable
$ chmod u+x install-helm.sh
# Install helm and verify the version
$ ./install-helm.sh
$ helm version
```

```shell=
# Install kubernetes, kubernetes-CNI, helm and docker
$ sudo -i
$ swapoff -a
$ git clone http://gerrit.o-ran-sc.org/r/it/dep
$ cd dep
$ git submodule update --init --recursive --remote
$ cd tools/k8s/bin
# Edit the infrastructure version file if needed
$ vim ../etc/infra.rc
```
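For reference, `infra.rc` pins the component versions that the cloud-init generator uses. The variable names and values below are an assumption modeled on the it/dep repository, not an authoritative listing; verify them against the file in your checkout:

```shell=
# Illustrative excerpt of tools/k8s/etc/infra.rc (names and values are
# assumptions; check your checkout, they may differ between releases)
INFRA_DOCKER_VERSION=""
INFRA_K8S_VERSION="1.18.9"
INFRA_HELM_VERSION="2.17.0"
INFRA_CNI_VERSION="0.7.5"
```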

Install the dependencies
```shell=
$ ./gen-cloud-init.sh
$ ./k8s-1node-cloud-init-k_1_18-h_2_17-d_cur.sh
```
Tiller is the server-side component of Helm that runs on the cluster; it receives commands from the helm client and talks directly to the Kubernetes API to create and delete resources. To grant Tiller the permissions it needs on the cluster, create a Kubernetes service account.
Install Tiller
```shell=
# Create the tiller serviceaccount
$ kubectl -n kube-system create serviceaccount tiller
# Bind the tiller serviceaccount to the cluster-admin role
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
# Run helm init, which installs Tiller on the cluster, along with some
# local housekeeping tasks such as downloading the stable repo details
$ helm init --service-account tiller
```
You may find that the tiller pod is still stuck in the Pending state.

Because the cluster runs only a single Kubernetes node, Pods must be scheduled on the master node; remove the master taint so that it can run Pods.
```shell=
$ kubectl taint nodes --all node-role.kubernetes.io/master-
```

After the Kubernetes setup completes, create a PersistentVolume for InfluxDB. Before deploying influxdb under ricplt, follow these steps:
```shell=
# If the namespace doesn't exist, create it
$ kubectl create ns ricinfra
# Install the NFS provisioner and make its storage class the default
$ helm install stable/nfs-server-provisioner --namespace ricinfra --name nfs-release-1
$ kubectl patch storageclass nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
$ sudo apt install nfs-common
```

Copy the ric-common helm charts from it/dep, configure the helm repo, and start a local helm server:
```shell=
$ HELM_HOME=$(helm home)
$ COMMON_CHART_VERSION=$(cat dep/ric-common/Common-Template/helm/ric-common/Chart.yaml | grep version | awk '{print $2}')
$ helm package -d /tmp dep/ric-common/Common-Template/helm/ric-common
$ cp /tmp/ric-common-$COMMON_CHART_VERSION.tgz $HELM_HOME/repository/local/
$ helm repo index $HELM_HOME/repository/local/
$ helm serve >& /dev/null &
$ helm repo remove local
$ helm repo add local http://127.0.0.1:8879/charts
```
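The grep/awk pipeline that extracts `COMMON_CHART_VERSION` can be sanity-checked locally against a sample Chart.yaml (the version number below is made up for illustration):

```shell=
# Write a sample Chart.yaml and extract its version the same way as above
cat > /tmp/Chart.yaml <<'EOF'
apiVersion: v1
name: ric-common
version: 3.3.2
EOF
COMMON_CHART_VERSION=$(cat /tmp/Chart.yaml | grep version | awk '{print $2}')
echo "$COMMON_CHART_VERSION"
```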
Clone the ric-plt/ric-dep git repository
```shell=
$ git clone "https://gerrit.o-ran-sc.org/r/ric-plt/ric-dep"
```
Modify the deployment recipe
```shell=
# ricip should be the ingress controller listening IP for the platform cluster
$ cd ric-dep
$ vi RECIPE_EXAMPLE/example_recipe_oran_dawn_release.yaml
```
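For orientation, the part of the recipe that usually needs editing looks roughly like this. The field names follow the O-RAN example recipes but are a hedged sketch, and the IPs are placeholders to replace with your own:

```yaml
# Illustrative fragment of example_recipe_oran_dawn_release.yaml
extsvcplt:
  # ingress controller listening IP for the platform cluster
  ricip: "10.0.0.1"
  # ingress controller listening IP for the aux cluster
  auxip: "10.0.0.1"
```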

After updating the recipe, deploy the RIC with the following commands:
```shell=
$ cd bin
$ ./install -f ../RECIPE_EXAMPLE/example_recipe_oran_dawn_release.yaml
```
After the installation you may find that the tiller-deploy image has a problem.

```shell=
$ docker images | grep tiller
```
Check the docker image.

Next, edit the tiller-deploy image:
```shell=
$ kubectl edit deploy deployment-tiller-ricxapp -n ricinfra
# Find the image line
image: gcr.io/kubernetes-helm/tiller:v2.12.3
# Change it to the correct version
image: ghcr.io/helm/tiller:v2.17.0
```

After the edit, the tiller-deploy status changes to Running.

Use the following command to check, through the ingress controller, the health of the Application Manager platform component:
```shell=
$ curl -v http://master:32080/appmgr/ric/v1/health/ready
```

## Near RT RIC Applications
xApp onboarding uses a CLI tool called dms_cli.
The xApp onboarder provides the dms_cli tool, which offers an xApp onboarding service to third parties. It consumes the xApp descriptor and schema file and generates the xApp helm charts.
STEP 1: Install python3.
STEP 2: Prepare the xApp descriptor and schema files. The xApp descriptor is a configuration file that defines the xApp's behaviour; the schema file is a JSON schema used to validate the custom parameters.
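As a rough illustration of what a descriptor contains (a trimmed-down sketch modeled on the hw-go xApp, not a complete or authoritative file):

```json
{
  "name": "hw-go",
  "version": "1.0.0",
  "containers": [
    {
      "name": "hw-go",
      "image": {
        "registry": "nexus3.o-ran-sc.org:10004",
        "name": "o-ran-sc/ric-app-hwgo",
        "tag": "1.0.1"
      }
    }
  ]
}
```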
STEP 3: Before any xApp can be deployed, its Helm chart must be loaded into this private Helm repo.
```shell=
#Create a local helm repository with a port other than 8080 on host
$ docker run --rm -u 0 -it -d -p 8090:8080 -e DEBUG=1 -e STORAGE=local -e STORAGE_LOCAL_ROOTDIR=/charts -v $(pwd)/charts:/charts chartmuseum/chartmuseum:latest
```
STEP 4: Set the environment variable for the CLI connection.
```shell=
#Set CHART_REPO_URL env variable
$ export CHART_REPO_URL=http://0.0.0.0:8090
```
STEP 5: Install the dms_cli tool.
```shell=
#Git clone appmgr
$ git clone "https://gerrit.o-ran-sc.org/r/ric-plt/appmgr"
#Change dir to xapp_onboarder
$ cd appmgr/xapp_orchestrater/dev/xapp_onboarder
#If pip3 is not installed, install it (use yum on RPM-based systems)
$ sudo apt install python3-pip
#In case dms_cli binary is already installed, it can be uninstalled using following command
$ pip3 uninstall xapp_onboarder
#Install xapp_onboarder using following command
$ pip3 install ./
```
STEP 6: If the host user is a non-root user, assign the relevant permissions after installing the package.
```shell=
#Assign relevant permission for non-root user
$ sudo chmod 755 /usr/local/bin/dms_cli
$ sudo chmod -R 755 /usr/local/lib/python3.6
```
STEP 7: Onboard the xApp.
```shell=
# helm3 must be installed before the dms_cli tool can be used
$ mkdir helm
$ cd helm/
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ which helm
/usr/local/bin/helm
$ mv /usr/local/bin/helm /usr/local/bin/helm2
$ ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.9.2-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
$ helm version
version.BuildInfo{Version:"v3.9.2", GitCommit:"1addefbfe665c350f4daf868a9adc5600cc064fd", GitTreeState:"clean", GoVersion:"go1.17.12"}
# Onboard the hw-go xApp
# Make sure that you have the xapp descriptor config file and the schema file at your local file system
$ git clone "https://gerrit.o-ran-sc.org/r/ric-app/hw-go"
$ dms_cli onboard hw-go/config/config-file.json hw-go/config/schema.json
```
Install helm3

Start the onboarding

STEP 8: Install the xApp.
```shell=
# Show the first ten lines of config-file.json: name, version, image, etc.
$ head hw-go/config/config-file.json
# Install the xApp
$ dms_cli install hwgo 1.0.0 ricxapp
```
Query config-file.json and install the hw-go xapp.

The image reference is wrong, so the pod status cannot reach Running.

It has to match the D release version.

```shell=
# Edit the hw-go deployment
$ kubectl edit deploy ricxapp-hwgo -n ricxapp
```
Change image: nexus3.o-ran-sc.org:10004/o-ran-sc/ric-app-hwgo:1.1.1
to image: nexus3.o-ran-sc.org:10004/o-ran-sc/ric-app-hwgo:1.0.1

After the edit is correct, the status changes to Running.

```shell=
# List the currently deployed Near RT RIC Applications
$ kubectl get pods -A
```

This completes the deployment of the Near RT RIC Applications.
## Anomaly Detection Use Case
Introduction:
The Anomaly Detection (AD), QoE Predictor (QP), and Traffic Steering (TS) xApps are the three components of the anomaly detection use case on the Near RT RIC platform.
Features:
* AD
The AD xApp is the component on the Near RT RIC platform that communicates with the Traffic Steering xApp. It connects to InfluxDB and continuously fetches the latest UE data, writes its prediction results back to InfluxDB, uses machine learning to detect anomalous UEs, and sends them to the Traffic Steering xApp over RMR.
* QP
It connects to InfluxDB to obtain UE and cell metrics. When QP receives a prediction request from TS, it processes the UE ID, fetches the latest UE and cell metrics from InfluxDB, uses machine learning to predict the throughput of the anomalous UEs, and sends those predictions to the TS xApp.
* TS
When AD detects an anomaly, the TS xApp takes the anomalous UEs from AD, sends a prediction request to QP, and performs the handover.
Architecture:
In the D-release there is no E2 Simulator implementing E2SM KPM 2.0.3, so the use case can only be tested starting from InfluxDB. The AD xApp repo contains scripts that populate InfluxDB with data from the Viavi E2 simulator.

Next, start the deployment.
Deploying an xApp with dms_cli raises the following error.

Set the environment variables for the CLI connection:
```shell=
# Start the local chart repo and set the environment variable for the CLI connection
$ docker run --rm -u 0 -it -d -p 8090:8080 -e DEBUG=1 -e STORAGE=local -e STORAGE_LOCAL_ROOTDIR=/charts -v $(pwd)/charts:/charts chartmuseum/chartmuseum:latest
$ export CHART_REPO_URL=http://0.0.0.0:8090
```
Onboard & install traffic steering xapp
```shell=
$ git clone "https://gerrit.o-ran-sc.org/r/ric-app/ts"
$ dms_cli onboard ts/xapp-descriptor/config-file.json ts/xapp-descriptor/schema.json
$ head ts/xapp-descriptor/config-file.json
$ dms_cli install trafficxapp 1.2.4 ricxapp
```
Confirm that traffic steering installed successfully.

Onboard & install ad xApp
```shell=
$ git clone "https://gerrit.o-ran-sc.org/r/ric-app/ad"
$ dms_cli onboard ad/config/config-file.json ad/descriptors/schema.json
$ head ad/config/config-file.json
$ dms_cli install ad 0.0.2 ricxapp
```
Confirm that the ad xapp installed successfully.

Onboard & install qp xApp
Some xApps do not include a schema_file; the embedded-schema.json file can be used to onboard them instead.

```shell=
$ git clone "https://gerrit.o-ran-sc.org/r/ric-app/qp"
$ dms_cli onboard qp/config/config-file.json appmgr/xapp_orchestrater/dev/docs/xapp_onboarder/guide/embedded-schema.json
$ head qp/config/config-file.json
$ dms_cli install qp 0.0.4 ricxapp
```
The install fails with the error: `Secret "sh.helm.release.v1.qp.v1" is invalid: data: Too long: must have at most 1048576 bytes`.
Kubernetes secrets are base64-encoded by default, but even after decoding, the data is still not readable. To fully decode it, three steps are needed:
base64 decode - Kubernetes secret encoding
base64 decode (again) - Helm encoding
gzip decompress - Helm compression

```shell=
$ kubectl get secrets sh.helm.release.v1.qp.v1 -o json | jq .data.release | tr -d '"' | base64 -d | base64 -d | gzip -d
$ dms_cli install qp 0.0.4 ricxapp
```
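The three decode layers can be exercised locally without a cluster; this sketch fabricates a tiny payload, applies the same encodings Helm and Kubernetes would, and then reverses them:

```shell=
# Encode: gzip (Helm compression) -> base64 (Helm) -> base64 (Kubernetes secret)
payload='{"name":"qp","version":1}'
encoded=$(printf '%s' "$payload" | gzip -c | base64 -w0 | base64 -w0)
# Decode the layers in reverse order, as in the kubectl pipeline
decoded=$(printf '%s' "$encoded" | base64 -d | base64 -d | gzip -d)
echo "$decoded"
```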

Edit the qp image version:
```shell=
$ kubectl edit deploy ricxapp-qp -n ricxapp
```
Change image: nexus3.o-ran-sc.org:10002/o-ran-sc/ric-app-qp:0.0.4
to image: nexus3.o-ran-sc.org:10002/o-ran-sc/ric-app-qp:0.0.3

Confirm that the qp xapp installed successfully.

List the charts in the helm repo:
```shell=
$ curl -X GET http://localhost:8090/api/charts | jq
```

The pods installed so far.

## Traffic Steering Use Case

The flow diagram contains 4 xApps (KPIMON, QPd, QP, TS) with the following roles:
* KPIMON: written in Go; subscribes to information from E2Term and stores it in SDL
* QPd (QP driver): reads values from SDL and passes them to QP (later implementations merged it into QP)
* QP (QoS Predictor): on request from TS, estimates the UL/DL resources the target UE could obtain at nearby gNBs
* TS (Traffic Steering): makes the handover decision (written to the log)
Besides these 4 xApps, there are 4 different data flows:
* KPIMON-E2Term-E2Sim: the E2 Subscription flow
* KPIMON-SDL-TS/QPd: data exchange via SDL
* TS-QPd-QP: data exchange via RMR
* A1Med-TS: xApp policy configuration via a RESTful API
See the following link for reference:
[ Traffic Steering xAPP](https://gerrit.o-ran-sc.org/r/gitweb?p=ric-app/ts.git;a=blob;f=docs/user-guide.rst;h=72525cf210461ab0395e1cfed09606c3f9ef019b;hb=refs/heads/master)
Delete a helm deployment:
```shell=
$ helm delete xappkpimon --namespace ricxapp
```
To redeploy the Kubernetes environment, reset it:
```shell=
$ kubeadm reset
```