# SMO Deployment
2022/09/29
## Environment
- OSC Release F package
- Ubuntu 20.04
- MicroK8s 1.22

There is no need to switch to the root account to install the SMO package; using `sudo` is sufficient.
After the installation, you can use the [F Release Docker Image List](https://wiki.o-ran-sc.org/display/IAT/F+Release+Docker+Image+List) to compare the list of installed pods.
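For this comparison, something like the following lists every container image running in the cluster (a sketch; adjust the privileges to your setup):
```
sudo kubectl get pods --all-namespaces \
  -o jsonpath="{.items[*].spec.containers[*].image}" \
  | tr ' ' '\n' | sort | uniq
```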
## SMO Introduction (data from OSC)
The SMO supports a number of interfaces, including:
- O1
- O1/VES
- O2
- A1
- R1
### O1 Interface
The O1 interface supports the NETCONF protocol to configure and manage the network elements in the O-RAN solution. These network elements include the Near-RT RIC, O-CU, O-DU and O-RU. The SMO uses data models to drive the configuration and management of the network elements. For an example of how the SMO (NETCONF client) interacts with the RIC, CU, DU and RU (each of which is a NETCONF server), see the diagram below. The implementation is based on the NETCONF implementation of OpenDaylight (ODL), and the User Interface (UI) is based on the ODL Community GUI (DLUX).

*Diagram: NETCONF Client/Server interaction*
The SMO can offer REST APIs that can be used to drive the configuration on the RIC, CU, DU and the RU.
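As an illustration, the NETCONF servers mounted in the ODL-based controller can be read over the RFC 8040 RESTCONF API. The host, port and credentials below are assumptions; adapt them to your deployment:
```
# Sketch: list the NETCONF mount points (e.g. RU/DU simulators) known to the controller.
# <sdnc-host>, the port and the credentials are placeholders.
curl -sk -u admin:<password> \
  "https://<sdnc-host>:8443/rests/data/network-topology:network-topology/topology=topology-netconf?content=nonconfig"
```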
### SMO and App Onboarding
#### Introduction
One of the purposes of the SMO is to onboard applications, whether they are rApps running on the non-RT RIC or xApps running on the near-RT RIC. After they are onboarded, the SMO needs to maintain an application package catalog of the applications that are available for the operator to deploy or instantiate.
To onboard those applications, the SMO needs to understand how an application is packaged. Details are discussed below, followed by a discussion of what an application package catalog is supposed to expose so that an operator can trigger the deployment of an application.
#### Application Package Schema
The SMO project is trying to define the schema for the package. For details on the proposal and the comments on it, see this link. The proposal follows the package schema defined by ETSI NFV SOL 004, which defines the schema for packaging VNF Descriptors (VNFD) for both TOSCA and YANG data model definitions. The idea is to build on that package definition and use it for application packaging.
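For orientation, a SOL 004 style package is a CSAR archive laid out roughly as follows (an illustrative sketch, not the project's final schema; all file names are placeholders):
```
application-package.csar
├── TOSCA-Metadata/
│   └── TOSCA.meta          <-- Entry-point metadata
├── Definitions/
│   └── app-descriptor.yaml <-- TOSCA or YANG based descriptor
├── Files/                  <-- Artifacts (image references, licenses, ...)
└── application-package.mf  <-- Manifest with artifact checksums
```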
### O1/VES Interface
The O1/VES interface supports the monitoring side of the SMO. The diagram below shows how the network elements interact with the O1/VES interface in the SMO.

Another view of the same can be seen in the diagram below. In this case the events are picked up by the VES Agents, which format them as VES events and send them to the VES Collector. The VES Collector stores the events in InfluxDB and, alternatively, in the Elasticsearch engine and/or on the Kafka bus. The event data can then be picked up by Grafana or any other application to perform analysis on the data.
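For reference, a VES event is a JSON document POSTed to the collector's eventListener endpoint. The host, port and credentials below are assumptions (check the VES Collector service in your deployment); this is a minimal fault-event sketch:
```
curl -k -u <user>:<password> -X POST "https://<ves-collector>:8443/eventListener/v7" \
  -H "Content-Type: application/json" \
  -d '{
    "event": {
      "commonEventHeader": {
        "domain": "fault",
        "eventId": "fault-0001",
        "eventName": "Fault_ORU_LinkDown",
        "priority": "High",
        "reportingEntityName": "o-ru-simulator",
        "sourceName": "o-ru-simulator",
        "sequence": 0,
        "startEpochMicrosec": 1664428800000000,
        "lastEpochMicrosec": 1664428800000000,
        "version": "4.1",
        "vesEventListenerVersion": "7.2.1"
      },
      "faultFields": {
        "faultFieldsVersion": "4.0",
        "alarmCondition": "linkDown",
        "eventSeverity": "MAJOR",
        "eventSourceType": "O-RU",
        "specificProblem": "Link to O-DU lost",
        "vfStatus": "Active"
      }
    }
  }'
```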

## Steps
Please refer to the [OSC Gerrit](https://gerrit.o-ran-sc.org/r/gitweb?p=it/dep.git;a=tree;f=smo-install;hb=HEAD) page and check the content of the README file.
The steps are:
1. clone the git
2. setup microk8s
3. setup charts-museum
4. setup helm3
5. build-all-charts
6. install oran
7. Verify pods
8. install-simulators
9. upgrade-simulators
10. install-cicd
11. setup-tests-env

The host requires at least 6 CPU cores, 32 GB of RAM and 150 GB of disk.
### clone the git
You can clone the repository from the O-RAN SC Gerrit: https://gerrit.o-ran-sc.org/r/it/dep
```
git clone --recursive "https://gerrit.o-ran-sc.org/r/it/dep"
```

### Folder Structure
You can use any file manager to navigate the folder structure, which is shown below. The user entry point is located in the **scripts** folder.

```
.
├── cnf                                         <-- CNF packages that can be deployed by ONAP (Work In Progress, so not yet well documented)
│   └── du-ru-simulators                        <--- The CNF package containing DU/RU/Topology server simulators
├── helm-override                               <-- The configuration of the different HELM charts used in the SMO package
│   ├── network-simulators-override.yaml        <--- Standard config for the network simulators
│   ├── network-simulators-topology-override.yaml <--- Network simulator topology example that can be changed
│   ├── onap-override-cnf.yaml                  <--- A medium ONAP config ready for CNF deployment
│   ├── onap-override.yaml                      <--- A minimal ONAP config for the SMO package
│   └── oran-override.yaml                      <--- A minimal ORAN config for the SMO package
├── LICENSE
├── multicloud-k8s                              <-- Git SUBMODULE required for KUD installation
├── onap_oom                                    <-- Git SUBMODULE required for ONAP installation
├── oran_oom                                    <-- ORAN charts
│   ├── a1controller
│   ├── a1simulator
│   ├── aux-common
│   ├── controlpanel
│   ├── dist
│   ├── dmaapadapterservice
│   ├── du-simulator
│   ├── enrichmentservice
│   ├── Makefile                                <-- ORAN Makefile to build all ORAN charts
│   ├── nonrtric
│   ├── nonrtric-common
│   ├── nonrtricgateway
│   ├── oru-app
│   ├── policymanagementservice
│   ├── rappcatalogueservice
│   ├── ric-common
│   ├── ru-du-simulators
│   ├── ru-simulator
│   ├── topology
│   └── topology-server
├── README.md
├── scripts                                     <-- All installation scripts (USER ENTRY POINT)
│   ├── layer-0                                 <--- Scripts to set up the node
│   │   ├── 0-setup-charts-museum.sh            <--- Setup ChartMuseum
│   │   ├── 0-setup-kud-node.sh                 <--- Setup K8S node with ONAP Multicloud KUD installation
│   │   ├── 0-setup-microk8s.sh                 <--- Setup K8S node with MicroK8S installation
│   │   ├── 0-setup-helm3.sh                    <--- Setup HELM3
│   │   └── 0-setup-tests-env.sh                <--- Setup Python SDK tools
│   ├── layer-1                                 <--- Scripts to prepare for the SMO installation
│   │   └── 1-build-all-charts.sh               <--- Build all HELM charts and upload them to ChartMuseum
│   ├── layer-2                                 <--- Scripts to install the SMO package
│   │   ├── 2-install-nonrtric-only.sh          <--- Install the SMO NONRTRIC k8s namespace only
│   │   ├── 2-install-oran-cnf.sh               <--- Install the full SMO with ONAP CNF features
│   │   ├── 2-install-oran.sh                   <--- Install the minimal SMO
│   │   ├── 2-install-simulators.sh             <--- Install the network simulators (RU/DU/Topology Server)
│   │   └── 2-upgrade-simulators.sh             <--- Upgrade the simulators at runtime when the override files have changed
│   ├── sub-scripts                             <--- Sub-scripts used by the main layer-0, layer-1 and layer-2 scripts
│   │   ├── clean-up.sh
│   │   ├── install-nonrtric.sh
│   │   ├── install-onap.sh
│   │   ├── install-simulators.sh
│   │   ├── uninstall-nonrtric.sh
│   │   ├── uninstall-onap.sh
│   │   └── uninstall-simulators.sh
│   └── uninstall-all.sh                        <--- Uninstall ALL SMO K8S namespaces and clean up K8S
└── test                                        <-- Scripts to test the SMO installation (Work In Progress, so not yet well documented)
    ├── a1-validation                           <--- Test the nonrtric A1 interface (https://wiki.o-ran-sc.org/display/RICNR/Testing+End+to+End+call+in+release+D)
    │   ├── data
    │   ├── subscripts
    │   └── validate-a1.sh
    ├── apex-policy-test                        <--- Test an apex policy (https://wiki.o-ran-sc.org/pages/viewpage.action?pageId=35881325, requires the simulators to be up)
    │   ├── apex-policy-test.sh
    │   └── data
    ├── enable-sim-fault-report                 <--- Enable fault reporting of the network simulators through SDNC
    │   ├── data
    │   └── enable-network-sim-fault-reporting.sh
    └── pythonsdk                               <--- Tests based on the ONAP Python SDK to validate O1 and A1
        ├── oran-tests.xml
        ├── Pipfile.lock
        ├── README.md
        ├── src
        ├── test.json
        ├── tox.ini
        └── unit-tests
```
### setup microk8s
```
sudo ./dep/smo-install/scripts/layer-0/0-setup-microk8s.sh
```
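Before continuing, you can confirm the node is ready with the standard MicroK8s status command:
```
sudo microk8s status --wait-ready
```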



### setup charts-museum
```
sudo ./dep/smo-install/scripts/layer-0/0-setup-charts-museum.sh
```
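A quick sanity check is to hit ChartMuseum's health endpoint; the port below is an assumption, so confirm it in 0-setup-charts-museum.sh:
```
curl http://localhost:18080/health
```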

### setup helm3
```
sudo ./dep/smo-install/scripts/layer-0/0-setup-helm3.sh
```
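You can verify the installation with:
```
helm version --short
```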

### build-all-charts
```
sudo ./dep/smo-install/scripts/layer-1/1-build-all-charts.sh
```
This step may take up to 30 minutes.
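Once the build finishes, the uploaded charts can be listed through ChartMuseum's API (same port assumption as above):
```
curl -s http://localhost:18080/api/charts
```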

### install oran
```
sudo ./dep/smo-install/scripts/layer-2/2-install-oran.sh
```


### Verify pods
```
sudo kubectl get pods -n onap && sudo kubectl get pods -n nonrtric
```

This step may take 2 to 3 hours: wait until all pods in the "onap" and "nonrtric" namespaces are up and running.
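Instead of polling manually, you can block until the pods are Ready (a sketch; the timeout is arbitrary, and pods of completed jobs never reach Ready, so some reported failures are expected):
```
sudo kubectl wait --for=condition=Ready pods --all -n onap --timeout=180m
sudo kubectl wait --for=condition=Ready pods --all -n nonrtric --timeout=180m
```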
### install-simulators
```
sudo ./dep/smo-install/scripts/layer-2/2-install-simulators.sh
```

Note: the simulators should be installed only once all pods in the "onap" and "nonrtric" namespaces are up and running.
### upgrade-simulators
```
sudo ./dep/smo-install/scripts/layer-2/2-upgrade-simulators.sh
```

Check the simulator status:
```
sudo kubectl get pods -n network
```
### install-cicd
If you need a Jenkins CI/CD environment, you can install it together with the test environment.
```
sudo ./dep/smo-install/scripts/layer-2/2-install-cicd.sh
```

### setup-tests-env
```
sudo ./dep/smo-install/scripts/layer-0/0-setup-tests-env.sh
```
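After this setup, the Python SDK tests under test/pythonsdk can be launched with tox (a sketch; this assumes the default tox environments defined in tox.ini):
```
cd dep/smo-install/test/pythonsdk
tox
```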

## Enable MicroK8s Dashboard to operate the OSC environment
You can enable the MicroK8s dashboard and dashboard-proxy add-ons, then use a browser to operate the K8s environment.
```
sudo microk8s enable dashboard
```

```
sudo microk8s dashboard-proxy
```

Use the token printed by the proxy to log in to the dashboard website, and keep the terminal window open.
Use a browser to connect to https://127.0.0.1:10443
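If you lose the token, it can be recovered from the default secret, following the standard MicroK8s dashboard instructions (a sketch for this MicroK8s version):
```
token=$(sudo microk8s kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
sudo microk8s kubectl -n kube-system describe secret $token
```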

