# <center>xAPPs</center>

# NexRAN (RAN Slicing)

NexRAN is a top-to-bottom, open-source Open RAN use case in the POWDER mobile and wireless research platform. Specifically, NexRAN allows closed-loop control of a RAN slicing realization in an O-RAN ecosystem. RAN slicing is implemented in the srsRAN open-source mobility stack and is exposed through a custom service model to the NexRAN xApp, which executes on a RAN Intelligent Controller (RIC) from the O-RAN Alliance. Our RAN slicing implementation realizes a form of slicing where different slices share the same frequency band, UEs can be explicitly associated with slices, and a slice-aware scheduler in the base station enforces the RAN resource allocations associated with each slice. The NexRAN xApp realizes policy-driven closed-loop control of RAN slices by reading the current state of RAN elements (using the standard O-RAN key performance measurements (KPM) service model) and controlling slice behavior via the custom slicing service model.

## NexRAN Design and Implementation

![](https://hackmd.io/_uploads/Bky_i8uA2.png)

- NexRAN adds a slice-aware scheduler and an O-RAN E2 agent to srsRAN.
- E2 is the interface that connects the RIC with underlying radio equipment, such as eNodeBs and gNodeBs (south-bound from the RIC's perspective).
- The E2 agent implements the core E2 Application Protocol (E2AP), has access to the internal RAN components in the eNodeB's stack to monitor and modify RAN parameters, and supports E2 service models to export RAN metrics and controls to xApps. NexRAN exposes this functionality, via a RESTful API, to a RAN slicing manager.
- The slice manager can create slices, bind/unbind them to multiple eNodeBs, bind/unbind UEs to those slices, and dynamically modify slice resource allocations.

### xApp and Northbound API

The NexRAN xApp provides a northbound RESTful interface for administrative control and monitoring. It defines three primary objects: NodeB, Slice, and UE, each of which may be created, updated, and deleted. When a NodeB is created in the xApp, the xApp attempts to subscribe to the NodeB's events via the E2 protocol. A Slice contains a scheduling policy definition. Administrators may bind Slices to NodeBs; this tells the NodeB's scheduler that the slice and its associated UEs should be scheduled according to the slice's policy. Finally, administrators create UE objects to inform NexRAN of particular, known IMSIs that may connect to a NodeB. A UE may be bound to a single Slice at a time; this binding tells the scheduler that the UE should be scheduled in accordance with its parent Slice's policy. UEs may be unbound from Slices, and Slices unbound from NodeBs, at any time.

### RAN slicing service model

The NexRAN service model maps the northbound API onto common E2 abstractions and messages: the xApp sends E2 messages to NodeBs in response to northbound API invocations. Most northbound API create or update operations map to E2 Control messages.

### Slice scheduler

The slice scheduler at the eNodeB implements a subframe-based proportional slicing method for data on the physical downlink shared channel (PDSCH), using the slice definitions described by the NexRAN service model and provided by the slice manager via the xApp. With the exception of a periodic special subframe, included to guarantee that UEs which have yet to be identified and associated with a slice are able to attach to the network, each subframe gives ==priority== to a single slice. By default, if the slice with priority in a given subframe does not consume all of the available resources, UEs from other slices may be scheduled after those from the priority slice. Slices are scheduled in a round-robin fashion, each receiving one or more consecutive subframes per round according to its allocation share. Figure 2 shows allocations for a two-slice scenario using a few example shares, which can be described as the ratio A:B of subframes allocated to each slice per round. The columns marked X represent the periodic special subframes. A scheduling round is complete when the proportional allocation defined by the slice manager has been satisfied.

![](https://hackmd.io/_uploads/SJd8KsFC2.png)

The scheduler is work-conserving by default: UEs belonging to the priority slice are scheduled first, followed by UEs belonging to other slices, and finally by UEs not associated with any slice, as long as resource blocks remain. In special subframes, unidentified UEs are scheduled first, followed by UEs belonging to slices, followed by UEs not associated with any slice. UEs in each category are scheduled round-robin within the subframe.

### Policy-driven dynamic slice scheduling

The NexRAN xApp allows administrators to configure the proportional allocation scheduler on a per-slice basis, and provides allocation-policy extensions through which the xApp can dynamically modify slice resource allocations. We have implemented two such extensions: balanced slice throughput and slice throttling. These extensions monitor per-UE and per-slice throughput and other metrics via our extended KPM service model, and modify per-slice proportional allocations according to policy and load.

The ==balanced slice throughput== extension attempts to drive slices to the same overall throughput, as measured by the KPM service model at the PDCP layer. This mechanism sums the total bandwidth used by all auto-equalized slices in each new KPM report, checks whether any slices have diverged from an equal distribution, and if so computes new share values (proportions) for each slice. This mechanism is only invoked if at least 30% of the reporting NodeB's available PRBs were used, so that low-throughput slices are not unfairly starved.

The ==slice throttling== extension attempts to prevent slices from consuming too much bandwidth in a given time period. It accepts several parameters: `throttle_period`, `throttle_threshold`, and `throttled_share`. When `throttle_threshold` throughput is exceeded within any `throttle_period` window, the slice's share is set to `throttled_share` for a duration of `throttle_period`; when that period ends, throttling is removed. The policy maintains its threshold counters during throttling, and per-period throughput is not reset at the end of a `throttle_period`.
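The throttling behavior can be sketched in a few lines of Python. This is a minimal illustration, assuming a sliding-window reading of `throttle_period` and throughput reported as bytes per KPM report; the class and method names are hypothetical and this is not NexRAN's actual implementation.

```python
# Illustrative sketch of the slice-throttling policy described above.
# SliceThrottler and update() are hypothetical names, not NexRAN source code.
from collections import deque


class SliceThrottler:
    def __init__(self, normal_share, throttled_share, throttle_threshold, throttle_period):
        self.normal_share = normal_share              # proportional share when unthrottled
        self.throttled_share = throttled_share        # reduced share while throttled
        self.throttle_threshold = throttle_threshold  # max bytes allowed per throttle_period
        self.throttle_period = throttle_period        # window length in seconds
        self.reports = deque()                        # (timestamp, bytes) from KPM reports
        self.throttled_until = None

    def update(self, now, bytes_in_report):
        """Feed one KPM report and return the share the xApp should push to the NodeB."""
        # Keep counting even while throttled; only reports older than one period age out.
        self.reports.append((now, bytes_in_report))
        while self.reports and self.reports[0][0] <= now - self.throttle_period:
            self.reports.popleft()
        window_bytes = sum(b for _, b in self.reports)

        # A full throttle period has elapsed: remove throttling.
        if self.throttled_until is not None and now >= self.throttled_until:
            self.throttled_until = None
        # Threshold exceeded within the current window: throttle for one period.
        if self.throttled_until is None and window_bytes > self.throttle_threshold:
            self.throttled_until = now + self.throttle_period

        return self.throttled_share if self.throttled_until is not None else self.normal_share
```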
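The subframe-based proportional allocation described under "Slice scheduler" (the A:B ratios of Figure 2) can also be made concrete with a short sketch that emits which slice has priority in each subframe. The function name and the special-subframe period of 10 are assumptions for illustration, not values taken from the NexRAN code.

```python
# Illustrative sketch of per-subframe slice priority (not NexRAN's scheduler code).
# Each slice receives consecutive priority subframes per round in proportion to its share;
# every `special_period`-th subframe is a special subframe (marked X) so that UEs not yet
# associated with a slice can attach.  special_period=10 is an assumed example value.
from functools import reduce
from itertools import cycle
from math import gcd


def priority_sequence(shares, num_subframes, special_period=10):
    """shares: dict mapping slice name -> proportional share; returns one label per subframe."""
    # Reduce shares to a small ratio, e.g. 1024:256 -> 4:1 (the A:B ratio of Figure 2).
    g = reduce(gcd, shares.values())
    per_round = [(name, share // g) for name, share in shares.items()]

    # One scheduling round gives each slice its block of consecutive priority subframes.
    round_pattern = [name for name, count in per_round for _ in range(count)]
    rounds = cycle(round_pattern)

    sequence = []
    for sf in range(num_subframes):
        if special_period and sf % special_period == 0:
            sequence.append("X")          # special subframe: unidentified UEs go first
        else:
            sequence.append(next(rounds))
    return sequence


# Example: shares 1024 and 256 (as in the workflow below) give a 4:1 priority pattern.
print(priority_sequence({"fast": 1024, "slow": 256}, num_subframes=16))
```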
## NexRAN Workflow

Analyzing [this script](https://github.com/openaicellular/nexran/blob/master/zmqoneue.sh) provides a clear view of how the NexRAN xApp functions in practice (a Python sketch of the same sequence of REST calls follows the list).

1. **Command Line Arguments**: The script takes one command-line argument, `$SLEEPINT`, which determines the duration of the sleep between actions. If no argument is provided, it defaults to 4 seconds.
2. **NEXRAN_XAPP Environment Variable**: The script sets an environment variable named `NEXRAN_XAPP` by querying Kubernetes for the cluster IP of a service in the `ricxapp` namespace, named either `service-ricxapp-nexran-nbi` or `service-ricxapp-nexran-rmr`. If both lookups come back empty, it prints an error and exits.
3. **HTTP Requests with cURL**: The script uses the `curl` command to make a series of HTTP requests to the endpoint determined by the `NEXRAN_XAPP` variable. These requests interact with the RESTful API exposed by the xApp at that endpoint.
4. **Listing NodeBs, Slices, and UEs**: It first lists NodeBs, slices, and UEs by sending GET requests to the appropriate API endpoints.
5. **Creating a NodeB**: It creates a NodeB by sending a POST request with JSON data to the `/v1/nodebs` endpoint. The NodeB has specific attributes such as type, ID, MCC, and MNC. The response is stored in the `OUTPUT` variable.
6. **Checking NodeB Creation**: It checks whether the NodeB creation was successful by examining the `OUTPUT` variable. If it is empty, it prints an error and exits.
7. **Creating Slices**: It creates two slices named "fast" and "slow" by sending POST requests with JSON data to the `/v1/slices` endpoint. The JSON data includes the slice name and an allocation policy. The "fast" slice's allocation policy is "proportional" with a share of 1024; the "slow" slice's policy is also "proportional", but with a share of 256. The "fast" slice is therefore configured to receive a larger share of RAN resources than the "slow" slice.
8. **Binding Slices to the NodeB**: It binds both the "fast" and "slow" slices to the previously created NodeB using POST requests to the respective slice URLs under `/v1/nodebs/${NBNAME}/slices`.
9. **Creating a UE**: It creates a UE (User Equipment) with a specific IMSI (International Mobile Subscriber Identity) by sending a POST request with JSON data to the `/v1/ues` endpoint.
10. **Binding the UE to a Slice**: It binds the UE to the "fast" slice using a POST request to the `/v1/slices/fast/ues/${IMSI}` endpoint.
11. **Final Output**: The script prints the various HTTP responses, the NodeB name (`NBNAME`), and the IMSI of the UE.
12. **Sleep Intervals**: Between actions, the script sleeps for `$SLEEPINT` seconds to control the rate at which actions are performed.
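The same sequence of northbound calls can be sketched in Python with the `requests` library. The endpoints follow the steps above, but the service address, NodeB name, NodeB ID, and JSON field names are assumptions made for illustration; the linked `zmqoneue.sh` script is the authoritative reference.

```python
# Illustrative Python version of the zmqoneue.sh workflow described above.
# The URL, NBNAME, IMSI, and payload field names are placeholders, not the real API schema.
import requests

NEXRAN_XAPP = "http://10.0.0.1:8000"  # hypothetical cluster IP of the NexRAN northbound service
NBNAME = "example-nodeb"              # hypothetical NodeB name
IMSI = "901700123456789"              # hypothetical UE IMSI
base = f"{NEXRAN_XAPP}/v1"

# Step 4: list existing NodeBs, slices, and UEs.
for resource in ("nodebs", "slices", "ues"):
    print(requests.get(f"{base}/{resource}").json())

# Step 5: create the NodeB (type, ID, MCC, MNC are illustrative values).
requests.post(f"{base}/nodebs",
              json={"type": "eNB", "id": 411, "mcc": "901", "mnc": "70", "name": NBNAME})

# Step 7: create the "fast" and "slow" slices with proportional shares 1024 and 256.
for name, share in (("fast", 1024), ("slow", 256)):
    requests.post(f"{base}/slices",
                  json={"name": name,
                        "allocation_policy": {"type": "proportional", "share": share}})

# Step 8: bind both slices to the NodeB (exact URL form assumed from the description).
for name in ("fast", "slow"):
    requests.post(f"{base}/nodebs/{NBNAME}/slices/{name}")

# Step 9: create the UE, then step 10: bind it to the "fast" slice.
requests.post(f"{base}/ues", json={"imsi": IMSI})
requests.post(f"{base}/slices/fast/ues/{IMSI}")
```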
## Running the xApp

- Iperf test on the client side:

```bash
------------------------------------------------------------
Client connecting to 172.16.0.2, TCP port 5010
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 172.16.0.1 port 42520 connected with 172.16.0.2 port 5010
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 4.0 sec  1.50 MBytes  3.15 Mbits/sec
[  3]  4.0- 8.0 sec  1.75 MBytes  3.67 Mbits/sec
[  3]  8.0-12.0 sec  3.06 MBytes  6.41 Mbits/sec
[  3] 12.0-16.0 sec  1.68 MBytes  3.52 Mbits/sec
[  3] 16.0-20.0 sec  1.80 MBytes  3.78 Mbits/sec
[  3] 20.0-24.0 sec  1.68 MBytes  3.52 Mbits/sec
[  3] 24.0-28.0 sec  1.18 MBytes  2.48 Mbits/sec
[  3] 28.0-32.0 sec  1.86 MBytes  3.90 Mbits/sec
[  3] 32.0-36.0 sec  2.36 MBytes  4.95 Mbits/sec
[  3] 36.0-40.0 sec  1.74 MBytes  3.65 Mbits/sec
```

- Deploying the xApp
- Errors:
  - Solved `ConnectionRefusedError: [Errno 111] Connection refused` (no service was listening on port 5010).
- Slight change in the throughput:

```bash
[  3] 1424.0-1428.0 sec  3.17 MBytes  6.65 Mbits/sec
[  3] 1428.0-1432.0 sec  1.74 MBytes  3.65 Mbits/sec
[  3] 1432.0-1436.0 sec   954 KBytes  1.95 Mbits/sec
[  3] 1436.0-1440.0 sec  2.98 MBytes  6.26 Mbits/sec
[  3] 1440.0-1444.0 sec  1.43 MBytes  3.01 Mbits/sec
[  3] 1444.0-1448.0 sec  2.30 MBytes  4.83 Mbits/sec
[  3] 1448.0-1452.0 sec  1.98 MBytes  4.16 Mbits/sec
[  3] 1452.0-1456.0 sec  1.99 MBytes  4.17 Mbits/sec
[  3] 1456.0-1460.0 sec  1.55 MBytes  3.26 Mbits/sec
[  3] 1460.0-1464.0 sec   830 KBytes  1.70 Mbits/sec
[  3] 1464.0-1468.0 sec  1.86 MBytes  3.90 Mbits/sec
```

- Large change in the throughput:

```bash
[  3] 1880.0-1884.0 sec   827 KBytes  1.69 Mbits/sec
[  3] 1884.0-1888.0 sec   827 KBytes  1.69 Mbits/sec
[  3] 1888.0-1892.0 sec   636 KBytes  1.30 Mbits/sec
[  3] 1892.0-1896.0 sec   573 KBytes  1.17 Mbits/sec
[  3] 1896.0-1900.0 sec   318 KBytes   652 Kbits/sec
[  3] 1900.0-1904.0 sec   827 KBytes  1.69 Mbits/sec
[  3] 1904.0-1908.0 sec   445 KBytes   912 Kbits/sec
[  3] 1908.0-1912.0 sec   127 KBytes   261 Kbits/sec
[  3] 1912.0-1916.0 sec   700 KBytes  1.43 Mbits/sec
```

# KPIMON

The KPIMON xApp enables collecting metrics from E2 nodes. RAN node metrics, such as the number of used and available physical resource blocks (PRBs), the number of connected UEs, and the downlink and uplink data rates, are collected by the E2 agent. These metrics are packaged in containers; each container has its own ID, with a header identifying the related RAN node (CU, DU, ...). The metrics are compiled into an indication message and encoded using ASN.1. The KPIMON xApp periodically receives these indication messages and uses the same ASN.1 and service model definitions to decode them and extract the metrics.

KPIMON uses Redis for data storage. Redis is an open-source in-memory data store used as the RIC database. To share metrics between KPIMON and new xApps, a time-series database such as InfluxDB can play the role of the sharing layer. Fig. 3 shows the designed architecture for the resource allocation xApp.

xApp architecture with implemented O-RAN modules and interfaces:

![](https://hackmd.io/_uploads/HJI8vviy6.png)

## Deployment

**Repository**: https://github.com/openaicellular/upgraded-kpimon-xApp

**Deployment Guide**: https://openaicellular.github.io/oaic/kpimon-workshop.html

# xApp Development

Python framework: [ricxappframe](https://pypi.org/project/ricxappframe/#description)

Complete documentation: https://docs.o-ran-sc.org/projects/o-ran-sc-ric-plt-xapp-frame-py/en/latest/
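For orientation, a minimal xApp skeleton using the `ricxappframe` Python framework might look like the sketch below. It is a generic example based on the framework's `RMRXapp` class, not the KPIMON or NexRAN source; the message type, SDL namespace, and port are placeholders.

```python
# Minimal, illustrative xApp skeleton using the O-RAN SC Python framework (ricxappframe).
# EXAMPLE_MTYPE, the SDL namespace, and the RMR port are placeholders for illustration.
from ricxappframe.xapp_frame import RMRXapp
from ricxappframe.rmr import rmr

EXAMPLE_MTYPE = 30001  # placeholder RMR message type


def default_handler(xapp, summary, sbuf):
    """Called for any RMR message without a registered handler."""
    xapp.logger.info("unhandled message: {}".format(summary))
    xapp.rmr_free(sbuf)  # handlers must free the RMR buffer


def indication_handler(xapp, summary, sbuf):
    """Handle one indication-style message: store its raw payload in SDL."""
    payload = summary[rmr.RMR_MS_PAYLOAD]
    xapp.sdl_set("example-ns", "last-indication", payload, usemsgpack=False)
    xapp.rmr_free(sbuf)


# use_fake_sdl=True allows local testing without a Redis/SDL backend.
xapp = RMRXapp(default_handler, rmr_port=4560, use_fake_sdl=True)
xapp.register_callback(indication_handler, EXAMPLE_MTYPE)
xapp.run()  # blocks and consumes RMR messages until the xApp is stopped
```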