# Transmit Power Control for Indoor Small Cells: A Method Based on Federated Reinforcement Learning
###### tags: `5G Reading`
Date : 2022-10-28
## Metadata
[paper link](https://arxiv.org/pdf/2209.13536.pdf)
Li, P., Erdol, H., Briggs, K., Wang, X., Piechocki, R., Ahmad, A., ... & Parekh, A. (2022). Transmit Power Control for Indoor Small Cells: A Method Based on Federated Reinforcement Learning. arXiv preprint arXiv:2209.13536.
## Take away
**What is federated learning**
FL is an ML setting in which **multiple clients collaboratively train a model under the orchestration of a central server**, while keeping the training data decentralized. For reasons of privacy and communication efficiency, clients upload only their local model parameters (never the raw data) to the central server; the server aggregates these parameters into a global model and returns the aggregated parameters to each client.
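The upload-aggregate-return paradigm above can be sketched as a weighted parameter average in the style of FedAvg. This is a minimal illustration, not the paper's exact aggregation rule; the function name `fedavg` and the dict-of-arrays parameter format are assumptions for the sketch.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Aggregate client model parameters into a global model (FedAvg-style).

    client_params: list of dicts mapping layer name -> np.ndarray
    client_sizes:  number of local training samples per client (used as weights)
    """
    total = sum(client_sizes)
    global_params = {}
    for name in client_params[0]:
        # Weighted average of each parameter tensor across clients.
        global_params[name] = sum(
            (n / total) * p[name] for p, n in zip(client_params, client_sizes)
        )
    return global_params

# Two toy clients, each holding a single weight vector.
clients = [{"w": np.array([1.0, 2.0])}, {"w": np.array([3.0, 4.0])}]
print(fedavg(clients, [1, 1])["w"])  # → [2. 3.]
```

With equal client sizes this reduces to a plain mean; unequal sizes bias the global model toward clients with more local data.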
## Summary
This paper addresses **indoor cell transmit power control** in the context of O-RAN. Because radio propagation is **room-dependent**, a single RL model generalises poorly across rooms, so the authors propose an **FRL framework**. Each client sits in one indoor environment and learns the best policy by RL; all clients periodically upload their model parameters, which are aggregated into a global model. The global model then acts as the base model for learning in new environments.
## Note
- FRL is a promising and efficient way to run RL in a **distributed paradigm** and thereby **preserve data privacy**. The FRL system consists of multiple independent RL agents serving multiple rooms. Each local agent acts as the cell's transmit power controller for one room, based on the global DQN model, which is **aggregated by an FL algorithm**. Local agents upload their model parameters to the central server every E cycles; after aggregation, the global model is broadcast back to all agents and serves as the pre-trained model for each agent. The RL models are installed in the RICs of O-RAN as xApps or rApps to perform local training and parameter uploading, while the global model can be deployed on the network operator's central server.
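The train-locally / upload-every-E-cycles / broadcast loop can be sketched as follows. The `ToyAgent` interface (`train_one_round`, `get_params`, `set_params`) and the scalar "parameter" are hypothetical stand-ins for a per-room DQN agent, used only to make the control flow concrete.

```python
class ToyAgent:
    """Stand-in for a per-room RL agent (hypothetical interface)."""
    def __init__(self, start):
        self.param = float(start)   # pretend this is the DQN's weights
        self.uploads = 0

    def train_one_round(self):
        self.param += 1.0           # pretend local RL training updates weights

    def get_params(self):
        self.uploads += 1           # count uploads to the central server
        return self.param

    def set_params(self, p):
        self.param = p              # receive the broadcast global model

def mean_aggregate(params):
    """Simple FL aggregation: average the uploaded parameters."""
    return sum(params) / len(params)

def run_frl(agents, aggregate, E, total_rounds):
    """Each agent trains locally; every E cycles the server aggregates
    the uploaded parameters and broadcasts the global model back."""
    for t in range(1, total_rounds + 1):
        for a in agents:
            a.train_one_round()
        if t % E == 0:
            global_params = aggregate([a.get_params() for a in agents])
            for a in agents:
                a.set_params(global_params)

agents = [ToyAgent(0), ToyAgent(2)]
run_frl(agents, mean_aggregate, E=2, total_rounds=4)
print([a.param for a in agents])  # → [5.0, 5.0]
```

After each aggregation step all agents share the same parameters, which is exactly the "global model as pre-trained model" behaviour described above.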

- **DQN** (deep Q-network) is a relatively mature and **widely used RL algorithm**, originally proposed for playing complex video games directly from images. The idea of DQN is to use a deep neural network $f_\theta$ to estimate the state–action value (Q-value), i.e. $f_\theta(s, a) \approx Q(s, a)$.
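A minimal sketch of the Q-network idea: a tiny two-layer network maps a state vector to one Q-value per discrete action (here, one value per candidate transmit power level), and the controller picks the action with the largest Q-value. The layer sizes, the 5-action space, and the helper names are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_qnet(state_dim, n_actions, hidden=16):
    """Random weights for a tiny two-layer Q-network f_theta (illustrative)."""
    return {
        "W1": rng.normal(0.0, 0.1, (state_dim, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(0.0, 0.1, (hidden, n_actions)),
        "b2": np.zeros(n_actions),
    }

def q_values(theta, s):
    """f_theta(s): one Q-value estimate per discrete action."""
    h = np.maximum(0.0, s @ theta["W1"] + theta["b1"])  # ReLU hidden layer
    return h @ theta["W2"] + theta["b2"]

def greedy_action(theta, s):
    """Pick the action (e.g. power level index) with the highest Q-value."""
    return int(np.argmax(q_values(theta, s)))

theta = init_qnet(state_dim=4, n_actions=5)
s = np.ones(4)                       # toy state vector
print(q_values(theta, s).shape)      # one Q-value per action: (5,)
```

Training would update `theta` toward the Bellman target (reward plus discounted max future Q-value); only the forward pass is shown here.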