###### tags: `Neural Network Control`
# Neuro-control Systems 類神經控制系統 - HW1
## Outline
* **Question**
* **MLP**
* **RBF**
* **RMSE Comparison**
## Question
Use an MLP and an RBF network respectively to approximate the following function. A training set $(x_i, y_i)$ with 5000 samples is created, where the $x_i$ are uniformly randomly distributed in $[-10, 10]$. Large uniform noise distributed in $[-0.2, 0.2]$ is added to all the training data.
$$y(x) = 0.6\sin(\pi x) + 0.3\sin(3\pi x) + 0.1\sin(5\pi x)$$
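A minimal sketch of how such a training set can be generated with NumPy; the random seed and variable names are our own choices, not part of the assignment:

```python
import numpy as np

def target(x):
    """Target function y(x) = 0.6 sin(pi x) + 0.3 sin(3 pi x) + 0.1 sin(5 pi x)."""
    return (0.6 * np.sin(np.pi * x)
            + 0.3 * np.sin(3 * np.pi * x)
            + 0.1 * np.sin(5 * np.pi * x))

rng = np.random.default_rng(0)         # fixed seed for reproducibility (our choice)
x_train = rng.uniform(-10, 10, 5000)   # 5000 inputs, uniform in [-10, 10]
noise = rng.uniform(-0.2, 0.2, 5000)   # uniform noise in [-0.2, 0.2]
y_train = target(x_train) + noise      # noisy training targets
```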

## MLP
### Prerequisites
In our code we use TensorFlow, Keras, NumPy, and Matplotlib. The Python version is 3.9.12.
### Structure
Here we use 7 hidden layers with 32 units per layer. The activation function is `tanh`, and the loss function and optimizer are `mean_squared_error` and `adam` respectively; a sketch of the model is given after the parameter summary below.
*Total params: 7,489
Trainable params: 7,489
Non-trainable params: 0*
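Below is a minimal Keras sketch of this model. Note that the reported total of 7,489 parameters corresponds to eight `Dense(32)` layers (one mapping the scalar input to 32 units, followed by seven 32-to-32 layers), so the sketch uses that count; the training settings (`epochs`, `batch_size`) are our assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Stack of Dense(32) layers with tanh activations and a linear output unit.
model = keras.Sequential(
    [layers.Dense(32, activation="tanh", input_shape=(1,))]
    + [layers.Dense(32, activation="tanh") for _ in range(7)]
    + [layers.Dense(1)]
)
model.compile(loss="mean_squared_error", optimizer="adam")
model.summary()  # reports 7,489 trainable parameters

# Hypothetical training call; x_train/y_train come from the data sketch above.
model.fit(x_train.reshape(-1, 1), y_train, epochs=100, batch_size=64, verbose=0)
```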

### Output
* RMSE w.r.t. noisy data: 0.6804642958446606
* RMSE w.r.t. true function: 0.6710275037300814
* Final training loss: 0.026710696518421173


## RBF
### Prerequisites
Keras has no official RBF layer, so we use the source code from PetraVidnerova on her GitHub. The source code and other examples she provides are available [here](https://github.com/PetraVidnerova/rbf_keras). Anyone who uses the code **must download** the `rbflayer.py` file from the link above and put it in the same folder as the main code.
Also generate a text file `data.txt` in the same folder; the first block of our code saves the data set to this file (see the sketch below).
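A minimal sketch of that save/load round trip, assuming a simple two-column (x, y) layout for `data.txt`:

```python
import numpy as np

# First block: save the training set as two whitespace-separated columns (x, y).
np.savetxt("data.txt", np.column_stack([x_train, y_train]))

# Later, in the RBF script: read it back.
data = np.loadtxt("data.txt")
X, y = data[:, :1], data[:, 1]  # keep X two-dimensional for Keras
```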
### Structure
We have only two layers in total: one RBF layer and one output layer. The RBF layer has 50 units, which we found gives the best model; we tried adding one more layer and adjusting the unit counts, but the performance was worse.
The loss function is `mean_squared_error` and the optimizer is `RMSprop()`; a sketch follows the parameter summary below.
*Total params: 151
Trainable params: 151
Non-trainable params: 0*
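Below is a minimal sketch of this model, following the example in the rbf_keras repository; `betas=2.0` and the `InitCentersRandom` initializer are taken from that example and are assumptions, not necessarily the exact settings behind the reported results.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import RMSprop
from rbflayer import RBFLayer, InitCentersRandom  # from PetraVidnerova/rbf_keras

# RBF layer with 50 units whose centers are initialized from the training
# inputs X, followed by a single linear output unit.
model = Sequential()
model.add(RBFLayer(50,
                   initializer=InitCentersRandom(X),
                   betas=2.0,
                   input_shape=(1,)))
model.add(Dense(1))
model.compile(loss="mean_squared_error", optimizer=RMSprop())
model.summary()  # reports 151 trainable parameters
model.fit(X, y, epochs=100, batch_size=64, verbose=0)  # hypothetical settings
```

Depending on the installed Keras version, the imports may need to be `keras.*` instead of `tensorflow.keras.*` to match what `rbflayer.py` expects.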

### Output
* RMSE w.r.t. noisy data: 0.14038921959132608
* RMSE w.r.t. true function: 0.6728185255474752
* Final training loss: 0.019972974434494972


## RMSE Comparison
Measured against the noisy data, the RBF network achieves a better fit than the MLP, but this could indicate overfitting if we use the model for prediction.
Measured against the original noise-free function, the MLP and the RBF network have similar RMSE of about 0.67.
|                           | MLP    | RBF    |
|:------------------------- |:------ |:------ |
| RMSE w.r.t. true function | 0.6710 | 0.6728 |
| RMSE w.r.t. noisy data    | 0.6805 | 0.1404 |
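For reference, here is a sketch of how the two RMSE figures can be computed, reusing `target`, `x_train`, and `y_train` from the data-generation sketch; `model` is either trained network:

```python
import numpy as np

y_pred = model.predict(x_train.reshape(-1, 1)).ravel()

# RMSE against the noisy targets and against the noise-free function.
rmse_noise = np.sqrt(np.mean((y_pred - y_train) ** 2))
rmse_true = np.sqrt(np.mean((y_pred - target(x_train)) ** 2))
print(f"RMSE w.r.t. noisy data:    {rmse_noise:.4f}")
print(f"RMSE w.r.t. true function: {rmse_true:.4f}")
```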