---
tags: NCTU-RL
---
# RL Project Weight Agnostic Neural Networks
https://hackmd.io/NZJdT-2wS5qBzrj2kN92Fg
### Experimental results
```
for each activation i in activation_lst:
    if score(biped with activation i removed) > score(biped with all activations):
        # this activation function is not important
    else:
        # this activation function is important
```
```
case 1 -- Linear
case 2 -- Unsigned Step Function
case 3 -- Sin
case 4 -- Gaussian with mean 0 and sigma 1
case 5 -- Hyperbolic Tangent [tanh] (signed)
case 6 -- Sigmoid unsigned [1 / (1 + exp(-x))]
case 7 -- Inverse
case 8 -- Absolute Value
case 9 -- Relu
case 10 -- Cosine
case 11 -- Squared
```
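The activation IDs above could be written as Python callables roughly like this. This is a sketch based on the names in the table, not code copied from the repo; in particular, reading "Inverse" as negation is my assumption.

```python
import numpy as np

# activation functions keyed by the IDs in the table above (hedged sketch)
ACTIVATIONS = {
    1:  lambda x: x,                        # Linear
    2:  lambda x: (x > 0).astype(float),    # Unsigned step
    3:  np.sin,                             # Sin
    4:  lambda x: np.exp(-x**2 / 2),        # Gaussian (mean 0, sigma 1)
    5:  np.tanh,                            # Hyperbolic tangent (signed)
    6:  lambda x: 1 / (1 + np.exp(-x)),     # Sigmoid (unsigned)
    7:  lambda x: -x,                       # Inverse (assumed: negation)
    8:  np.abs,                             # Absolute value
    9:  lambda x: np.maximum(0, x),         # ReLU
    10: np.cos,                             # Cosine
    11: lambda x: x**2,                     # Squared
}

x = np.array([-1.0, 0.0, 2.0])
print(ACTIVATIONS[9](x))  # [0. 0. 2.]
```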
biped using two activation functions (connection + one)
(1)

(2)

(3)

biped with an activation function removed
(4)

(5)

(6)

The step function may not be needed.
The inverse function may be suitable on its own.



#### Continuous control
* Three tasks were done:
1. CartPoleSwingUp (balance a pole)

2. BipedalWalker (bipedal walking; we did not do this one)

3. CarRacer (car racing, the most complex)

The network topologies were hand-designed, referring to the code of the following two papers (no particular details were given):
> 1. Reinforcement Learning for Improving Agent Design.
> D. Ha. arXiv:1810.03779. 2018.
> 2. Recurrent World Models Facilitate Policy Evolution.
> D. Ha, J. Schmidhuber.
> Advances in Neural Information Processing Systems 31, pp. 2451-2463. Curran Associates, Inc. 2018.
Experimental design:
1. Random weights (several individual weights)
2. Shared random weight (a single shared weight)
3. Tuned shared weight (a single shared weight)
4. Tuned weights (several individual weights)
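The four weight-evaluation schemes above can be sketched with a dummy rollout. Everything here is hypothetical: the `rollout` function, the connection count, and the ±2 weight range are stand-ins, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(weights):
    # stand-in for running the policy network in the environment;
    # this toy score peaks when every weight equals 0.5
    return -np.mean((weights - 0.5) ** 2)

n_conn = 10  # hypothetical number of connections

# 1. random weights: every connection gets its own random value
r1 = rollout(rng.uniform(-2, 2, n_conn))

# 2. shared random weight: one random value copied to all connections
w = rng.uniform(-2, 2)
r2 = rollout(np.full(n_conn, w))

# 3. tuned shared weight: sweep a single value and keep the best
sweep = np.linspace(-2, 2, 9)
r3 = max(rollout(np.full(n_conn, v)) for v in sweep)

# 4. tuned weights: tune every connection individually (here, the optimum)
r4 = rollout(np.full(n_conn, 0.5))
```

With this toy score, the tuned settings (3, 4) reach the optimum while the random settings (1, 2) usually do not.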

Outcome:
1. The conventional fixed topology is only useful when its several weights are tuned; WANNs suit the shared-weight design without requiring gradient-based methods.
2. In WANNs, the best-performing shared weight value produces satisfactory, if not optimal, behavior.
3. When WANN weights are tuned, this predisposition does not prevent them from reaching similar performance.
4. WANNs can use fewer connections.

WANNs are also useful outside of reinforcement-learning domains, e.g. in classification.
Even in this high-dimensional classification task, WANNs perform remarkably well.

A WANN can add a one-layer neural network with thousands of weights.
The architectures created still maintain the flexibility to allow weight training.

## Goals
Test the differences caused by different evolution methods
~~Use an experiment environment different from the paper's~~
5/22 (Fri) 3:30 pm meeting (after class)
5/27 (Tue) 2:00 pm meeting
## Schedule
### 5/18 ~ 5/25
Trace the code
Modify the network part (activation functions)
How about removing nodes?

### 5/25 ~ 6/01
Implement
If you figure out how these tools are used, you can write a tutorial below.
#### The network-drawing script has problems with the activation functions of our own models
Everyone can test this.
Solution: TODO
#### Work assignment
博鈞: write helper tools so the later neuron-removal experiments are easier to run; find where the weights are set
孟寰: check whether the model-drawing script has problems; confirm where the activation functions are chosen from, and whether IDs other than 1 and 5 can appear
紹雄: test the car-racing environment; goal: compare the performance of WANN against existing RL methods (PPO, etc.)
#### Results
##### How to adjust the weights
https://sourcegraph.com/github.com/google/brain-tokyo-workshop@73eb4531746825203a3c591896a79ac563d393e7/-/blob/WANNRelease/prettyNeatWann/p/hypkey.txt
Modify `alg_wDist` and `alg_nVals` in the config file
```
alg_wDist - (string) - "standard": 6 chosen values ([-2,-1,-0.5,0.5,1,2])
"other": linspace of alg_nVals between weight caps
alg_nVals - (int) - number of weights to test when evaluating individual
```
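A sketch of how these two settings could generate the shared-weight values to test, based on the description above; the ±2 weight cap is an assumption, and this helper is illustrative, not the repo's code.

```python
import numpy as np

def weight_values(alg_wDist, alg_nVals, cap=2.0):
    """Shared-weight values to evaluate, per the hypkey description (sketch)."""
    if alg_wDist == "standard":
        # the 6 fixed values from the description
        return np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
    # "other": alg_nVals evenly spaced values between the weight caps
    return np.linspace(-cap, cap, alg_nVals)

print(weight_values("standard", 6))
print(weight_values("other", 5))
```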
Exact location in the code:
https://sourcegraph.com/github.com/google/brain-tokyo-workshop@73eb4531746825203a3c591896a79ac563d393e7/-/blob/WANNRelease/prettyNeatWann/domain/wann_task_gym.py
##### Selecting which activation functions to use
Based on this code in `neat_src/neat.py`:
https://sourcegraph.com/github.com/google/brain-tokyo-workshop@73eb4531746825203a3c591896a79ac563d393e7/-/blob/WANNRelease/prettyNeatWann/neat_src/neat.py#L156
you need to add the following to your config file `p.json`:
```
"alg_act": 0
```
Then modify `domain/config.py`; the `h_act` entry of the environment you need determines which activation functions will be generated.
https://sourcegraph.com/github.com/google/brain-tokyo-workshop@73eb4531746825203a3c591896a79ac563d393e7/-/blob/WANNRelease/prettyNeatWann/domain/config.py#L101:15

### Experiment assignments
Check off the ones you have done; if possible, share some results here
#### Removing neurons (if possible, attach the network graph to confirm the neuron is not generated)
- [ ] actID 1 - 4
- [ ] actID 5 - 7
- [ ] actID 8 - 11
Different environments: swingup, racer
- [ ] Different activation functions, e.g. leaky ReLU, ELU
### 6/01 ~ 6/06
Finish the documentation + project
## Easy-to-use version
### Download the modified version
`git clone https://github.com/ex7763/WANN`
### Activation function IDs
```
case 1 -- Linear
case 2 -- Unsigned Step Function
case 3 -- Sin
case 4 -- Gaussian with mean 0 and sigma 1
case 5 -- Hyperbolic Tangent [tanh] (signed)
case 6 -- Sigmoid unsigned [1 / (1 + exp(-x))]
case 7 -- Inverse
case 8 -- Absolute Value
case 9 -- Relu
case 10 -- Cosine
case 11 -- Squared
```
### Training
Enter `prettyNeatWann`
```
chmod +x train.sh
./train.sh "env_name" "[list of activation function you want]"
# example; the quotation marks are needed
./train.sh swingup "[1, 2, 3, 4, 5, 6, 7]"
./train.sh biped "[1, 3, 4, 7]"
```
After training, the generated model is in the `log` folder, with the filename `<env_name>_<date>`,
e.g. `swingup_2020-06-01_13-43-23.out`...

### Drawing the model
In the command below, substitute the filename `swingup_2020-06-01_13-43-23.out_best.out` with your own result from above, and likewise the environment `swingup`
```
python draw.py --model_path log/swingup_2020-06-01_13-43-23.out_best.out --env=swingup
```

### Testing the model
```
$ python wann_test.py -p my_s.json -i log/swingup_2020-06-01_13-43-23.out_best.out
*** Running with hyperparameters: my_s.json ***
[***] Fitness: [ 89.21 124.45 82.87 3.96 7.16 5.44]
[***] Weight Values: [-2. -1. -0.5 0.5 1. 2. ]
```
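In the output above, the fitness list pairs element-wise with the weight values, so picking the best shared weight is an argmax. A small sketch:

```python
import numpy as np

# per-weight fitness and the weight values, copied from the output above
fitness = np.array([89.21, 124.45, 82.87, 3.96, 7.16, 5.44])
weights = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])

# the shared weight whose evaluation scored highest
best = weights[np.argmax(fitness)]
print(best)  # -1.0
```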
### Modifying hyperparameters (number of training generations, etc.)
Modify `my_s.json`, referring to the descriptions in `https://sourcegraph.com/github.com/google/brain-tokyo-workshop@73eb4531746825203a3c591896a79ac563d393e7/-/blob/WANNRelease/prettyNeatWann/p/hypkey.txt`
## Simple usage
### Download
`git clone https://github.com/google/brain-tokyo-workshop`
### Train with WANN
`cd brain-tokyo-workshop/WANNRelease/WANN`
### Child processes may linger after interrupting with Ctrl-C
This shell script can help you fix that:
```
kill -9 `ps aux | grep wann_train | tr -s ' ' | cut -d" " -f2`
```
#### Using cartpole
`python wann_train.py -p p/laptop_swing.json -n 8`
### Testing
#### Using cartpole
`python wann_test.py -p p/swingup.json -i champions/swing.out --nReps 3 --view True`
## Parameter descriptions
`-p` reads the given config file; the `p` folder contains config files for many environments
`-n` number of threads to use
`task` the environment to use
`maxGen` number of evolution (training) generations
### cartpole
```
{
  "task": "swingup",
  "maxGen": 1024,
  "alg_nReps": 3,
  "popSize": 192,
  "select_eliteRatio": 0.2,
  "select_tournSize": 8
}
```
### biped
```
{
  "task": "biped",
  "alg_nReps": 4,
  "maxGen": 2048,
  "popSize": 480,
  "prob_initEnable": 0.25,
  "select_tournSize": 16
}
```
### vae_racer
The program has a bug and requires downloading extra files (details to be added)
```
python wann_train.py -p p/vaeRacer.json -n 4
python wann_test.py -p p/vaeRacer.json -i log/0032.out --nReps 3 --view True
```
## Code
### Simplified version(?) https://github.com/google/brain-tokyo-workshop/blob/master/WANNRelease/WANN/wann_src/ind.py
1. applyAct: implements the activation functions during the forward pass
### More complete version https://github.com/google/brain-tokyo-workshop/tree/master/WANNRelease/prettyNeatWann
1. WANNRelease/prettyNeatWann/neat_src/wann_ind.py topoMutate: mutates neurons
`nodeG[2, :]` holds the activation function of each node
If an activation we don't want shows up, cancel it?
```
child - (Ind) - individual to be mutated
.conns - (np_array) - connection genes
[5 X nUniqueGenes]
[0,:] == Innovation Number (unique Id)
[1,:] == Source Node Id
[2,:] == Destination Node Id
[3,:] == Weight Value
[4,:] == Enabled?
.nodes - (np_array) - node genes
[3 X nUniqueGenes]
[0,:] == Node Id
[1,:] == Type (1=input, 2=output 3=hidden 4=bias)
[2,:] == Activation function (as int)
```
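Following the gene layout above, scanning `nodes[2, :]` and replacing unwanted activation IDs could look like this. This helper is hypothetical (not part of the repo) and relates to the "cancel unwanted activations" idea mentioned under topoMutate.

```python
import numpy as np

def remap_activations(nodes, allowed, fallback=1):
    """Replace activation IDs not in `allowed` with `fallback` (hypothetical helper)."""
    nodes = nodes.copy()
    acts = nodes[2, :]
    # boolean mask of nodes whose activation ID is disallowed
    nodes[2, ~np.isin(acts, allowed)] = fallback
    return nodes

# toy genome: 3 nodes with activations 5 (tanh), 2 (step), 9 (relu)
nodes = np.array([[0, 1, 2],      # [0,:] node ids
                  [1, 3, 2],      # [1,:] types
                  [5, 2, 9]])     # [2,:] activation IDs
print(remap_activations(nodes, allowed=[1, 5, 9])[2])  # [5 1 9]
```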
## Template for drawing the network structure
Create a file in WANNRelease/prettyNeatWann with the content below
Note: you may need to install some required packages; pip should handle it
Change `model_path` and `env` to what you need
### Parameters
`--model_path`: path to the model
`--env`: environment name, e.g. `swingup`
### Code
```
from vis.viewInd import viewInd
from matplotlib import pyplot as plt
import argparse

# read the model path and environment name from the command line
parser = argparse.ArgumentParser()
parser.add_argument("--model_path", default="./champions/swing.out")
parser.add_argument("--env", default="swingup")
args = parser.parse_args()

# draw the network structure of the model and show the figure
viewInd(args.model_path, args.env)
plt.show()
```
## Reference
https://github.com/google/brain-tokyo-workshop/tree/master/WANNRelease/WANNTool
https://sourcegraph.com/github.com/google/brain-tokyo-workshop@master/-/blob/WANNRelease/WANNTool/ann.py#L47

A.2.5

## Paper
https://arxiv.org/pdf/1906.04358.pdf
https://weightagnostic.github.io/