# KL520 toolchain
### Pull the docker image
``` shell
docker pull kneron/toolchain:v0.14.1
```
#### Map the local ~/data1 folder to /data1 inside the toolchain container and start it; because of `--rm`, the container is deleted as soon as it exits
``` shell
docker run --rm -it -v /home/raoblack/data1:/data1 kneron/toolchain:v0.14.1
```
#### Inside the running container, change directory to /data1
``` shell
cd /data1/
```
#### Create a MobilenetV2 directory
``` shell
mkdir MobilenetV2 && cd MobilenetV2
```
#### In the base Python environment (under /data1/MobilenetV2), save the model
``` python
from keras.applications.mobilenet_v2 import MobileNetV2

# Download ImageNet-pretrained MobileNetV2 (with the classification head)
# and save it in Keras HDF5 format
model = MobileNetV2(include_top=True, weights='imagenet')
model.save('MobileNetV2.h5')
```
#### Expected result
```
(base) root@2cecfd77e235:/data1/MobilenetV2# ls
MobileNetV2.h5
```
#### Convert the Keras .h5 to ONNX
``` shell
python /workspace/scripts/convert_model.py keras /data1/MobilenetV2/MobileNetV2.h5 /data1/MobilenetV2/MobileNetV2.h5.onnx
```
#### ONNX to ONNX (ONNX optimization)
If your ONNX file was not converted from an .h5 by this toolchain container, but was instead downloaded already converted by someone else, it may not be structured for the Kneron KL520, so it must first be optimized with the following command:
``` shell
python /workspace/scripts/convert_model.py onnx input.onnx output.onnx
```
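To make sure the optimized file is still a valid ONNX graph, you can load it with the `onnx` Python package (a minimal sketch; it assumes the `onnx` package is available in the container's base environment):
``` python
import onnx

# Load the optimized model and run ONNX's structural validation;
# check_model raises an exception if the graph is malformed
model = onnx.load('output.onnx')
onnx.checker.check_model(model)
print('opset:', model.opset_import[0].version)
```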
#### Create an images folder under /data1/MobilenetV2/ and drop in any 224x224 image (here we borrow the LittleNet example PNGs)
``` shell
mkdir images && cd images
cp /workspace/examples/LittleNet/input_params.json ../..
cp /workspace/examples/LittleNet/pytorch_imgs/*.png .
cd ..
```
The result looks like this:

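If you want to use your own images instead of the example PNGs, here is a quick sketch to check that everything in the folder really is 224x224 (it assumes Pillow is installed in the base environment):
``` python
from pathlib import Path
from PIL import Image

# Every image used for FP-analysis should match the model input size
for p in sorted(Path('/data1/MobilenetV2/images').iterdir()):
    with Image.open(p) as im:
        print(p.name, im.size)  # expect (224, 224) for each file
```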
#### Edit input_params.json
``` shell
nano /data1/input_params.json
```
Edit it as follows. Note that **input_1_o0** is the name of the MobilenetV2 input layer; you can find it by opening MobileNetV2.h5.onnx in netron.
``` json
{
    "model_info": {
        "input_onnx_file": "/data1/MobilenetV2/MobileNetV2.h5.onnx",
        "model_inputs": [
            {
                "model_input_name": "input_1_o0",
                "input_image_folder": "/data1/MobilenetV2/images"
            }
        ]
    },
    "preprocess": {
        "img_preprocess_method": "kneron",
        "img_channel": "RGB",
        "radix": 8,
        "keep_aspect_ratio": true,
        "pad_mode": 1,
        "p_crop": {
            "crop_x": 0,
            "crop_y": 0,
            "crop_w": 0,
            "crop_h": 0
        }
    },
    "simulator_img_files": [
        {
            "model_input_name": "input_1_o0",
            "input_image": "/data1/MobilenetV2/images/Abdullah_0001.png"
        }
    ]
}
```
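If you prefer not to open netron, the input names can also be read straight from the ONNX file (a sketch, again assuming the `onnx` package is installed):
``` python
import onnx

model = onnx.load('/data1/MobilenetV2/MobileNetV2.h5.onnx')
# graph.input can also list weight initializers in some exports, so filter them out
weights = {t.name for t in model.graph.initializer}
for inp in model.graph.input:
    if inp.name not in weights:
        print(inp.name)  # expect: input_1_o0
```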
#### input_params.json explained (all comments must be removed before the file can be used to compile)
``` json
{
    // The basic information of the model needed by the toolchain.
    "model_info": {
        // The input model file. If you are not sure about the input names, you can
        // check them with a model visualization tool like [netron](https://netron.app/).
        // If the configuration is referred to by `batch_input_params.json`, this field
        // will be ignored.
        "input_onnx_file": "/data1/yolov5s_e.onnx",
        // A list of the model input names with the absolute paths of their corresponding
        // input image folders. It is required by FP-analysis.
        "model_inputs": [{
            "model_input_name": "data_out_0",
            "input_image_folder": "/data1/100_image/yolov5"
        }],
        // Special mode for fp-analysis. Currently available modes:
        // - default: for most models.
        // - post_sigmoid: recommended for yolo models.
        // If this option is not present, the 'default' mode is used.
        "quantize_mode": "default",
        // For fp-analysis, remove outliers when calculating max & min. It should be between 0.0 and 1.0.
        // If not given, the default value is 0.999.
        "outlier": 0.999
    },
    // The preprocess method of the input images.
    "preprocess": {
        // The image preprocess methods.
        // Options: `kneron`, `tensorflow`, `yolo`, `caffe`, `pytorch`, `customized`
        // `kneron`: RGB/256 - 0.5
        // `tensorflow`: RGB/127.5 - 1.0
        // `yolo`: RGB/255.0
        // `pytorch`: (RGB/255. - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]
        // `caffe` (BGR format): BGR - [103.939, 116.779, 123.68]
        // `customized`: please refer to FAQ question 8
        "img_preprocess_method": "kneron",
        // The channel information after the input image is preprocessed.
        // Options: RGB, BGR, L
        // L means single channel.
        "img_channel": "RGB",
        // The radix information for the npu image process.
        // The formula for radix is 7 - ceil(log2(abs_max)).
        // For example, if the image preprocess method is "kneron",
        // the related preprocess formula is RGB/256 - 0.5,
        // and the processed value range will be (-0.5, 0.5).
        // abs_max = max(abs(-0.5), abs(0.5)) = 0.5
        // radix = 7 - ceil(log2(abs_max)) = 7 - (-1) = 8
        "radix": 8,
        // [optional]
        // Indicates whether or not to keep the aspect ratio. Default is true.
        "keep_aspect_ratio": true,
        // [optional]
        // The mode for adding padding; it is used only when `keep_aspect_ratio`
        // is true. Default is 1.
        // Options: 0, 1
        // 0 - If the original width is too small, padding is added equally on the
        //     left and right sides; if the original height is too small, padding
        //     is added equally on the top and bottom.
        // 1 - If the original width is too small, padding is added on the right
        //     side only; if the original height is too small, padding is added
        //     on the bottom only.
        "pad_mode": 1,
        // [optional]
        // The parameters for cropping the image. It has four sub-parameters. Default is all 0s.
        // - crop_x: the left cropping point coordinate.
        // - crop_y: the top cropping point coordinate.
        // - crop_w: the width of the cropped image.
        // - crop_h: the height of the cropped image.
        "p_crop": {
            "crop_x": 0,
            "crop_y": 0,
            "crop_w": 0,
            "crop_h": 0
        }
    },
    // [optional]
    // A list of the model input names with the absolute paths of their corresponding
    // input images. It is used by the hardware validator. If this field is not given,
    // a random image for each input will be picked from the `input_image_folder`
    // in the `model_info`.
    "simulator_img_files": [{
        "model_input_name": "data_out",
        "input_image": "/data1/100_image/yolov5/a.jpg"
    }]
}
```
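As a sanity check on the radix formula in the comments above, here is a throwaway Python helper (not part of the toolchain) that reproduces radix = 8 for the `kneron` preprocess:
``` python
import math

def radix(abs_max: float) -> int:
    # radix = 7 - ceil(log2(abs_max))
    return 7 - math.ceil(math.log2(abs_max))

# "kneron": RGB/256 - 0.5 -> range (-0.5, 0.5), abs_max = 0.5
print(radix(0.5))  # 8
# "yolo": RGB/255.0 -> range [0, 1.0], abs_max = 1.0
print(radix(1.0))  # 7
```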
### Compile
``` shell
# For KDP520
# python /workspace/scripts/fpAnalyserCompilerIpevaluator_520.py -t thread_number
python /workspace/scripts/fpAnalyserCompilerIpevaluator_520.py -t 8
```
#### Compilation succeeded

#### The output looks like this

### Compiler and Evaluator
#### Run
``` shell
# For KDP520
cd /workspace/scripts && ./compilerIpevaluator_520.sh /data1/MobilenetV2/MobileNetV2.h5.onnx
```

### Batch-Compile
Write a batch_input_params.json; the model id (56 here) is just an arbitrary number.
``` shell
cd /data1/
nano batch_input_params.json
```
``` json
{
    "models": [
        {
            "id": 56,
            "version": "1",
            "path": "/data1/fpAnalyser/MobileNetV2.h5.quan.wqbi.bie"
        }
    ]
}
```
The file structure now looks like this:
```
(base) root@2cecfd77e235:/data1# ls
MobilenetV2 batch_input_params.json compiler fpAnalyser input_params.json
```
#### Run
``` shell
python /workspace/scripts/batchCompile_520.py
```
#### The result looks like this

#### Output file

### WORK!!
## Run your own model via kdp2
Under hostlib/build/bin_kdp2 there is an **ex_kdp2_generic_inference_raw** example that can run simple inference with your own *.nef model.
### Update firmware
Change into **build/bin_kdp2**
``` shell
./ex_kdp2_update_firmware
```
Then run the following in bin_kdp2:
``` shell
./ex_kdp2_generic_inference_raw -s 1 -m '../../input_models/KL520/kl520_cam_dme_my_example/models_520.nef' -i '../../input_images/one_bike_many_cars_224x224.bmp' -d 57 -c 'RGB565' -n 'none' -p 'bypass' -l 20
```
A successful run looks like this:


Inspect **node0_1x1x2.txt** in `build/bin_kdp2` and check whether it matches the prediction from Keras.
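To produce the Keras-side reference, something like the following sketch works (the model and image paths are placeholders; also match the preprocessing to the `img_preprocess_method` you compiled with, here `kneron`):
``` python
import numpy as np
from keras.models import load_model
from keras.preprocessing import image

# Load the same Keras model your .nef was compiled from (path is a placeholder)
model = load_model('/data1/MobilenetV2/MobileNetV2.h5')
img = image.load_img('one_bike_many_cars_224x224.bmp', target_size=(224, 224))
# "kneron" preprocessing: RGB/256 - 0.5
x = np.expand_dims(image.img_to_array(img), axis=0) / 256.0 - 0.5
print(model.predict(x))
```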

### Important note
If you have run the **kdp2 API** and updated the ncpu and scpu firmware with **./ex_kdp2_update_firmware**, the examples in build/bin will no longer be able to run inference.
You need to put fw_ncpu.bin and fw_scpu.bin from host_lib/app_binaries/KL520/ssd_fd_lm
back into host_lib/app_binaries/KL520/ota/ready_to_load,
then run the following from build/bin:
``` shell
./kl520_update_app_nef_model
```
Only then will the examples in build/bin work again, because the firmware for the two API versions is incompatible.