# **2021/07/26**
[[16 Collecting your own Detection Datasets]](https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-collect-detection.md)
###### tags: `藍柏婷`
###### tags: `2021/07/26`
### **==== Collecting your own Detection Datasets ====**
#### **== Creating the Label File ==**
Create an empty folder `chicks` under `jetson-inference/python/training/detection/ssd/data`, then create a text file `labels.txt` inside `chicks` that holds the object class names.
#### **== Problems Encountered Along the Way ==**
:::warning

As the screenshot above shows, the `data` folder is owned by `root` (from inside the Docker container), so without entering the container I could not modify the folder.
(The `data` folder icon also shows a lock badge at its bottom-right corner.)
:::
:::info
To unlock the permissions, go to the folder that contains `data` (`Desktop/scifair/image_classification_demo/jetson-inference/python/training/detection/ssd`) and run
$ sudo chmod -R 777 data
>```
>$ sudo chmod -R 777 <target-directory>
>```
>***sudo**: a Linux system-administration command that lets the administrator allow ordinary users to run some or all root commands.*
>***-R**: applies the change recursively to every subdirectory and file under the directory.*
>***777**: grants the highest permission (read/write/execute) to all users.*
>***target-directory**: replace with the file or folder whose permissions you want to unlock.*
>*--- by https://www.itread01.com/content/1540975029.html*

After the command runs, if the lock badge at the bottom-right of the `data` folder icon disappears, the unlock succeeded.
(You can now add files to it directly!)
:::
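The effect of `chmod -R 777` can be tried out on a scratch directory first. A minimal sketch, where `demo_data` is a hypothetical stand-in for the locked `data` folder and `stat -c` assumes GNU coreutils:

```shell
# demo_data stands in for the locked `data` folder (hypothetical name)
mkdir -p demo_data/sub
touch demo_data/sub/file.txt
chmod -R 777 demo_data            # same flags as used on `data` above
stat -c '%a' demo_data demo_data/sub demo_data/sub/file.txt
# → 777 on all three lines, confirming -R reached the nested entries
```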
#### **== Recreating the Label File ==**
Create an empty folder `chicks` under `jetson-inference/python/training/detection/ssd/data`, then create a text file `labels.txt` inside `chicks` that lists the object class names in alphabetical order: `bee` `christmas` `eagle` `honey` `pongpong`.
(For details, see `==== (Retry) Collecting your own Classification Datasets ====` in the `2021/07/19&21` entry; the difference here is that the chick classes are different.)
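The folder and label file can also be created from the terminal. A sketch, assuming you run it from `jetson-inference/python/training/detection/ssd` with the paths used above:

```shell
# Create the dataset folder and the alphabetically ordered class list
mkdir -p data/chicks
printf '%s\n' bee christmas eagle honey pongpong > data/chicks/labels.txt
cat data/chicks/labels.txt
```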
#### **== Launching the Tool ==**
In `jetson-inference/python/training/detection/ssd/data`, run
$ camera-capture /dev/video0    # using the V4L2 camera /dev/video0


* Change `Dataset Type` to `Detection` mode
* Set `Dataset Path` to `ssd/data/chicks`
* Set `Class Labels` to `ssd/data/chicks/labels.txt`
* Once the object is positioned, press the `Freeze/Edit` button (or `space`) to freeze the frame
* While frozen, the cursor turns into a crosshair; drag boxes around the objects in the frame and select each one's class label
* When finished, press `Freeze/Edit` (or `space`) again to unfreeze; the data is saved automatically
* **Repeat!!!**
>*Other widgets in the control window include:*
>* `Save on Unfreeze` - automatically save the data when unfreezing
>* `Clear on Unfreeze` - automatically remove the previous bounding boxes on unfreeze
>* `Merge Sets` - save the same data across the train, val, and test sets
>* `Current Set` - select from the train/val/test sets
> * for object detection, you need at least train and test sets
> * although if you check Merge Sets, the data will be replicated as train, val, and test
>* `JPEG Quality` - control the encoding quality and disk size of the saved images
>
>*--- by Dusty-nv*
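camera-capture stores detection data in Pascal VOC layout (`JPEGImages/` for the frames, `Annotations/` for the bounding-box XML files). A frame that was frozen but never boxed gets no XML and is later skipped by `train_ssd.py` with a "no box/labels annotations" warning, so it can help to list such frames ahead of time. A hedged sketch using a stand-in folder `chicks_demo` with made-up filenames:

```shell
# Stand-in layout mimicking what camera-capture writes (hypothetical names)
mkdir -p chicks_demo/JPEGImages chicks_demo/Annotations
touch chicks_demo/JPEGImages/20210726-121011.jpg        # frozen but never boxed
touch chicks_demo/JPEGImages/20210726-121100.jpg
touch chicks_demo/Annotations/20210726-121100.xml
# List images that lack a matching annotation file -- these are the ones
# train_ssd.py would skip with a "no box/labels annotations" warning
for img in chicks_demo/JPEGImages/*.jpg; do
    base=$(basename "$img" .jpg)
    [ -f "chicks_demo/Annotations/$base.xml" ] || echo "no annotations: $base"
done
# → no annotations: 20210726-121011
```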
#### **== Training your Model ==**
Go to `jetson-inference/python/training/detection/ssd` and run
$ python3 train_ssd.py --dataset-type=voc --data=data/<YOUR-DATASET> --model-dir=models/<YOUR-MODEL>
>ex. `$ python3 train_ssd.py --dataset-type=voc --data=data/chicks --epochs=10 --model-dir=models/chicks`
>>`--epochs` changes the number of training epochs to run (default: 35)
>>`--batch-size` try increasing this depending on available memory (default: 4)
>>`--workers` number of data-loader threads (0 = disable multithreading) (default: 2)
Resulting output ->
```
iamai2021@iamai2021:~/Desktop/scifair/image_classification_demo/jetson-inference/python/training/detection/ssd$ python3 train_ssd.py --dataset-type=voc --data=data/chicks --epochs=10 --model-dir=models/chicks
2021-07-26 13:28:11 - Using CUDA...
2021-07-26 13:28:11 - Namespace(balance_data=False, base_net=None, base_net_lr=0.001, batch_size=4, checkpoint_folder='models/chicks', dataset_type='voc', datasets=['data/chicks'], debug_steps=10, extra_layers_lr=None, freeze_base_net=False, freeze_net=False, gamma=0.1, lr=0.01, mb2_width_mult=1.0, milestones='80,100', momentum=0.9, net='mb1-ssd', num_epochs=10, num_workers=2, pretrained_ssd='models/mobilenet-v1-ssd-mp-0_675.pth', resume=None, scheduler='cosine', t_max=100, use_cuda=True, validation_epochs=1, weight_decay=0.0005)
2021-07-26 13:28:11 - Prepare training datasets.
warning - image 20210726-121011 has no box/labels annotations, ignoring from dataset
warning - image 20210726-121357 has no box/labels annotations, ignoring from dataset
warning - image 20210726-121544 has no box/labels annotations, ignoring from dataset
2021-07-26 13:28:11 - VOC Labels read from file: ('BACKGROUND', 'bee', 'christmas', 'eagle', 'honey', 'pongpong')
2021-07-26 13:28:11 - Stored labels into file models/chicks/labels.txt.
2021-07-26 13:28:11 - Train dataset size: 17
2021-07-26 13:28:11 - Prepare Validation datasets.
warning - image 20210726-124357 has no box/labels annotations, ignoring from dataset
2021-07-26 13:28:11 - VOC Labels read from file: ('BACKGROUND', 'bee', 'christmas', 'eagle', 'honey', 'pongpong')
2021-07-26 13:28:11 - Validation dataset size: 4
2021-07-26 13:28:11 - Build network.
2021-07-26 13:28:11 - Init from pretrained ssd models/mobilenet-v1-ssd-mp-0_675.pth
2021-07-26 13:28:12 - Took 0.51 seconds to load the model.
2021-07-26 13:29:11 - Learning rate: 0.01, Base net learning rate: 0.001, Extra Layers learning rate: 0.01.
2021-07-26 13:29:11 - Uses CosineAnnealingLR scheduler.
2021-07-26 13:29:11 - Start training from epoch 0.
/home/iamai2021/.local/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:123: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
/home/iamai2021/.local/lib/python3.6/site-packages/torch/nn/_reduction.py:44: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
2021-07-26 13:33:10 - Epoch: 0, Validation Loss: 8.2600, Validation Regression Loss 2.3262, Validation Classification Loss: 5.9338
2021-07-26 13:33:11 - Saved model models/chicks/mb1-ssd-Epoch-0-Loss-8.260007858276367.pth
2021-07-26 13:35:18 - Epoch: 1, Validation Loss: 6.4462, Validation Regression Loss 2.0853, Validation Classification Loss: 4.3609
2021-07-26 13:35:20 - Saved model models/chicks/mb1-ssd-Epoch-1-Loss-6.44620418548584.pth
2021-07-26 13:45:20 - Epoch: 2, Validation Loss: 6.2639, Validation Regression Loss 2.8347, Validation Classification Loss: 3.4293
2021-07-26 13:45:24 - Saved model models/chicks/mb1-ssd-Epoch-2-Loss-6.2639265060424805.pth
2021-07-26 13:58:30 - Epoch: 3, Validation Loss: 5.9542, Validation Regression Loss 2.7569, Validation Classification Loss: 3.1973
2021-07-26 13:58:55 - Saved model models/chicks/mb1-ssd-Epoch-3-Loss-5.954225540161133.pth
2021-07-26 14:16:08 - Epoch: 4, Validation Loss: 5.9305, Validation Regression Loss 2.3554, Validation Classification Loss: 3.5751
2021-07-26 14:16:10 - Saved model models/chicks/mb1-ssd-Epoch-4-Loss-5.930512428283691.pth
2021-07-26 14:27:13 - Epoch: 5, Validation Loss: 6.6751, Validation Regression Loss 1.8063, Validation Classification Loss: 4.8688
2021-07-26 14:27:34 - Saved model models/chicks/mb1-ssd-Epoch-5-Loss-6.675135135650635.pth
2021-07-26 14:42:45 - Epoch: 6, Validation Loss: 7.0217, Validation Regression Loss 2.1538, Validation Classification Loss: 4.8679
2021-07-26 14:42:47 - Saved model models/chicks/mb1-ssd-Epoch-6-Loss-7.021711349487305.pth
2021-07-26 14:46:11 - Epoch: 7, Validation Loss: 6.1284, Validation Regression Loss 1.8550, Validation Classification Loss: 4.2734
2021-07-26 14:46:13 - Saved model models/chicks/mb1-ssd-Epoch-7-Loss-6.128396987915039.pth
2021-07-26 14:49:51 - Epoch: 8, Validation Loss: 4.6806, Validation Regression Loss 1.6064, Validation Classification Loss: 3.0743
2021-07-26 14:49:52 - Saved model models/chicks/mb1-ssd-Epoch-8-Loss-4.680648326873779.pth
2021-07-26 14:54:14 - Epoch: 9, Validation Loss: 4.2688, Validation Regression Loss 1.4987, Validation Classification Loss: 2.7702
2021-07-26 14:54:16 - Saved model models/chicks/mb1-ssd-Epoch-9-Loss-4.268825531005859.pth
2021-07-26 14:54:16 - Task done, exiting program.
iamai2021@iamai2021:~/Desktop/scifair/image_classification_demo/jetson-inference/python/training/detection/ssd$
```
>Total runtime: 1 hour, 26 minutes, 6 seconds
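As the log shows, `train_ssd.py` names every checkpoint after its epoch and validation loss, so the best epoch can be picked straight from the filenames. A sketch using two hypothetical checkpoint names that follow that scheme:

```shell
# Two hypothetical checkpoints following train_ssd.py's naming scheme
mkdir -p models/chicks
touch models/chicks/mb1-ssd-Epoch-0-Loss-8.2600.pth \
      models/chicks/mb1-ssd-Epoch-9-Loss-4.2688.pth
# The loss is the 6th '-'-separated field of the path; a general-numeric
# sort puts the lowest-loss (best) checkpoint first
ls models/chicks/mb1-ssd-Epoch-*-Loss-*.pth | sort -t- -k6 -g | head -n1
# → models/chicks/mb1-ssd-Epoch-9-Loss-4.2688.pth
```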
#### **== Converting the Model to ONNX ==**
$ python3 onnx_export.py --model-dir=models/<YOUR-MODEL>
>ex. `$ python3 onnx_export.py --model-dir=models/chicks`
>>* the converted model will be saved at `<YOUR-MODEL>/ssd-mobilenet.onnx`
>>* this takes roughly 4-5 minutes
#### **== Running the Live Camera Program ==**
$ NET=models/<YOUR-MODEL>
$ detectnet --model=$NET/ssd-mobilenet.onnx --labels=$NET/labels.txt \
--input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
/dev/video0
>ex.
>`NET=models/chicks`
>
>`detectnet --model=$NET/ssd-mobilenet.onnx --labels=$NET/labels.txt \`
>`--input-blob=input_0 --output-cvg=scores --output-bbox=boxes \`
>`/dev/video0`
#### **== Live Camera Results ==**
>**= Successful detections =**
>
>
>(Slightly larger chicks seem to have a higher detection rate)
>**= Failed detections =**
>
>
>(The small ones are not detected at all!!!)
>
>
>(Handling of object size clearly still needs work! (The one above is actually the chubby one) :laughing:)
---
### **==== Last ====**
This is the last step of the Hello AI World tutorial, which covers inferencing and transfer learning on Jetson with **TensorRT** and **PyTorch**.
To recap, together we've covered:
* Using image recognition networks to classify images and video
* Coding your own inferencing programs in Python and C++
* Performing object detection to locate object coordinates
* Segmenting images and video with fully-convolutional networks
* Re-training models with PyTorch using transfer learning
* Collecting your own datasets and training your own models

Next we encourage you to experiment and apply what you've learned to other projects, perhaps taking advantage of Jetson's embedded form factor - for example an autonomous robot or intelligent camera-based system. Here are some example ideas that you could play around with:
* use GPIO to trigger external actuators or LEDs when an object is detected
* an autonomous robot that can find or follow an object
* a handheld battery-powered camera + Jetson + mini-display
* an interactive toy or treat dispenser for your pet
* a smart doorbell camera that greets your guests

For more examples to inspire your creativity, see the [Jetson Projects](https://developer.nvidia.com/embedded/community/jetson-projects) page. Good luck and have fun!

--- by Dusty-nv