# **2021/07/19&21**
[[14 Collecting your own Classification Datasets]](https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-collect.md)
###### tags: `藍柏婷`
###### tags: `2021/07/19`
### **==== Collecting your own Classification Datasets ====**
#### **== Creating the Label File ==**
1. Create a folder `<YOUR-MODEL>` under `jetson-inference/python/training/classification/data`.
>ex. `<pens>`
2. Under `<YOUR-MODEL>`, create a txt file that stores the classes, listed in alphabetical order.
>ex. `pens/labels.txt` classes: `black` `blue` `red`
3. When finished, you will have the following structure (a small helper sketch for creating this layout follows the example below).
>ex.
>```
>‣ train/
> • black/
> • blue/
> • red/
>‣ val/
> • black/
> • blue/
> • red/
>‣ test/
> • black/
> • blue/
> • red/
>```
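As promised above, here is a hedged sketch (a hypothetical helper, not part of jetson-inference) that creates this folder layout and the alphabetically sorted `labels.txt` in one go:
```python
# Hypothetical helper that builds the dataset skeleton described above;
# not part of jetson-inference.
import os

dataset_root = 'jetson-inference/python/training/classification/data/pens'
classes = sorted(['red', 'blue', 'black'])   # labels.txt lists classes alphabetically

os.makedirs(dataset_root, exist_ok=True)

# write labels.txt, one class per line
with open(os.path.join(dataset_root, 'labels.txt'), 'w') as f:
    f.write('\n'.join(classes) + '\n')

# create the train/val/test subfolders for every class
for split in ('train', 'val', 'test'):
    for c in classes:
        os.makedirs(os.path.join(dataset_root, split, c), exist_ok=True)
```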
#### **== Launching the Tool ==**
Go back to `jetson-inference`, enter `tools`, and run:
$ camera-capture /dev/video0
#### **== Collecting Data ==**
#### **== Training your Model ==**
Go to `jetson-inference/python/training/classification` and run:
$ python3 train.py --model-dir=models/<YOUR-MODEL> data/<YOUR-DATASET>
>ex. `python3 train.py --model-dir=models/pens --epochs=5 data/pens`
>>**Note: training reads the data under `data` and stores the trained results in `pens` under `models`.**
>`--epochs` changes the number of epochs to run (default: 35)
>`--arch` changes the network architecture to train (default: ResNet-18)
>`--batch` changes the batch size (default: 8)
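For context, here is a minimal hedged sketch of the model setup the log below implies (loading a pre-trained ResNet-18 and reshaping its fully-connected layer to the number of dataset classes); it is an illustration, not the actual train.py source:
```python
# Hedged sketch of the model setup implied by the training log below;
# not the actual jetson-inference train.py.
import torch.nn as nn
from torchvision import models

num_classes = 3  # black, blue, red

# "using pre-trained model 'resnet18'"
model = models.resnet18(pretrained=True)
# "reshaped ResNet fully-connected layer with: Linear(in_features=512, out_features=3, bias=True)"
model.fc = nn.Linear(model.fc.in_features, num_classes)
```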
```javascript
iamai2021@iamai2021:~/Desktop/scifair/image_classification_demo/jetson-inference/python/training/classification$ python3 train.py --model-dir=models/pens --epochs=5 data/pens
Use GPU: 0 for training
=> dataset classes: 3 ['black', 'blue', 'red']
=> using pre-trained model 'resnet18'
=> reshaped ResNet fully-connected layer with: Linear(in_features=512, out_features=3, bias=True)
Epoch: [0][0/5] Time 229.254 (229.254) Data 14.528 (14.528) Loss 1.3136e+00 (1.3136e+00) Acc@1 37.50 ( 37.50) Acc@5 100.00 (100.00)
Epoch: [0] completed, elapsed time 377.469 seconds
Test: [0/2] Time 50.308 (50.308) Loss 8.7126e+16 (8.7126e+16) Acc@1 0.00 ( 0.00) Acc@5 100.00 (100.00)
* Acc@1 33.333 Acc@5 100.000
saved best model to: models/pens/model_best.pth.tar
Epoch: [1][0/5] Time 94.127 (94.127) Data 53.783 (53.783) Loss 1.1646e+01 (1.1646e+01) Acc@1 37.50 ( 37.50) Acc@5 100.00 (100.00)
Epoch: [1] completed, elapsed time 135.365 seconds
Test: [0/2] Time 184.114 (184.114) Loss 3.0566e+08 (3.0566e+08) Acc@1 50.00 ( 50.00) Acc@5 100.00 (100.00)
* Acc@1 33.333 Acc@5 100.000
saved checkpoint to: models/pens/checkpoint.pth.tar
Epoch: [2][0/5] Time 216.540 (216.540) Data 160.007 (160.007) Loss 5.5376e-01 (5.5376e-01) Acc@1 87.50 ( 87.50) Acc@5 100.00 (100.00)
Epoch: [2] completed, elapsed time 311.848 seconds
Test: [0/2] Time 160.084 (160.084) Loss 1.1682e+06 (1.1682e+06) Acc@1 50.00 ( 50.00) Acc@5 100.00 (100.00)
* Acc@1 33.333 Acc@5 100.000
saved checkpoint to: models/pens/checkpoint.pth.tar
Epoch: [3][0/5] Time 379.789 (379.789) Data 169.704 (169.704) Loss 1.0244e+01 (1.0244e+01) Acc@1 50.00 ( 50.00) Acc@5 100.00 (100.00)
Epoch: [3] completed, elapsed time 413.319 seconds
Test: [0/2] Time 172.943 (172.943) Loss 1.6115e+05 (1.6115e+05) Acc@1 50.00 ( 50.00) Acc@5 100.00 (100.00)
* Acc@1 33.333 Acc@5 100.000
saved checkpoint to: models/pens/checkpoint.pth.tar
Epoch: [4][0/5] Time 279.749 (279.749) Data 215.816 (215.816) Loss 1.4259e+01 (1.4259e+01) Acc@1 12.50 ( 12.50) Acc@5 100.00 (100.00)
Epoch: [4] completed, elapsed time 366.124 seconds
Test: [0/2] Time 209.394 (209.394) Loss 4.7580e+03 (4.7580e+03) Acc@1 50.00 ( 50.00) Acc@5 100.00 (100.00)
* Acc@1 33.333 Acc@5 100.000
saved checkpoint to: models/pens/checkpoint.pth.tar
iamai2021@iamai2021:~/Desktop/scifair/image_classification_demo/jetson-inference/python/training/classification$
```
Training really takes forever... the run above took 55 minutes 47.89 seconds in total. So long...
#### **== Converting the Model to ONNX ==**
$ python3 onnx_export.py --model-dir=models/<YOUR-MODEL>
>ex. `$ python3 onnx_export.py --model-dir=models/pens`
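As a rough sketch of what this export step boils down to (assumptions: the checkpoint stores its weights under a `state_dict` key, and the blob names match the `input_0`/`output_0` bindings seen later in the TensorRT log; this is not the actual onnx_export.py):
```python
# Hedged sketch of the ONNX export step; not the actual onnx_export.py.
import torch
import torch.nn as nn
from torchvision import models

# rebuild the same architecture the training step used
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 3)   # 3 classes: black, blue, red

# assumption: the checkpoint stores its weights under a 'state_dict' key
checkpoint = torch.load('models/pens/model_best.pth.tar', map_location='cpu')
model.load_state_dict(checkpoint['state_dict'])
model.eval()

# 1x3x224x224 matches the input_0 binding shown in the TensorRT log later on
dummy = torch.zeros(1, 3, 224, 224)
torch.onnx.export(model, dummy, 'models/pens/resnet18.onnx',
                  input_names=['input_0'], output_names=['output_0'])
```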
#### **== Processing Images with TensorRT ==**
NET=models/<YOUR-MODEL>
DATASET=data/<YOUR-DATASET>
imagenet.py --model=$NET/resnet18.onnx --input_blob=input_0 --output_blob=output_0 --labels=$DATASET/labels.txt /dev/video0
>ex.
`NET=models/pens`
`DATASET=data/pens`
`imagenet.py --model=models/pens/resnet18.onnx --input_blob=input_0 --output_blob=output_0 --labels=data/pens/labels.txt /dev/video0`
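Under the hood, imagenet.py drives the camera and the network through the jetson.inference / jetson.utils Python bindings. A minimal sketch of that loop (hedged: based on the public jetson-inference examples, not copied from this exact script):
```python
# Hedged sketch of the imagenet.py camera loop, based on the public
# jetson-inference Python examples.
import sys
import jetson.inference
import jetson.utils

# the --model/--labels/--input_blob/--output_blob flags are picked up from argv
net = jetson.inference.imageNet('googlenet', sys.argv)
camera = jetson.utils.videoSource('/dev/video0')
display = jetson.utils.videoOutput('display://0')

while display.IsStreaming():
    img = camera.Capture()
    class_id, confidence = net.Classify(img)   # the same call that appears in the traceback below
    display.Render(img)
    display.SetStatus('{:s} ({:.2f}%)'.format(net.GetClassDesc(class_id),
                                              confidence * 100))
```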
---
### **==== Problems Encountered Along the Way ====**
:::warning
**Sad...**
Then I ran into a problem...
At the very end it printed:
Traceback (most recent call last):
File "/usr/local/bin/imagenet.py", line 68, in <module>
class_id, confidence = net.Classify(img)
Exception: jetson.inference -- imageNet.Classify() encountered an error classifying the image
and then...
[gstreamer] gstCamera -- stopping pipeline, transitioning to GST_STATE_NULL
[gstreamer] gstCamera -- pipeline stopped
And the camera stopped responding to me...
I saw someone online did this ->
>I was copying the resnet18.onnx file to another directory and giving this new path as input argument to the imagenet-console command.
>I tried with giving the path “/home/jetbot/jetson-inference/python/training/classification/cat_dog” and it worked. Feeling good it worked.
>--- from https://forums.developer.nvidia.com/t/jetson-inference-magenet-classify-encountered-an-error/110288
But I couldn't figure out what they meant...
I tried the method above and put the `resnet18.onnx` from `cat_dog` into
`Desktop/scifair/image_classification_demo/jetson-inference/build/aarch64/bin/networks` (where the networks are stored), but the result was the same.
:::
---
### **==== (Retry) Collecting your own Classification Datasets ====**
###### tags: `2021/07/21`
Create a folder `chicks` under `jetson-inference/python/training/classification/data`, then create a txt file `labels.txt` under `chicks` that stores the classes, listed in alphabetical order: `background` `bee` `france` `honey` `mushroom` `popo`.

When finished, you will have the following structure:
‣ train/
• background/
• bee/
• france/
• honey/
• mushroom/
• popo/
‣ val/
• background/
• bee/
• france/
• honey/
• mushroom/
• popo/
‣ test/
• background/
• bee/
• france/
• honey/
• mushroom/
• popo/
#### **== Launching the Tool ==**
Go back to `jetson-inference`, enter `tools`, and run:
$ camera-capture /dev/video0
#### **== Collecting Data ==**
**= Control Panel =**

>Dataset Path -> where the captured images are stored
>Class Labels -> where the class labels are read from
>Current Set -> which split the captured images go into: `train`, `val`, or `test` (a common ratio is 4:1:1; see the sketch after this list)
>Current Class -> the class currently being collected
>Capture -> press the Capture button to take a photo (or press the space bar)
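A quick illustration of how that 4:1:1 ratio divides a class's images (a hypothetical helper, not part of camera-capture):
```python
# Hypothetical helper illustrating the 4:1:1 train/val/test split;
# not part of camera-capture.
def split_counts(total, ratio=(4, 1, 1)):
    """Return how many images go into train/val/test for a given total."""
    s = sum(ratio)
    return [total * r // s for r in ratio]

print(split_counts(60))   # [40, 10, 10]
```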
#### **== Training your Model ==**
Go to `jetson-inference/python/training/classification` and run:
$ python3 train.py --model-dir=models/chicks_models data/chicks
>`--epochs` changes the number of epochs to run (default: 35)
>`--arch` changes the network architecture to train (default: ResNet-18)
>`--batch` changes the batch size (default: 8)
```javascript
iamai2021@iamai2021:~/Desktop/scifair/image_classification_demo/jetson-inference/python/training/classification$ python3 train.py --model-dir=models/chicks_models data/chicks
Use GPU: 0 for training
=> dataset classes: 6 ['background', 'bee', 'france', 'honey', 'mushroom', 'popo']
=> using pre-trained model 'resnet18'
=> reshaped ResNet fully-connected layer with: Linear(in_features=512, out_features=6, bias=True)
Epoch: [0][0/8] Time 297.588 (297.588) Data 18.377 (18.377) Loss 1.9560e+00 (1.9560e+00) Acc@1 25.00 ( 25.00) Acc@5 87.50 ( 87.50)
Epoch: [0] completed, elapsed time 577.589 seconds
Test: [0/3] Time 3.165 ( 3.165) Loss 2.7332e+11 (2.7332e+11) Acc@1 0.00 ( 0.00) Acc@5 50.00 ( 50.00)
* Acc@1 16.667 Acc@5 83.333
saved best model to: models/chicks_models/model_best.pth.tar
Epoch: [1][0/8] Time 13.456 (13.456) Data 2.750 ( 2.750) Loss 1.4724e+01 (1.4724e+01) Acc@1 25.00 ( 25.00) Acc@5 87.50 ( 87.50)
Epoch: [1] completed, elapsed time 29.270 seconds
Test: [0/3] Time 20.264 (20.264) Loss 4.7513e+06 (4.7513e+06) Acc@1 50.00 ( 50.00) Acc@5 100.00 (100.00)
* Acc@1 16.667 Acc@5 83.333
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [2][0/8] Time 23.868 (23.868) Data 20.209 (20.209) Loss 1.0573e+01 (1.0573e+01) Acc@1 25.00 ( 25.00) Acc@5 87.50 ( 87.50)
Epoch: [2] completed, elapsed time 30.615 seconds
Test: [0/3] Time 15.582 (15.582) Loss 1.8346e+05 (1.8346e+05) Acc@1 0.00 ( 0.00) Acc@5 100.00 (100.00)
* Acc@1 16.667 Acc@5 83.333
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [3][0/8] Time 29.524 (29.524) Data 26.344 (26.344) Loss 2.5671e+00 (2.5671e+00) Acc@1 37.50 ( 37.50) Acc@5 100.00 (100.00)
Epoch: [3] completed, elapsed time 41.233 seconds
Test: [0/3] Time 26.355 (26.355) Loss 4.1691e+04 (4.1691e+04) Acc@1 0.00 ( 0.00) Acc@5 50.00 ( 50.00)
* Acc@1 16.667 Acc@5 83.333
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [4][0/8] Time 73.691 (73.691) Data 48.400 (48.400) Loss 1.4946e+01 (1.4946e+01) Acc@1 12.50 ( 12.50) Acc@5 87.50 ( 87.50)
Epoch: [4] completed, elapsed time 100.660 seconds
Test: [0/3] Time 20.115 (20.115) Loss 1.8143e+02 (1.8143e+02) Acc@1 50.00 ( 50.00) Acc@5 50.00 ( 50.00)
* Acc@1 16.667 Acc@5 83.333
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [5][0/8] Time 48.491 (48.491) Data 28.685 (28.685) Loss 3.6483e+00 (3.6483e+00) Acc@1 25.00 ( 25.00) Acc@5 87.50 ( 87.50)
Epoch: [5] completed, elapsed time 54.773 seconds
Test: [0/3] Time 27.537 (27.537) Loss 8.0190e-01 (8.0190e-01) Acc@1 87.50 ( 87.50) Acc@5 100.00 (100.00)
* Acc@1 33.333 Acc@5 100.000
saved best model to: models/chicks_models/model_best.pth.tar
Epoch: [6][0/8] Time 21.118 (21.118) Data 16.219 (16.219) Loss 2.9841e+00 (2.9841e+00) Acc@1 37.50 ( 37.50) Acc@5 100.00 (100.00)
Epoch: [6] completed, elapsed time 70.954 seconds
Test: [0/3] Time 17.391 (17.391) Loss 1.9879e+00 (1.9879e+00) Acc@1 25.00 ( 25.00) Acc@5 100.00 (100.00)
* Acc@1 20.833 Acc@5 91.667
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [7][0/8] Time 25.570 (25.570) Data 10.877 (10.877) Loss 1.9348e+00 (1.9348e+00) Acc@1 12.50 ( 12.50) Acc@5 75.00 ( 75.00)
Epoch: [7] completed, elapsed time 74.578 seconds
Test: [0/3] Time 32.461 (32.461) Loss 1.3831e+00 (1.3831e+00) Acc@1 37.50 ( 37.50) Acc@5 100.00 (100.00)
* Acc@1 29.167 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [8][0/8] Time 32.023 (32.023) Data 24.985 (24.985) Loss 1.6381e+00 (1.6381e+00) Acc@1 37.50 ( 37.50) Acc@5 87.50 ( 87.50)
Epoch: [8] completed, elapsed time 59.550 seconds
Test: [0/3] Time 26.416 (26.416) Loss 2.4500e+00 (2.4500e+00) Acc@1 0.00 ( 0.00) Acc@5 100.00 (100.00)
* Acc@1 25.000 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [9][0/8] Time 28.386 (28.386) Data 21.375 (21.375) Loss 1.5477e+00 (1.5477e+00) Acc@1 37.50 ( 37.50) Acc@5 100.00 (100.00)
Epoch: [9] completed, elapsed time 83.660 seconds
Test: [0/3] Time 29.141 (29.141) Loss 2.1483e+00 (2.1483e+00) Acc@1 0.00 ( 0.00) Acc@5 50.00 ( 50.00)
* Acc@1 16.667 Acc@5 83.333
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [10][0/8] Time 34.229 (34.229) Data 24.690 (24.690) Loss 1.6080e+00 (1.6080e+00) Acc@1 25.00 ( 25.00) Acc@5 100.00 (100.00)
Epoch: [10] completed, elapsed time 136.341 seconds
Test: [0/3] Time 28.365 (28.365) Loss 1.7771e+00 (1.7771e+00) Acc@1 0.00 ( 0.00) Acc@5 100.00 (100.00)
* Acc@1 33.333 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [11][0/8] Time 22.561 (22.561) Data 16.899 (16.899) Loss 1.5739e+00 (1.5739e+00) Acc@1 25.00 ( 25.00) Acc@5 87.50 ( 87.50)
Epoch: [11] completed, elapsed time 71.627 seconds
Test: [0/3] Time 22.093 (22.093) Loss 1.2286e+00 (1.2286e+00) Acc@1 87.50 ( 87.50) Acc@5 100.00 (100.00)
* Acc@1 29.167 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [12][0/8] Time 30.734 (30.734) Data 25.245 (25.245) Loss 1.7706e+00 (1.7706e+00) Acc@1 25.00 ( 25.00) Acc@5 87.50 ( 87.50)
Epoch: [12] completed, elapsed time 65.634 seconds
Test: [0/3] Time 22.327 (22.327) Loss 1.2107e+00 (1.2107e+00) Acc@1 87.50 ( 87.50) Acc@5 100.00 (100.00)
* Acc@1 37.500 Acc@5 87.500
saved best model to: models/chicks_models/model_best.pth.tar
Epoch: [13][0/8] Time 24.264 (24.264) Data 12.941 (12.941) Loss 1.7256e+00 (1.7256e+00) Acc@1 12.50 ( 12.50) Acc@5 75.00 ( 75.00)
Epoch: [13] completed, elapsed time 65.162 seconds
Test: [0/3] Time 24.944 (24.944) Loss 1.3012e+00 (1.3012e+00) Acc@1 50.00 ( 50.00) Acc@5 100.00 (100.00)
* Acc@1 33.333 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [14][0/8] Time 32.577 (32.577) Data 26.177 (26.177) Loss 1.2794e+00 (1.2794e+00) Acc@1 62.50 ( 62.50) Acc@5 87.50 ( 87.50)
Epoch: [14] completed, elapsed time 113.573 seconds
Test: [0/3] Time 20.763 (20.763) Loss 9.8600e-01 (9.8600e-01) Acc@1 100.00 (100.00) Acc@5 100.00 (100.00)
* Acc@1 62.500 Acc@5 95.833
saved best model to: models/chicks_models/model_best.pth.tar
Epoch: [15][0/8] Time 36.928 (36.928) Data 31.132 (31.132) Loss 1.7873e+00 (1.7873e+00) Acc@1 50.00 ( 50.00) Acc@5 100.00 (100.00)
Epoch: [15] completed, elapsed time 72.949 seconds
Test: [0/3] Time 23.200 (23.200) Loss 1.4722e+00 (1.4722e+00) Acc@1 50.00 ( 50.00) Acc@5 100.00 (100.00)
* Acc@1 41.667 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [16][0/8] Time 41.264 (41.264) Data 28.100 (28.100) Loss 1.4712e+00 (1.4712e+00) Acc@1 50.00 ( 50.00) Acc@5 100.00 (100.00)
Epoch: [16] completed, elapsed time 77.358 seconds
Test: [0/3] Time 22.216 (22.216) Loss 7.9759e-01 (7.9759e-01) Acc@1 62.50 ( 62.50) Acc@5 100.00 (100.00)
* Acc@1 50.000 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [17][0/8] Time 32.203 (32.203) Data 26.703 (26.703) Loss 1.6550e+00 (1.6550e+00) Acc@1 37.50 ( 37.50) Acc@5 87.50 ( 87.50)
Epoch: [17] completed, elapsed time 87.872 seconds
Test: [0/3] Time 30.984 (30.984) Loss 8.0951e-01 (8.0951e-01) Acc@1 87.50 ( 87.50) Acc@5 100.00 (100.00)
* Acc@1 58.333 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [18][0/8] Time 22.677 (22.677) Data 10.992 (10.992) Loss 1.3273e+00 (1.3273e+00) Acc@1 37.50 ( 37.50) Acc@5 100.00 (100.00)
Epoch: [18] completed, elapsed time 66.715 seconds
Test: [0/3] Time 30.478 (30.478) Loss 9.4733e-01 (9.4733e-01) Acc@1 62.50 ( 62.50) Acc@5 100.00 (100.00)
* Acc@1 29.167 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [19][0/8] Time 22.717 (22.717) Data 17.692 (17.692) Loss 1.5426e+00 (1.5426e+00) Acc@1 37.50 ( 37.50) Acc@5 75.00 ( 75.00)
Epoch: [19] completed, elapsed time 78.844 seconds
Test: [0/3] Time 35.078 (35.078) Loss 1.8445e+00 (1.8445e+00) Acc@1 12.50 ( 12.50) Acc@5 100.00 (100.00)
* Acc@1 20.833 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [20][0/8] Time 33.586 (33.586) Data 24.707 (24.707) Loss 1.2833e+00 (1.2833e+00) Acc@1 75.00 ( 75.00) Acc@5 87.50 ( 87.50)
Epoch: [20] completed, elapsed time 58.929 seconds
Test: [0/3] Time 30.305 (30.305) Loss 1.2264e+00 (1.2264e+00) Acc@1 62.50 ( 62.50) Acc@5 100.00 (100.00)
* Acc@1 50.000 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [21][0/8] Time 51.012 (51.012) Data 31.295 (31.295) Loss 1.6103e+00 (1.6103e+00) Acc@1 50.00 ( 50.00) Acc@5 100.00 (100.00)
Epoch: [21] completed, elapsed time 87.265 seconds
Test: [0/3] Time 34.727 (34.727) Loss 5.8154e-01 (5.8154e-01) Acc@1 100.00 (100.00) Acc@5 100.00 (100.00)
* Acc@1 45.833 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [22][0/8] Time 40.430 (40.430) Data 22.777 (22.777) Loss 1.5116e+00 (1.5116e+00) Acc@1 25.00 ( 25.00) Acc@5 87.50 ( 87.50)
Epoch: [22] completed, elapsed time 142.521 seconds
Test: [0/3] Time 26.677 (26.677) Loss 6.2675e-01 (6.2675e-01) Acc@1 100.00 (100.00) Acc@5 100.00 (100.00)
* Acc@1 66.667 Acc@5 100.000
saved best model to: models/chicks_models/model_best.pth.tar
Epoch: [23][0/8] Time 19.548 (19.548) Data 16.449 (16.449) Loss 1.6688e+00 (1.6688e+00) Acc@1 12.50 ( 12.50) Acc@5 100.00 (100.00)
Epoch: [23] completed, elapsed time 25.878 seconds
Test: [0/3] Time 33.129 (33.129) Loss 1.0612e+00 (1.0612e+00) Acc@1 50.00 ( 50.00) Acc@5 100.00 (100.00)
* Acc@1 45.833 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [24][0/8] Time 41.201 (41.201) Data 30.550 (30.550) Loss 1.5521e+00 (1.5521e+00) Acc@1 25.00 ( 25.00) Acc@5 87.50 ( 87.50)
Epoch: [24] completed, elapsed time 81.202 seconds
Test: [0/3] Time 22.502 (22.502) Loss 7.6636e-01 (7.6636e-01) Acc@1 100.00 (100.00) Acc@5 100.00 (100.00)
* Acc@1 58.333 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [25][0/8] Time 53.155 (53.155) Data 45.844 (45.844) Loss 1.4920e+00 (1.4920e+00) Acc@1 50.00 ( 50.00) Acc@5 75.00 ( 75.00)
Epoch: [25] completed, elapsed time 129.185 seconds
Test: [0/3] Time 33.001 (33.001) Loss 9.2057e-01 (9.2057e-01) Acc@1 87.50 ( 87.50) Acc@5 100.00 (100.00)
* Acc@1 45.833 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [26][0/8] Time 64.542 (64.542) Data 42.343 (42.343) Loss 1.9873e+00 (1.9873e+00) Acc@1 12.50 ( 12.50) Acc@5 87.50 ( 87.50)
Epoch: [26] completed, elapsed time 141.561 seconds
Test: [0/3] Time 26.903 (26.903) Loss 1.0744e+00 (1.0744e+00) Acc@1 75.00 ( 75.00) Acc@5 100.00 (100.00)
* Acc@1 33.333 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [27][0/8] Time 34.075 (34.075) Data 18.700 (18.700) Loss 1.5506e+00 (1.5506e+00) Acc@1 37.50 ( 37.50) Acc@5 87.50 ( 87.50)
Epoch: [27] completed, elapsed time 82.603 seconds
Test: [0/3] Time 16.638 (16.638) Loss 1.0873e+00 (1.0873e+00) Acc@1 75.00 ( 75.00) Acc@5 100.00 (100.00)
* Acc@1 29.167 Acc@5 95.833
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [28][0/8] Time 29.859 (29.859) Data 20.722 (20.722) Loss 1.5450e+00 (1.5450e+00) Acc@1 25.00 ( 25.00) Acc@5 100.00 (100.00)
Epoch: [28] completed, elapsed time 143.255 seconds
Test: [0/3] Time 25.057 (25.057) Loss 1.2953e+00 (1.2953e+00) Acc@1 62.50 ( 62.50) Acc@5 100.00 (100.00)
* Acc@1 37.500 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [29][0/8] Time 31.301 (31.301) Data 26.652 (26.652) Loss 1.8887e+00 (1.8887e+00) Acc@1 12.50 ( 12.50) Acc@5 87.50 ( 87.50)
Epoch: [29] completed, elapsed time 71.089 seconds
Test: [0/3] Time 23.110 (23.110) Loss 1.3385e+00 (1.3385e+00) Acc@1 50.00 ( 50.00) Acc@5 100.00 (100.00)
* Acc@1 37.500 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [30][0/8] Time 34.214 (34.214) Data 30.676 (30.676) Loss 1.4610e+00 (1.4610e+00) Acc@1 37.50 ( 37.50) Acc@5 100.00 (100.00)
Epoch: [30] completed, elapsed time 65.960 seconds
Test: [0/3] Time 27.551 (27.551) Loss 1.3297e+00 (1.3297e+00) Acc@1 50.00 ( 50.00) Acc@5 100.00 (100.00)
* Acc@1 37.500 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [31][0/8] Time 34.040 (34.040) Data 25.807 (25.807) Loss 1.5721e+00 (1.5721e+00) Acc@1 50.00 ( 50.00) Acc@5 75.00 ( 75.00)
Epoch: [31] completed, elapsed time 145.821 seconds
Test: [0/3] Time 23.523 (23.523) Loss 1.3105e+00 (1.3105e+00) Acc@1 50.00 ( 50.00) Acc@5 100.00 (100.00)
* Acc@1 41.667 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [32][0/8] Time 29.081 (29.081) Data 19.534 (19.534) Loss 1.4904e+00 (1.4904e+00) Acc@1 50.00 ( 50.00) Acc@5 87.50 ( 87.50)
Epoch: [32] completed, elapsed time 78.243 seconds
Test: [0/3] Time 24.195 (24.195) Loss 1.3121e+00 (1.3121e+00) Acc@1 50.00 ( 50.00) Acc@5 100.00 (100.00)
* Acc@1 41.667 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [33][0/8] Time 61.434 (61.434) Data 43.382 (43.382) Loss 1.5462e+00 (1.5462e+00) Acc@1 37.50 ( 37.50) Acc@5 100.00 (100.00)
Epoch: [33] completed, elapsed time 204.705 seconds
Test: [0/3] Time 40.813 (40.813) Loss 1.2817e+00 (1.2817e+00) Acc@1 50.00 ( 50.00) Acc@5 100.00 (100.00)
* Acc@1 45.833 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
Epoch: [34][0/8] Time 34.673 (34.673) Data 28.046 (28.046) Loss 1.5131e+00 (1.5131e+00) Acc@1 50.00 ( 50.00) Acc@5 100.00 (100.00)
Epoch: [34] completed, elapsed time 89.279 seconds
Test: [0/3] Time 20.754 (20.754) Loss 1.2191e+00 (1.2191e+00) Acc@1 25.00 ( 25.00) Acc@5 100.00 (100.00)
* Acc@1 45.833 Acc@5 100.000
saved checkpoint to: models/chicks_models/checkpoint.pth.tar
iamai2021@iamai2021:~/Desktop/scifair/image_classification_demo/jetson-inference/python/training/classification$
```
>Total time: 1 hour 22 minutes 45 seconds
#### **== Converting the Model to ONNX ==**
$ python3 onnx_export.py --model-dir=models/chicks_models
>Total time: 7 minutes 32 seconds
>The folder structure at this point:
>```javascript
>|- jetson-inference\python\training\classification
> \data
> \chicks
> \test
> \background
> \...
> \train
> \background
> \...
> \val
> \background
> \...
> \labels.txt
> \models
> \chicks_models
> \checkpoint.pth.tar
> \model_best.pth.tar
> \resnet18.onnx
> \resnet18.onnx.1.1.7103.GPU.FP16.engine
> \onnx_export.py
> \train.py
>```
#### **== Processing Images with TensorRT ==**
$ imagenet.py --model=models/chicks_models/resnet18.onnx --input_blob=input_0 --output_blob=output_0 --labels=data/chicks/labels.txt /dev/video0
>And then it worked!!!!!
:::info
**Summary of what went wrong**: I changed two things between the two attempts.
One is the **number of classes** (before: 3; after: 6).
The other is the **number of epochs** (before: `--epochs=5`; after: `--epochs=35`, the default).
My best guess: **training for only 5 epochs may have left the model unable to tell the classes apart** (the first run's test loss stayed astronomically large, up to 8.7126e+16, which suggests that model never converged).
(Or maybe the trick from the forum, moving `resnet18.onnx` to `Desktop/scifair/image_classification_demo/jetson-inference/build/aarch64/bin/networks` (where the networks are stored), was what worked.)
:::
#### **== Text Output of the Run ==**
```javascript
iamai2021@iamai2021:~/Desktop/scifair/image_classification_demo/jetson-inference/python/training/classification$ imagenet.py --model=models/chicks_models/resnet18.onnx --input_blob=input_0 --output_blob=output_0 --labels=data/chicks/labels.txt /dev/video0
jetson.inference -- imageNet loading network using argv command line params
imageNet -- loading classification network model from:
-- prototxt (null)
-- model models/chicks_models/resnet18.onnx
-- class_labels data/chicks/labels.txt
-- input_blob 'input_0'
-- output_blob 'output_0'
-- batch_size 1
[TRT] TensorRT version 7.1.3
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] detected model format - ONNX (extension '.onnx')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file models/chicks_models/resnet18.onnx.1.1.7103.GPU.FP16.engine
[TRT] loading network plan from engine cache... models/chicks_models/resnet18.onnx.1.1.7103.GPU.FP16.engine
[TRT] device GPU, loaded models/chicks_models/resnet18.onnx
[TRT] Deserialize required 11418745 microseconds.
[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT] -- layers 29
[TRT] -- maxBatchSize 1
[TRT] -- workspace 0
[TRT] -- deviceMemory 29827072
[TRT] -- bindings 2
[TRT] binding 0
-- index 0
-- name 'input_0'
-- type FP32
-- in/out INPUT
-- # dims 4
-- dim #0 1 (SPATIAL)
-- dim #1 3 (SPATIAL)
-- dim #2 224 (SPATIAL)
-- dim #3 224 (SPATIAL)
[TRT] binding 1
-- index 1
-- name 'output_0'
-- type FP32
-- in/out OUTPUT
-- # dims 2
-- dim #0 1 (SPATIAL)
-- dim #1 6 (SPATIAL)
[TRT]
[TRT] binding to input 0 input_0 binding index: 0
[TRT] binding to input 0 input_0 dims (b=1 c=3 h=224 w=224) size=602112
[TRT] binding to output 0 output_0 binding index: 1
[TRT] binding to output 0 output_0 dims (b=1 c=6 h=1 w=1) size=24
[TRT]
[TRT] device GPU, models/chicks_models/resnet18.onnx initialized.
[TRT] imageNet -- loaded 6 class info entries
[TRT] imageNet -- models/chicks_models/resnet18.onnx initialized.
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video0
[gstreamer] gstCamera -- found v4l2 device: UVC Camera (046d:0825)
[gstreamer] v4l2-proplist, device.path=(string)/dev/video0, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"UVC\ Camera\ \(046d:0825\)", v4l2.device.bus_info=(string)usb-70090000.xusb-2, v4l2.device.version=(uint)264649, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found 38 caps for v4l2 device /dev/video0
[gstreamer] [0] video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)960, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 15/2, 5/1 };
[gstreamer] [1] video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 15/2, 5/1 };
[gstreamer] [2] video/x-raw, format=(string)YUY2, width=(int)1184, height=(int)656, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 10/1, 5/1 };
[gstreamer] [3] video/x-raw, format=(string)YUY2, width=(int)960, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 10/1, 5/1 };
[gstreamer] [4] video/x-raw, format=(string)YUY2, width=(int)1024, height=(int)576, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 10/1, 5/1 };
[gstreamer] [5] video/x-raw, format=(string)YUY2, width=(int)960, height=(int)544, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 15/1, 10/1, 5/1 };
[gstreamer] [6] video/x-raw, format=(string)YUY2, width=(int)800, height=(int)600, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [7] video/x-raw, format=(string)YUY2, width=(int)864, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [8] video/x-raw, format=(string)YUY2, width=(int)800, height=(int)448, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [9] video/x-raw, format=(string)YUY2, width=(int)752, height=(int)416, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [10] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [11] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)360, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [12] video/x-raw, format=(string)YUY2, width=(int)544, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [13] video/x-raw, format=(string)YUY2, width=(int)432, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [14] video/x-raw, format=(string)YUY2, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [15] video/x-raw, format=(string)YUY2, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [16] video/x-raw, format=(string)YUY2, width=(int)320, height=(int)176, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [17] video/x-raw, format=(string)YUY2, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [18] video/x-raw, format=(string)YUY2, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [19] image/jpeg, width=(int)1280, height=(int)960, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [20] image/jpeg, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [21] image/jpeg, width=(int)1184, height=(int)656, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [22] image/jpeg, width=(int)960, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [23] image/jpeg, width=(int)1024, height=(int)576, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [24] image/jpeg, width=(int)960, height=(int)544, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [25] image/jpeg, width=(int)800, height=(int)600, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [26] image/jpeg, width=(int)864, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [27] image/jpeg, width=(int)800, height=(int)448, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [28] image/jpeg, width=(int)752, height=(int)416, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [29] image/jpeg, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [30] image/jpeg, width=(int)640, height=(int)360, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [31] image/jpeg, width=(int)544, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [32] image/jpeg, width=(int)432, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [33] image/jpeg, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [34] image/jpeg, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [35] image/jpeg, width=(int)320, height=(int)176, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [36] image/jpeg, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [37] image/jpeg, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] gstCamera -- selected device profile: codec=mjpeg format=unknown width=1280 height=720
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video0 ! image/jpeg, width=(int)1280, height=(int)720 ! jpegdec ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully created device v4l2:///dev/video0
[video] created gstCamera from v4l2:///dev/video0
------------------------------------------------
gstCamera video options:
------------------------------------------------
-- URI: v4l2:///dev/video0
- protocol: v4l2
- location: /dev/video0
-- deviceType: v4l2
-- ioType: input
-- codec: mjpeg
-- width: 1280
-- height: 720
-- frameRate: 30.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
-- rtspLatency 2000
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution: 1920x1080
[OpenGL] glDisplay -- X window resolution: 1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[video] created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
-- URI: display://0
- protocol: display
- location: 0
-- deviceType: display
-- ioType: output
-- codec: raw
-- width: 1920
-- height: 1080
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
-- rtspLatency 2000
------------------------------------------------
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> jpegdec0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> jpegdec0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> jpegdec0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstCamera -- map buffer size was less than max size (1382400 vs 1382407)
[gstreamer] gstCamera recieve caps: video/x-raw, format=(string)I420, width=(int)1280, height=(int)720, interlace-mode=(string)progressive, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2, colorimetry=(string)1:4:0:0, framerate=(fraction)30/1
[gstreamer] gstCamera -- recieved first frame, codec=mjpeg format=i420 width=1280 height=720 size=1382407
RingBuffer -- allocated 4 buffers (1382407 bytes each, 5529628 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
RingBuffer -- allocated 4 buffers (2764800 bytes each, 11059200 bytes total)
class 0000 - 0.315584 (background)
class 0001 - 0.032628 (bee)
class 0002 - 0.259910 (france)
class 0003 - 0.113698 (honey)
class 0004 - 0.178024 (mushroom)
class 0005 - 0.100155 (popo)
[OpenGL] glDisplay -- set the window size to 1280x720
[OpenGL] creating 1280x720 texture (GL_RGB8 format, 2764800 bytes)
[cuda] registered openGL texture for interop access (1280x720, GL_RGB8, 2764800 bytes)
[TRT] ------------------------------------------------
[TRT] Timing Report models/chicks_models/resnet18.onnx
[TRT] ------------------------------------------------
[TRT] Pre-Process CPU 0.13709ms CUDA 0.72516ms
[TRT] Network CPU 644.57990ms CUDA 643.86420ms
[TRT] Post-Process CPU 0.21199ms CUDA 0.34604ms
[TRT] Total CPU 644.92896ms CUDA 644.93542ms
[TRT] ------------------------------------------------
[TRT] note -- when processing a single image, run 'sudo jetson_clocks' before to disable DVFS for more accurate profiling/timing measurements
class 0000 - 0.334205 (background)
class 0001 - 0.026810 (bee)
class 0002 - 0.262249 (france)
class 0003 - 0.111887 (honey)
class 0004 - 0.174356 (mushroom)
class 0005 - 0.090494 (popo)
```
>class 0000 - 0.334205 (background) -> **WIN**
>class 0001 - 0.026810 (bee)
>class 0002 - 0.262249 (france)
>class 0003 - 0.111887 (honey)
>class 0004 - 0.174356 (mushroom)
>class 0005 - 0.090494 (popo)
>Looking at the numbers above, the predicted class is simply the one with the **highest confidence score** among all classes.
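That rule is just an argmax over the scores; a tiny sketch using the numbers above:
```python
# The prediction rule implied above: pick the class with the highest score.
scores = {'background': 0.334205, 'bee': 0.026810, 'france': 0.262249,
          'honey': 0.111887, 'mushroom': 0.174356, 'popo': 0.090494}
best = max(scores, key=scores.get)
print(best, scores[best])   # background 0.334205
```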
#### **== Image Output of the Run ==**
>**= Successful classification (it correctly recognized popo) =**

**= Failed classification (it got this one wrong; it is actually a bee) =**

Because I only have train (10 images), val (4 images), and test (4 images), the dataset is small, which keeps all the confidence scores fairly low.