---
# System prepended metadata

title: '**2021/07/22&23**'
tags: [藍柏婷, 2021/07/22, 2021/07/23]

---

# **2021/07/22&23**
[[15 Re-training SSD-Mobilenet]](https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-ssd.md)
###### tags: `藍柏婷`
###### tags: `2021/07/22`

### **==== Re-training SSD-Mobilenet ====**
>*Next, we'll train our own SSD-Mobilenet object detection model using PyTorch and the Open Images dataset. SSD-Mobilenet is a popular network architecture for realtime object detection on mobile and embedded devices that combines the SSD-300 Single-Shot MultiBox Detector with a Mobilenet backbone.*
>*--- by dusty-nv on jetson-inference*
![](https://i.imgur.com/VrR0UvL.png)


#### **== Setup ==**
:::info
**First, make sure your Jetson has JetPack 4.4 (or newer) installed, along with PyTorch for Python 3.6**
:::
:::info
**We are not using the docker container**
:::
    # you only need to run these if you aren't using the container
    $ cd jetson-inference/python/training/detection/ssd
    $ wget https://nvidia.box.com/shared/static/djf5w54rjvpqocsiztzaandq1m3avr7c.pth -O models/mobilenet-v1-ssd-mp-0_675.pth
    $ pip3 install -v -r requirements.txt


#### **== Problems Encountered ==**
:::warning
```
$ cd jetson-inference/python/training/detection/ssd
$ wget https://nvidia.box.com/shared/static/djf5w54rjvpqocsiztzaandq1m3avr7c.pth -O models/mobilenet-v1-ssd-mp-0_675.pth
```
>-> These commands ran fine
```
$ pip3 install -v -r requirements.txt
```
>-> At first it reported that `requirements.txt` could not be found
>
>-> I copied `requirements.txt` into `jetson-inference/python/training/detection/ssd`
>
>-> It then failed with <font color="red">`Command “python setup.py egg_info” failed with error code 1 in /tmp/pip-build-Fv1UbD/torch/`</font>
>>
>>*I actually managed to resolve this issue myself, I traced the problem to a glitch with the Jetpack image after experiencing various other issues. I reformatted the SD card, flashed the Jetpack image and then rebuilt the jetson-inference project again and all the issues were resolved.*
>>--- from https://forums.developer.nvidia.com/t/issues-installing-pytorch-on-jetson-nano-running-jetpack-4-2/119972
:::

:::info
Because there were no files in `jetson-inference/python/training/detection/ssd`, go to https://github.com/dusty-nv/pytorch-ssd (the `jetson-inference/python/training/detection/ssd` link under `Setup` on the jetson-inference `Re-training SSD-Mobilenet` page), click `Code`, then `Download ZIP`. Once the ZIP has downloaded, right-click it, choose `Open With Archive Manager`, then `Extract`, select `jetson-inference/python/training/detection/ssd`, and extract all the files into `ssd`.
:::


#### **== Re-setup ==**

    $ cd jetson-inference/python/training/detection/ssd
    $ wget https://nvidia.box.com/shared/static/djf5w54rjvpqocsiztzaandq1m3avr7c.pth -O models/mobilenet-v1-ssd-mp-0_675.pth
    $ pip3 install -v -r requirements.txt
>```
>$ cd jetson-inference/python/training/detection/ssd
>```
>Change into `jetson-inference/python/training/detection/ssd`.
>
>```
>$ mkdir models
>```
>Create a `models` folder inside `ssd`.
>
>```
>$ wget https://nvidia.box.com/shared/static/djf5w54rjvpqocsiztzaandq1m3avr7c.pth -O models/mobilenet-v1-ssd-mp-0_675.pth
>```
>Download `mobilenet-v1-ssd-mp-0_675.pth` from https://nvidia.box.com/shared/static/djf5w54rjvpqocsiztzaandq1m3avr7c.pth and save it into the `models` folder.
>
>```
>$ pip3 install -v -r requirements.txt
>```
>Note: `$ pip3 install -v -r requirements.txt` takes 20-odd minutes to run, so be patient!

---

#### **== Downloading the Data ==**

    $ python3 open_images_downloader.py --class-names "Apple,Orange,Banana,Strawberry,Grape,Pear,Pineapple,Watermelon" --data=data/fruit
>Note: `$ python3 open_images_downloader.py ...` takes a very long time to run, so be patient! (Also note there must be no spaces inside the quoted class list.)


#### **== Problems Encountered ==**
:::warning
Unfortunately, I ran into another problem!
```
iamai2021@iamai2021:~/Desktop/scifair/image_classification_demo/jetson-inference/python/training/detection/ssd$ python3 open_images_downloader.py --class-names "Apple,Orange,Banana,Strawberry,Grape,Pear,Pineapple,Watermelon" --data=data/fruit
2021-07-22 20:02:48 - Requested 8 classes, found 8 classes
2021-07-22 20:02:48 - Read annotation file data/fruit/train-annotations-bbox.csv
2021-07-22 20:15:52 - Available train images:  5145
2021-07-22 20:15:52 - Available train boxes:   23539

Traceback (most recent call last):
  File "open_images_downloader.py", line 138, in <module>
    os.makedirs(image_dir, exist_ok=True)
  File "/usr/lib/python3.6/os.py", line 220, in makedirs
    mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: 'data/fruit/validation'
```
In short, I did not have write permission on `data`.
:::
:::info
(I checked the contents of `data` and found that its owner is "root", which means I actually did use the `docker container` at the beginning. So from now on, whenever I need to use `ssd/data`, I have to enter the `docker container`.)
:::
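The `PermissionError` above can be diagnosed before re-running the downloader by checking who owns the target directory and whether the current user can write to it. A minimal sketch (the helper name `writable_report` is my own, and a throwaway temporary directory stands in for `ssd/data` here):

```python
import os
import stat
import tempfile

def writable_report(path: str) -> dict:
    """Return owner uid, permission bits, and whether the current user can write."""
    st = os.stat(path)
    return {
        "owner_uid": st.st_uid,             # 0 means the directory is owned by root
        "mode": stat.filemode(st.st_mode),  # e.g. 'drwxr-xr-x'
        "writable": os.access(path, os.W_OK),
    }

if __name__ == "__main__":
    # A temporary directory stands in for ssd/data.
    with tempfile.TemporaryDirectory() as d:
        print(writable_report(d))
```

If `owner_uid` is 0 and `writable` is False, the directory was created by root (e.g. from inside the container), which matches the situation described above.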


#### **== Entering the docker container ==**

###### tags: `2021/07/23`

    $ docker/run.sh
After entering the password, a `#` prompt means you are inside the `docker container`.
```
iamai2021@iamai2021:~/Desktop/scifair/image_classification_demo/jetson-inference$ docker/run.sh
reading L4T version from /etc/nv_tegra_release
L4T BSP Version:  L4T R32.5.1
[sudo] password for iamai2021: 
size of data/networks:  696577370 bytes
CONTAINER:     dustynv/jetson-inference:r32.5.0
DATA_VOLUME:   --volume /home/iamai2021/Desktop/scifair/image_classification_demo/jetson-inference/data:/jetson-inference/data --volume /home/iamai2021/Desktop/scifair/image_classification_demo/jetson-inference/python/training/classification/data:/jetson-inference/python/training/classification/data --volume /home/iamai2021/Desktop/scifair/image_classification_demo/jetson-inference/python/training/classification/models:/jetson-inference/python/training/classification/models --volume /home/iamai2021/Desktop/scifair/image_classification_demo/jetson-inference/python/training/detection/ssd/data:/jetson-inference/python/training/detection/ssd/data --volume /home/iamai2021/Desktop/scifair/image_classification_demo/jetson-inference/python/training/detection/ssd/models:/jetson-inference/python/training/detection/ssd/models
USER_VOLUME:   
USER_COMMAND:  
V4L2_DEVICES:    --device /dev/video0 
localuser:root being added to access control list
```


#### **== Downloading the Data Again ==**
    # python3 open_images_downloader.py --class-names "Apple,Orange,Banana,Strawberry,Grape,Pear,Pineapple,Watermelon" --data=data/fruit
>Note: `# python3 open_images_downloader.py ...` takes a very long time to run, so be patient!
>>Note: once the download finishes, you can simply close the shell and open a new one.

>**`--data=<PATH>`: change the download directory**
>(default: jetson-inference/python/training/detection/ssd/data)
>**`--stats-only`: show the number of images per class**
>
>ex.
>```
>...
>2020-07-09 16:18:06 - Total available images: 6360
>2020-07-09 16:18:06 - Total available boxes:  27188
>
>-------------------------------------
> 'train' set statistics
>-------------------------------------
>  Image count:  5145
>  Bounding box count:  23539
>  Bounding box distribution:
>    Strawberry:  7553/23539 = 0.32
>    Orange:  6186/23539 = 0.26
>    Apple:  3622/23539 = 0.15
>    Grape:  2560/23539 = 0.11
>    Banana:  1574/23539 = 0.07
>    Pear:  757/23539 = 0.03
>    Watermelon:  753/23539 = 0.03
>    Pineapple:  534/23539 = 0.02
>
>...
>
>-------------------------------------
> Overall statistics
>-------------------------------------
>  Image count:  6360
>  Bounding box count:  27188
>```
>**`--max-images=<number>`: limit the total dataset to the specified number of images, while keeping each class's share of the data roughly the same as in the original. If one class has more images than another, that ratio is roughly preserved.**
>ex.
>```
>$ python3 open_images_downloader.py --max-images=2500 --class-names "Apple,Orange,Banana,Strawberry,Grape,Pear,Pineapple,Watermelon" --data=data/fruit
>```
>**`--max-annotations-per-class=<number>`: cap each class at the specified number of annotations; if a class has fewer available than that, all of its data is used --- very useful when the original data is unbalanced across classes.**
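To make the proportional behaviour of `--max-images` concrete, here is a small illustration of the idea in Python. This is my own sketch of proportional subsampling, not the downloader's actual code; the function name `limit_images` and the toy label list are invented for the example:

```python
import math
import random
from collections import Counter

def limit_images(labels, max_images, seed=0):
    """Pick at most ~max_images items while roughly preserving each
    class's share of the data (one label per image, for simplicity)."""
    counts = Counter(labels)
    total = len(labels)
    if total <= max_images:
        return list(range(total))
    rng = random.Random(seed)
    keep = []
    for cls in counts:
        idx = [i for i, lab in enumerate(labels) if lab == cls]
        # Each class keeps the same fraction of its images (at least one).
        quota = max(1, math.floor(len(idx) * max_images / total))
        keep.extend(rng.sample(idx, quota))
    return sorted(keep)

if __name__ == "__main__":
    data = ["Strawberry"] * 60 + ["Orange"] * 30 + ["Pear"] * 10
    kept = limit_images(data, 50)
    # Halving the dataset halves each class: 30 / 15 / 5.
    print(Counter(data[i] for i in kept))
```

The real downloader works on bounding-box annotations rather than a flat label list, but the proportionality principle is the same.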

Resulting output ->
```
root@iamai2021:/jetson-inference# cd python/training/detection/ssd
root@iamai2021:/jetson-inference/python/training/detection/ssd# python3 open_images_downloader.py --max-images=1000 --class-names "Apple,Orange,Banana,Strawberry,Grape,Pear,Pineapple,Watermelon" --data=data/fruit
2021-07-23 07:10:42 - Requested 8 classes, found 8 classes
2021-07-23 07:10:42 - Read annotation file data/fruit/train-annotations-bbox.csv
2021-07-23 07:23:27 - Available train images:  5145
2021-07-23 07:23:27 - Available train boxes:   23539

2021-07-23 07:23:27 - Download https://storage.googleapis.com/openimages/2018_04/validation/validation-annotations-bbox.csv.
2021-07-23 07:23:31 - Read annotation file data/fruit/validation-annotations-bbox.csv
2021-07-23 07:23:33 - Available validation images:  285
2021-07-23 07:23:33 - Available validation boxes:   825

2021-07-23 07:23:33 - Download https://storage.googleapis.com/openimages/2018_04/test/test-annotations-bbox.csv.
2021-07-23 07:23:38 - Read annotation file data/fruit/test-annotations-bbox.csv
2021-07-23 07:23:41 - Available test images:  930
2021-07-23 07:23:41 - Available test boxes:   2824

2021-07-23 07:23:41 - Total available images: 6360
2021-07-23 07:23:41 - Total available boxes:  27188

2021-07-23 07:23:41 - Limiting train dataset to:  808 images (3763 boxes)
2021-07-23 07:23:41 - Limiting validation dataset to:  44 images (163 boxes)
2021-07-23 07:23:41 - Limiting test dataset to:  146 images (340 boxes)

-------------------------------------
 'train' set statistics
-------------------------------------
  Image count:  808
  Bounding box count:  3763
  Bounding box distribution: 
    Strawberry:  1338/3763 = 0.36
    Orange:  987/3763 = 0.26
    Apple:  510/3763 = 0.14
    Grape:  427/3763 = 0.11
    Banana:  248/3763 = 0.07
    Watermelon:  111/3763 = 0.03
    Pear:  75/3763 = 0.02
    Pineapple:  67/3763 = 0.02
 

-------------------------------------
 'validation' set statistics
-------------------------------------
  Image count:  44
  Bounding box count:  163
  Bounding box distribution: 
    Strawberry:  42/163 = 0.26
    Grape:  36/163 = 0.22
    Orange:  33/163 = 0.20
    Apple:  32/163 = 0.20
    Watermelon:  7/163 = 0.04
    Banana:  6/163 = 0.04
    Pineapple:  4/163 = 0.02
    Pear:  3/163 = 0.02
 

-------------------------------------
 'test' set statistics
-------------------------------------
  Image count:  146
  Bounding box count:  340
  Bounding box distribution: 
    Orange:  103/340 = 0.30
    Grape:  92/340 = 0.27
    Strawberry:  56/340 = 0.16
    Apple:  26/340 = 0.08
    Watermelon:  25/340 = 0.07
    Banana:  17/340 = 0.05
    Pineapple:  17/340 = 0.05
    Pear:  4/340 = 0.01
 

-------------------------------------
 Overall statistics
-------------------------------------
  Image count:  998
  Bounding box count:  4266

2021-07-23 07:23:42 - Saving 'train' data to data/fruit/sub-train-annotations-bbox.csv.
2021-07-23 07:23:42 - Saving 'validation' data to data/fruit/sub-validation-annotations-bbox.csv.
2021-07-23 07:23:42 - Saving 'test' data to data/fruit/sub-test-annotations-bbox.csv.
2021-07-23 07:23:42 - Starting to download 998 images.
2021-07-23 07:23:56 - Downloaded 100 images.
2021-07-23 07:24:07 - Downloaded 200 images.
2021-07-23 07:24:21 - Downloaded 300 images.
2021-07-23 07:24:39 - Downloaded 400 images.
2021-07-23 07:24:54 - Downloaded 500 images.
2021-07-23 07:25:07 - Downloaded 600 images.
2021-07-23 07:25:22 - Downloaded 700 images.
2021-07-23 07:25:37 - Downloaded 800 images.
2021-07-23 07:25:50 - Downloaded 900 images.
2021-07-23 07:26:22 - Task Done.
root@iamai2021:/jetson-inference/python/training/detection/ssd#
```

---

#### **== Training the SSD-Mobilenet Model ==**

    $ python3 train_ssd.py --data=data/fruit --model-dir=models/fruit --batch-size=4 --epochs=5
    
| Argument       | Default  | Description             |
| --------       | -------- | --------                |
| `--data`       | data/    | location of the dataset |
| `--model-dir`  | models/  | directory where trained model checkpoints are written |
| `--resume`     | None     | path to an existing checkpoint to resume training from |
| `--batch-size` | 4        | try increasing it depending on available memory |
| `--epochs`     | 30       | up to 100 is desirable, but this increases training time |
| `--workers`    | 2        | number of data loader threads (0 = disable multithreading) |

:::info
Note: if you run out of memory or the process gets "killed" during training, try `Mounting SWAP` and `Disabling the Desktop GUI` (https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-transfer-learning.md#mounting-swap).
To save memory, you can also reduce `--batch-size` (default 4) and `--workers` (default 2).
:::
:::danger
Whatever you do, do not use `Disabling the Desktop GUI`!
:::

>**Training Performance**
>Below is SSD-Mobilenet training performance, to help estimate the time required for training:
>|            | Images/sec |Time per epoch*|
>| -------    | --------   | -------       |
>| Nano       | 4.77       | 17 min 55 sec |
>
>Note: fruits dataset (5145 training images, batch size=4)
>Note: 18 min x 5 epochs = 90 min (not a useful reference for me; my computer at home is painfully slow)
>Note: the site provides a 100-epoch download package (save it into `/ssd/models/fruit`): https://nvidia.box.com/shared/static/gq0zlf0g2r258g3ldabl9o7vch18cxmi.gz
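The epoch-time figure above follows directly from throughput: each training image is processed once per epoch, so time per epoch is roughly the image count divided by images/sec. A quick back-of-the-envelope check (my own helper, using the Nano figures from the table):

```python
def epoch_seconds(num_images: int, images_per_sec: float) -> float:
    """Seconds per epoch: every training image is processed once per epoch."""
    return num_images / images_per_sec

if __name__ == "__main__":
    secs = epoch_seconds(5145, 4.77)      # 5145 training images at 4.77 images/sec
    m, s = divmod(round(secs), 60)
    print(f"~{m} min {s} sec per epoch")  # close to the 17 min 55 sec in the table
    print(f"~{secs * 5 / 60:.0f} min for 5 epochs")
```

This matches the "18 min x 5 epochs = 90 min" estimate noted above.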

Resulting output ->
```
iamai2021@iamai2021:~/Desktop/scifair/image_classification_demo/jetson-inference/python/training/detection/ssd$ python3 train_ssd.py --data=data/fruit --model-dir=models/fruit --batch-size=4 --epochs=5
2021-07-23 17:51:18 - Using CUDA...
2021-07-23 17:51:18 - Namespace(balance_data=False, base_net=None, base_net_lr=0.001, batch_size=4, checkpoint_folder='models/fruit', dataset_type='open_images', datasets=['data/fruit'], debug_steps=10, extra_layers_lr=None, freeze_base_net=False, freeze_net=False, gamma=0.1, lr=0.01, mb2_width_mult=1.0, milestones='80,100', momentum=0.9, net='mb1-ssd', num_epochs=5, num_workers=2, pretrained_ssd='models/mobilenet-v1-ssd-mp-0_675.pth', resume=None, scheduler='cosine', t_max=100, use_cuda=True, validation_epochs=1, weight_decay=0.0005)
2021-07-23 17:51:18 - Prepare training datasets.
2021-07-23 17:51:18 - loading annotations from: data/fruit/sub-train-annotations-bbox.csv
2021-07-23 17:51:18 - annotations loaded from:  data/fruit/sub-train-annotations-bbox.csv
num images:  808
2021-07-23 17:51:21 - Dataset Summary:Number of Images: 808
Minimum Number of Images for a Class: -1
Label Distribution:
	Apple: 510
	Banana: 248
	Grape: 427
	Orange: 987
	Pear: 75
	Pineapple: 67
	Strawberry: 1338
	Watermelon: 111
2021-07-23 17:51:21 - Stored labels into file models/fruit/labels.txt.
2021-07-23 17:51:21 - Train dataset size: 808
2021-07-23 17:51:21 - Prepare Validation datasets.
2021-07-23 17:51:21 - loading annotations from: data/fruit/sub-test-annotations-bbox.csv
2021-07-23 17:51:21 - annotations loaded from:  data/fruit/sub-test-annotations-bbox.csv
num images:  146
2021-07-23 17:51:21 - Dataset Summary:Number of Images: 146
Minimum Number of Images for a Class: -1
Label Distribution:
	Apple: 26
	Banana: 17
	Grape: 92
	Orange: 103
	Pear: 4
	Pineapple: 17
	Strawberry: 56
	Watermelon: 25
2021-07-23 17:51:21 - Validation dataset size: 146
2021-07-23 17:51:21 - Build network.
2021-07-23 17:51:21 - Init from pretrained ssd models/mobilenet-v1-ssd-mp-0_675.pth
2021-07-23 17:51:22 - Took 0.51 seconds to load the model.
2021-07-23 17:52:42 - Learning rate: 0.01, Base net learning rate: 0.001, Extra Layers learning rate: 0.01.
2021-07-23 17:52:42 - Uses CosineAnnealingLR scheduler.
2021-07-23 17:52:42 - Start training from epoch 0.
/home/iamai2021/.local/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:123: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step().  Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
/home/iamai2021/.local/lib/python3.6/site-packages/torch/nn/_reduction.py:44: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
  warnings.warn(warning.format(ret))
2021-07-23 18:07:52 - Epoch: 0, Step: 10/202, Avg Loss: 13.9509, Avg Regression Loss 4.1929, Avg Classification Loss: 9.7580
2021-07-23 18:29:33 - Epoch: 0, Step: 20/202, Avg Loss: 9.2671, Avg Regression Loss 3.7125, Avg Classification Loss: 5.5546
2021-07-23 19:02:31 - Epoch: 0, Step: 30/202, Avg Loss: 8.6253, Avg Regression Loss 3.2659, Avg Classification Loss: 5.3594
```

---

#### **== Converting the Model to ONNX ==**

    $ python3 onnx_export.py --model-dir=models/fruit
>This saves a model named `ssd-mobilenet.onnx` under `jetson-inference/python/training/detection/ssd/models/fruit/`

---

#### **== Processing Images with TensorRT ==**
:::info
Run these commands inside `jetson-inference/python/training/detection/ssd/`
:::
    $ IMAGES=/home/iamai2021/Desktop/scifair/image_classification_demo/jetson-inference/data/images
    $ detectnet --model=models/fruit/ssd-mobilenet.onnx --labels=models/fruit/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
            "$IMAGES/fruit_<number>.jpg" $IMAGES/test/fruit_<number>.jpg

>```
>$ IMAGES=<path-to-your-jetson-inference>/data/images   # substitute your jetson-inference path here
>```
>As the screenshot below shows, `jetson-inference` is located at `/home/iamai2021/Desktop/scifair/image_classification_demo/jetson-inference`
>
>![](https://i.imgur.com/gvFwMtO.png)
>```
>$ detectnet --model=models/fruit/ssd-mobilenet.onnx --labels=models/fruit/labels.txt \
>          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
>            "$IMAGES/fruit_<number>.jpg" $IMAGES/test/fruit_<number>.jpg
>```
>Replace `<number>` with a digit: 0, 1, 2, 3, and so on

Resulting output ->
```
iamai2021@iamai2021:~/Desktop/scifair/image_classification_demo/jetson-inference/python/training/detection/ssd$ detectnet --model=models/fruit/ssd-mobilenet.onnx --labels=models/fruit/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
            "$IMAGES/fruit_1.jpg" $IMAGES/test/fruit_1.jpg

[video]  created imageLoader from file:///home/iamai2021/Desktop/scifair/image_classification_demo/jetson-inference/data/images/fruit_1.jpg
------------------------------------------------
imageLoader video options:
------------------------------------------------
  -- URI: file:///home/iamai2021/Desktop/scifair/image_classification_demo/jetson-inference/data/images/fruit_1.jpg
     - protocol:  file
     - location:  /home/iamai2021/Desktop/scifair/image_classification_demo/jetson-inference/data/images/fruit_1.jpg
     - extension: jpg
  -- deviceType: file
  -- ioType:     input
  -- codec:      unknown
  -- width:      0
  -- height:     0
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- rtspLatency 2000
------------------------------------------------
[video]  created imageWriter from file:///home/iamai2021/Desktop/scifair/image_classification_demo/jetson-inference/data/images/test/fruit_1.jpg
------------------------------------------------
imageWriter video options:
------------------------------------------------
  -- URI: file:///home/iamai2021/Desktop/scifair/image_classification_demo/jetson-inference/data/images/test/fruit_1.jpg
     - protocol:  file
     - location:  /home/iamai2021/Desktop/scifair/image_classification_demo/jetson-inference/data/images/test/fruit_1.jpg
     - extension: jpg
  -- deviceType: file
  -- ioType:     output
  -- codec:      unknown
  -- width:      0
  -- height:     0
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- rtspLatency 2000
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution:  1920x1080
[OpenGL] glDisplay -- X window resolution:    1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[video]  created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
  -- URI: display://0
     - protocol:  display
     - location:  0
  -- deviceType: display
  -- ioType:     output
  -- codec:      raw
  -- width:      1920
  -- height:     1080
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- rtspLatency 2000
------------------------------------------------

detectNet -- loading detection network model from:
          -- prototxt     NULL
          -- model        models/fruit/ssd-mobilenet.onnx
          -- input_blob   'input_0'
          -- output_cvg   'scores'
          -- output_bbox  'boxes'
          -- mean_pixel   0.000000
          -- mean_binary  NULL
          -- class_labels models/fruit/labels.txt
          -- threshold    0.500000
          -- batch_size   1

[TRT]    TensorRT version 7.1.3
[TRT]    loading NVIDIA plugins...
[TRT]    Registered plugin creator - ::GridAnchor_TRT version 1
[TRT]    Registered plugin creator - ::NMS_TRT version 1
[TRT]    Registered plugin creator - ::Reorg_TRT version 1
[TRT]    Registered plugin creator - ::Region_TRT version 1
[TRT]    Registered plugin creator - ::Clip_TRT version 1
[TRT]    Registered plugin creator - ::LReLU_TRT version 1
[TRT]    Registered plugin creator - ::PriorBox_TRT version 1
[TRT]    Registered plugin creator - ::Normalize_TRT version 1
[TRT]    Registered plugin creator - ::RPROI_TRT version 1
[TRT]    Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT]    Could not register plugin creator -  ::FlattenConcat_TRT version 1
[TRT]    Registered plugin creator - ::CropAndResize version 1
[TRT]    Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT]    Registered plugin creator - ::Proposal version 1
[TRT]    Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT]    Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT]    Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT]    Registered plugin creator - ::Split version 1
[TRT]    Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT]    Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT]    detected model format - ONNX  (extension '.onnx')
[TRT]    desired precision specified for GPU: FASTEST
[TRT]    requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]    native precisions detected for GPU:  FP32, FP16
[TRT]    selecting fastest native precision for GPU:  FP16
[TRT]    attempting to open engine cache file models/fruit/ssd-mobilenet.onnx.1.1.7103.GPU.FP16.engine
[TRT]    loading network plan from engine cache... models/fruit/ssd-mobilenet.onnx.1.1.7103.GPU.FP16.engine
[TRT]    device GPU, loaded models/fruit/ssd-mobilenet.onnx
[TRT]    Deserialize required 16159728 microseconds.
[TRT]    
[TRT]    CUDA engine context initialized on device GPU:
[TRT]       -- layers       107
[TRT]       -- maxBatchSize 1
[TRT]       -- workspace    0
[TRT]       -- deviceMemory 23420416
[TRT]       -- bindings     3
[TRT]       binding 0
                -- index   0
                -- name    'input_0'
                -- type    FP32
                -- in/out  INPUT
                -- # dims  4
                -- dim #0  1 (SPATIAL)
                -- dim #1  3 (SPATIAL)
                -- dim #2  300 (SPATIAL)
                -- dim #3  300 (SPATIAL)
[TRT]       binding 1
                -- index   1
                -- name    'scores'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1 (SPATIAL)
                -- dim #1  3000 (SPATIAL)
                -- dim #2  9 (SPATIAL)
[TRT]       binding 2
                -- index   2
                -- name    'boxes'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1 (SPATIAL)
                -- dim #1  3000 (SPATIAL)
                -- dim #2  4 (SPATIAL)
[TRT]    
[TRT]    binding to input 0 input_0  binding index:  0
[TRT]    binding to input 0 input_0  dims (b=1 c=3 h=300 w=300) size=1080000
[TRT]    binding to output 0 scores  binding index:  1
[TRT]    binding to output 0 scores  dims (b=1 c=3000 h=9 w=1) size=108000
[TRT]    binding to output 1 boxes  binding index:  2
[TRT]    binding to output 1 boxes  dims (b=1 c=3000 h=4 w=1) size=48000
[TRT]    
[TRT]    device GPU, models/fruit/ssd-mobilenet.onnx initialized.
[TRT]    detectNet -- number object classes:  9
[TRT]    detectNet -- maximum bounding boxes:  3000
[TRT]    detectNet -- loaded 9 class info entries
[TRT]    detectNet -- number of object classes:  9
[image] loaded '/home/iamai2021/Desktop/scifair/image_classification_demo/jetson-inference/data/images/fruit_1.jpg'  (1024x683, 3 channels)
2 objects detected
detected obj 0  class #1 (Apple)  confidence=0.966337
bounding box 0  (392.750000, 306.816406)  (745.179688, 659.201538)  w=352.429688  h=352.385132
detected obj 1  class #1 (Apple)  confidence=0.875261
bounding box 1  (244.625000, 42.770874)  (558.347961, 393.125183)  w=313.722961  h=350.354309
[OpenGL] glDisplay -- set the window size to 1024x683
[OpenGL] creating 1024x683 texture (GL_RGB8 format, 2098176 bytes)
[cuda]   registered openGL texture for interop access (1024x683, GL_RGB8, 2098176 bytes)
[image] saved '/home/iamai2021/Desktop/scifair/image_classification_demo/jetson-inference/data/images/test/fruit_1.jpg'  (1024x683, 3 channels)

[TRT]    ------------------------------------------------
[TRT]    Timing Report models/fruit/ssd-mobilenet.onnx
[TRT]    ------------------------------------------------
[TRT]    Pre-Process   CPU   0.14365ms  CUDA   3.27328ms
[TRT]    Network       CPU 351.21375ms  CUDA 347.62515ms
[TRT]    Post-Process  CPU 130.86665ms  CUDA 131.09120ms
[TRT]    Visualize     CPU 316.61752ms  CUDA 316.79739ms
[TRT]    Total         CPU 798.84155ms  CUDA 798.78699ms
[TRT]    ------------------------------------------------

[TRT]    note -- when processing a single image, run 'sudo jetson_clocks' before
                to disable DVFS for more accurate profiling/timing measurements

[image] imageLoader -- End of Stream (EOS) has been reached, stream has been closed
detectnet:  shutting down...
detectnet:  shutdown complete.
```
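In the log, each detection reports its two corner coordinates followed by `w=`/`h=`, which are simply the coordinate differences. A trivial check against "bounding box 0" from the output above (the helper `box_size` is my own, not jetson-inference code):

```python
def box_size(x1: float, y1: float, x2: float, y2: float) -> tuple:
    """Width and height of a box given top-left (x1, y1) and
    bottom-right (x2, y2) corners, as detectnet prints them."""
    return x2 - x1, y2 - y1

if __name__ == "__main__":
    # Corner coordinates of "bounding box 0" in the log above.
    w, h = box_size(392.750000, 306.816406, 745.179688, 659.201538)
    print(f"w={w:.6f}  h={h:.6f}")  # matches w=352.429688 h=352.385132
```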

#### **== Static Test Image Results ==**
(located in `/home/iamai2021/Desktop/scifair/image_classification_demo/jetson-inference/data/images/test`)
![](https://i.imgur.com/xAR5t0J.jpg)



#### **== Running the Live Camera Program ==**

    detectnet --model=models/fruit/ssd-mobilenet.onnx --labels=models/fruit/labels.txt \
              --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
                /dev/video0

Resulting output ->
```
iamai2021@iamai2021:~/Desktop/scifair/image_classification_demo/jetson-inference/python/training/detection/ssd$ detectnet --model=models/fruit/ssd-mobilenet.onnx --labels=models/fruit/labels.txt           --input-blob=input_0 --output-cvg=scores --output-bbox=boxes             /dev/video0
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video0
[gstreamer] gstCamera -- found v4l2 device: UVC Camera (046d:0825)
[gstreamer] v4l2-proplist, device.path=(string)/dev/video0, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"UVC\ Camera\ \(046d:0825\)", v4l2.device.bus_info=(string)usb-70090000.xusb-2, v4l2.device.version=(uint)264649, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found 38 caps for v4l2 device /dev/video0
[gstreamer] [0] video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)960, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 15/2, 5/1 };
[gstreamer] [1] video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 15/2, 5/1 };
[gstreamer] [2] video/x-raw, format=(string)YUY2, width=(int)1184, height=(int)656, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 10/1, 5/1 };
[gstreamer] [3] video/x-raw, format=(string)YUY2, width=(int)960, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 10/1, 5/1 };
[gstreamer] [4] video/x-raw, format=(string)YUY2, width=(int)1024, height=(int)576, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 10/1, 5/1 };
[gstreamer] [5] video/x-raw, format=(string)YUY2, width=(int)960, height=(int)544, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 15/1, 10/1, 5/1 };
[gstreamer] [6] video/x-raw, format=(string)YUY2, width=(int)800, height=(int)600, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [7] video/x-raw, format=(string)YUY2, width=(int)864, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [8] video/x-raw, format=(string)YUY2, width=(int)800, height=(int)448, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [9] video/x-raw, format=(string)YUY2, width=(int)752, height=(int)416, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [10] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [11] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)360, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [12] video/x-raw, format=(string)YUY2, width=(int)544, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [13] video/x-raw, format=(string)YUY2, width=(int)432, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [14] video/x-raw, format=(string)YUY2, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [15] video/x-raw, format=(string)YUY2, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [16] video/x-raw, format=(string)YUY2, width=(int)320, height=(int)176, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [17] video/x-raw, format=(string)YUY2, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [18] video/x-raw, format=(string)YUY2, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [19] image/jpeg, width=(int)1280, height=(int)960, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [20] image/jpeg, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [21] image/jpeg, width=(int)1184, height=(int)656, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [22] image/jpeg, width=(int)960, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [23] image/jpeg, width=(int)1024, height=(int)576, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [24] image/jpeg, width=(int)960, height=(int)544, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [25] image/jpeg, width=(int)800, height=(int)600, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [26] image/jpeg, width=(int)864, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [27] image/jpeg, width=(int)800, height=(int)448, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [28] image/jpeg, width=(int)752, height=(int)416, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [29] image/jpeg, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [30] image/jpeg, width=(int)640, height=(int)360, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [31] image/jpeg, width=(int)544, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [32] image/jpeg, width=(int)432, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [33] image/jpeg, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [34] image/jpeg, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [35] image/jpeg, width=(int)320, height=(int)176, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [36] image/jpeg, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [37] image/jpeg, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] gstCamera -- selected device profile:  codec=mjpeg format=unknown width=1280 height=720
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video0 ! image/jpeg, width=(int)1280, height=(int)720 ! jpegdec ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully created device v4l2:///dev/video0
[video]  created gstCamera from v4l2:///dev/video0
------------------------------------------------
gstCamera video options:
------------------------------------------------
  -- URI: v4l2:///dev/video0
     - protocol:  v4l2
     - location:  /dev/video0
  -- deviceType: v4l2
  -- ioType:     input
  -- codec:      mjpeg
  -- width:      1280
  -- height:     720
  -- frameRate:  30.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- rtspLatency 2000
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution:  1920x1080
[OpenGL] glDisplay -- X window resolution:    1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[video]  created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
  -- URI: display://0
     - protocol:  display
     - location:  0
  -- deviceType: display
  -- ioType:     output
  -- codec:      raw
  -- width:      1920
  -- height:     1080
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- rtspLatency 2000
------------------------------------------------

detectNet -- loading detection network model from:
          -- prototxt     NULL
          -- model        models/fruit/ssd-mobilenet.onnx
          -- input_blob   'input_0'
          -- output_cvg   'scores'
          -- output_bbox  'boxes'
          -- mean_pixel   0.000000
          -- mean_binary  NULL
          -- class_labels models/fruit/labels.txt
          -- threshold    0.500000
          -- batch_size   1

[TRT]    TensorRT version 7.1.3
[TRT]    loading NVIDIA plugins...
[TRT]    Registered plugin creator - ::GridAnchor_TRT version 1
[TRT]    Registered plugin creator - ::NMS_TRT version 1
[TRT]    Registered plugin creator - ::Reorg_TRT version 1
[TRT]    Registered plugin creator - ::Region_TRT version 1
[TRT]    Registered plugin creator - ::Clip_TRT version 1
[TRT]    Registered plugin creator - ::LReLU_TRT version 1
[TRT]    Registered plugin creator - ::PriorBox_TRT version 1
[TRT]    Registered plugin creator - ::Normalize_TRT version 1
[TRT]    Registered plugin creator - ::RPROI_TRT version 1
[TRT]    Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT]    Could not register plugin creator -  ::FlattenConcat_TRT version 1
[TRT]    Registered plugin creator - ::CropAndResize version 1
[TRT]    Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT]    Registered plugin creator - ::Proposal version 1
[TRT]    Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT]    Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT]    Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT]    Registered plugin creator - ::Split version 1
[TRT]    Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT]    Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT]    detected model format - ONNX  (extension '.onnx')
[TRT]    desired precision specified for GPU: FASTEST
[TRT]    requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]    native precisions detected for GPU:  FP32, FP16
[TRT]    selecting fastest native precision for GPU:  FP16
[TRT]    attempting to open engine cache file models/fruit/ssd-mobilenet.onnx.1.1.7103.GPU.FP16.engine
[TRT]    loading network plan from engine cache... models/fruit/ssd-mobilenet.onnx.1.1.7103.GPU.FP16.engine
[TRT]    device GPU, loaded models/fruit/ssd-mobilenet.onnx
[TRT]    Deserialize required 34499541 microseconds.
[TRT]    
[TRT]    CUDA engine context initialized on device GPU:
[TRT]       -- layers       107
[TRT]       -- maxBatchSize 1
[TRT]       -- workspace    0
[TRT]       -- deviceMemory 23420416
[TRT]       -- bindings     3
[TRT]       binding 0
                -- index   0
                -- name    'input_0'
                -- type    FP32
                -- in/out  INPUT
                -- # dims  4
                -- dim #0  1 (SPATIAL)
                -- dim #1  3 (SPATIAL)
                -- dim #2  300 (SPATIAL)
                -- dim #3  300 (SPATIAL)
[TRT]       binding 1
                -- index   1
                -- name    'scores'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1 (SPATIAL)
                -- dim #1  3000 (SPATIAL)
                -- dim #2  9 (SPATIAL)
[TRT]       binding 2
                -- index   2
                -- name    'boxes'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1 (SPATIAL)
                -- dim #1  3000 (SPATIAL)
                -- dim #2  4 (SPATIAL)
[TRT]    
[TRT]    binding to input 0 input_0  binding index:  0
[TRT]    binding to input 0 input_0  dims (b=1 c=3 h=300 w=300) size=1080000
[TRT]    binding to output 0 scores  binding index:  1
[TRT]    binding to output 0 scores  dims (b=1 c=3000 h=9 w=1) size=108000
[TRT]    binding to output 1 boxes  binding index:  2
[TRT]    binding to output 1 boxes  dims (b=1 c=3000 h=4 w=1) size=48000
[TRT]    
[TRT]    device GPU, models/fruit/ssd-mobilenet.onnx initialized.
[TRT]    detectNet -- number object classes:  9
[TRT]    detectNet -- maximum bounding boxes:  3000
[TRT]    detectNet -- loaded 9 class info entries
[TRT]    detectNet -- number of object classes:  9
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> jpegdec0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> jpegdec0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> jpegdec0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstreamer message stream-start ==> pipeline0
detectnet:  failed to capture video frame
detectnet:  failed to capture video frame
detectnet:  failed to capture video frame
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstCamera -- map buffer size was less than max size (1382400 vs 1382407)
[gstreamer] gstCamera recieve caps:  video/x-raw, format=(string)I420, width=(int)1280, height=(int)720, interlace-mode=(string)progressive, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2, colorimetry=(string)1:4:0:0, framerate=(fraction)30/1
[gstreamer] gstCamera -- recieved first frame, codec=mjpeg format=i420 width=1280 height=720 size=1382407
RingBuffer -- allocated 4 buffers (1382407 bytes each, 5529628 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
RingBuffer -- allocated 4 buffers (2764800 bytes each, 11059200 bytes total)
[gstreamer] gstreamer message qos ==> v4l2src0
[OpenGL] glDisplay -- set the window size to 1280x720
[OpenGL] creating 1280x720 texture (GL_RGB8 format, 2764800 bytes)
[cuda]   registered openGL texture for interop access (1280x720, GL_RGB8, 2764800 bytes)

[TRT]    ------------------------------------------------
[TRT]    Timing Report models/fruit/ssd-mobilenet.onnx
[TRT]    ------------------------------------------------
[TRT]    Pre-Process   CPU 139.35304ms  CUDA 141.23817ms
[TRT]    Network       CPU 388.54294ms  CUDA 386.25436ms
[TRT]    Post-Process  CPU   3.97593ms  CUDA   4.20906ms
[TRT]    Total         CPU 531.87195ms  CUDA 531.70160ms
[TRT]    ------------------------------------------------

[TRT]    note -- when processing a single image, run 'sudo jetson_clocks' before
                to disable DVFS for more accurate profiling/timing measurements


[TRT]    ------------------------------------------------
[TRT]    Timing Report models/fruit/ssd-mobilenet.onnx
[TRT]    ------------------------------------------------
[TRT]    Pre-Process   CPU   0.05547ms  CUDA   1.56542ms
[TRT]    Network       CPU  67.45556ms  CUDA  54.54016ms
[TRT]    Post-Process  CPU   3.96697ms  CUDA   4.17838ms
[TRT]    Total         CPU  71.47800ms  CUDA  60.28396ms
[TRT]    ------------------------------------------------

...
...

[TRT]    ------------------------------------------------
[TRT]    Timing Report models/fruit/ssd-mobilenet.onnx
[TRT]    ------------------------------------------------
[TRT]    Pre-Process   CPU   0.05912ms  CUDA   2.62240ms
[TRT]    Network       CPU  40.36085ms  CUDA  27.54724ms
[TRT]    Post-Process  CPU   4.40839ms  CUDA   4.45115ms
[TRT]    Total         CPU  44.82835ms  CUDA  34.62078ms
[TRT]    ------------------------------------------------

[OpenGL] glDisplay -- the window has been closed

[TRT]    ------------------------------------------------
[TRT]    Timing Report models/fruit/ssd-mobilenet.onnx
[TRT]    ------------------------------------------------
[TRT]    Pre-Process   CPU   0.09860ms  CUDA   1.17719ms
[TRT]    Network       CPU  39.76318ms  CUDA  28.97302ms
[TRT]    Post-Process  CPU   4.35792ms  CUDA   4.40286ms
[TRT]    Total         CPU  44.21970ms  CUDA  34.55307ms
[TRT]    ------------------------------------------------

detectnet:  shutting down...
[gstreamer] gstCamera -- stopping pipeline, transitioning to GST_STATE_NULL
[gstreamer] gstCamera -- pipeline stopped
detectnet:  shutdown complete.
iamai2021@iamai2021:~/Desktop/scifair/image_classification_demo/jetson-inference/python/training/detection/ssd$
```
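The log above was produced by running `detectnet` on the live camera with the re-trained model. A likely reconstruction of that command is sketched below: the model path, labels path, and the `input_0`/`scores`/`boxes` blob names are taken directly from the log, while the flag names follow the jetson-inference documentation and assume you are in the `jetson-inference/python/training/detection/ssd` directory.

```shell
# Sketch of the invocation behind the log above (flags per jetson-inference docs;
# paths and blob names copied from the log). Requires a Jetson with a V4L2 camera.
detectnet --model=models/fruit/ssd-mobilenet.onnx \
          --labels=models/fruit/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
          /dev/video0
```

On the first run TensorRT builds (or here, deserializes) the FP16 engine cache, which is why the log pauses for tens of seconds at `Deserialize required ...` before the camera window appears.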

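The timing reports in the log make the warm-up effect easy to quantify: the first frame takes ~531 ms total (it includes one-time CUDA/TensorRT initialization), while later frames settle around ~44 ms. A quick calculation of the steady-state throughput, using the `Total CPU` numbers reported above:

```python
# Frame times taken from the TRT timing reports in the log above.
first_total_ms = 531.87   # first frame: includes one-time warm-up, ignore for FPS
steady_total_ms = 44.22   # later frames: representative per-frame cost

fps = 1000.0 / steady_total_ms
print(f"steady state: ~{fps:.1f} FPS")  # roughly 22-23 FPS on Jetson Nano
```

So despite the slow first frame, the re-trained SSD-Mobilenet runs at roughly real-time rates on the Nano once the engine is loaded.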
#### **== 實時攝像頭結果圖示 (Live Camera Results) ==**
![](https://i.imgur.com/EG0PPL1.png)

(The drawing looks quite lifelike!!! Apparently the Jetson Nano agrees!)

![](https://i.imgur.com/aD0H0To.png)

(Haha! This works too!)

![](https://i.imgur.com/ObiadOW.png)

(It seems tasty jackfruit isn't available in the Americas...)
