# Valls Jetson DetectNet Setup

## <b>Characterisation : Tech Specification</b>

- <a href="https://www.youtube.com/watch?v=uvU8AXY1170">Jetson Fundamentals S1E1 First time Setup : Youtube</a>
- B01 : 4 GB hardware version is what we are using here.
- Serial reference is :
- B01 : 4 GB has two CSI camera ports
- B01 : 4 GB has an extra USB port and an HDMI port; the jumper location differs.
- B01 : 2 GB version replaces the barrel jack with USB-C.
- Barrel jack supply : 5.1 V, 4 A
  - (will it also power the screen and the external Logitech camera?)
- USB-C supply : 5.1 V, 3 A
- JetPack version is 4.4?
- Operating system version ?
- Etcher used to burn the SD card
- Using a 128 GB SD card
- Wifi and Bluetooth header
- Lasercut case with :
- Raspberry Pi Camera Module 2 (RGB), CSI slot 1
- Raspberry Pi Camera Module NoIR (NIR), CSI slot 2
- Logitech HD 1080p USB webcam
- NGS Flea Bluetooth mouse with USB dongle
- KMOON Bluetooth foldable keyboard (usable once the Wifi/Bluetooth header is mounted), standard American English layout
- Waveshare touch-sensitive screen (the screen will need to be plugged into a separate power supply)

![alt text](https://www.pbtech.co.nz/fileslib/_20200521103250_552.png)

---

## <b>Jetson Configuration</b>

- Check which software version you have installed : <a href="https://forums.developer.nvidia.com/t/how-do-i-know-what-version-of-l4t-my-jetson-tk1-is-running/38893">Checking Jetson Version</a>
- JetPack 4.4 includes TensorRT 7.1

`head -n 1 /etc/nv_tegra_release`

The output for my version is as follows :

`# R32 (release), REVISION: 7.4, GCID: 33514132, BOARD: t210ref, EABI: aarch64, DATE: Fri Jun 9 04:25:08 UTC 2023`

<a href="https://developer.download.nvidia.com/assets/embedded/secure/jetson/Nano/docs/NVIDIA_Jetson_Nano_Developer_Kit_User_Guide.pdf?drFbrGVPXop1NeH5K8VpLvbktOrpx7HFA5JeIzNWnqJrJXXTg1MUcgb_VulAfvrDrvKR452pbnyS7iQMtKqDi5nooaDGK9j6_lHBLRP7TnPIMsh1MJZrYN94oTnC2--rf3No22-H9m43iei9S3rABu3NiTyi-hpNK8RigIWD5wwdXJVBFg-QLogqGHJfwCexnpF-gTLe&t=eyJscyI6InJlZiIsImxzZCI6IlJFRi1kdWNrZHVja2dvLmNvbS8ifQ==">Jetson User Guide Manual</a>
<a href="https://devtalk.nvidia.com/default/board/372/jetson-projects/">Jetson Forums</a>

<b>Headless Setup</b>

Use SSH to log in to the Nano remotely through the micro-USB socket.
- Connect the Nano to a computer using a data cable; be careful not to confuse this with a power-only cable.

```bash=
ssh <username>@192.168.55.1
```

The vi text editor is already installed, but you may wish to install nano for some tasks.
<a href="https://linuxize.com/post/how-to-use-nano-text-editor/">Installing Nano Text editor</a>

<b>Graphical Setup</b>
- Using the screen

---

<b>Linux and Software Setup</b>

Creating swap space :
To run some of the deep learning tools available, 4 GB of swap space is required. To check the current memory and swap, open the Terminal and use `free -m`. The command should give a readout similar to the one below.

```bash
free -m
```

The following is a typical readout :

```bash
j@j-desktop:~$ free -m
              total        used        free      shared  buff/cache   available
Mem:           3955         922        2298          25         734        2858
Swap:          1977           0        1977
```

The next commands (shown here together with their output) are :

```bash
sudo systemctl disable nvzramconfig
[sudo] password for j:
Removed /etc/systemd/system/multi-user.target.wants/nvzramconfig.service.
sudo fallocate -l 4G /mnt/4GB.swap
sudo chmod 600 /mnt/4GB.swap
sudo mkswap /mnt/4GB.swap
Setting up swapspace version 1, size = 4 GiB (4294963200 bytes)
no label, UUID=53419e56-f69c-4158-b882-50b1baa720be
sudo vi /etc/fstab
```

Add the swap entry to `/etc/fstab` :

```
# /etc/fstab: static file system information.
#
# These are the filesystems that are always mounted on boot, you can
# override any of these by copying the appropriate line from this file into
# /etc/fstab and tweaking it as you see fit.  See fstab(5).
#
# <file system>   <mount point>   <type>   <options>   <dump>   <pass>
/dev/root         /               ext4     defaults    0        1
/mnt/4GB.swap     swap            swap     defaults    0        0
```

To exit the vi editor, press the `Esc` key, then use the command `:wq` to save and close the file. You will then need to reboot.

```bash
free -m
```

The following readout will confirm the new swap space :

```bash
j@j-desktop:~$ free -m
              total        used        free      shared  buff/cache   available
Mem:           3955         748        2600          17         607        3039
Swap:          4095           0        4095
```

TIP : You may need to activate (`mount`) the swap. TUTORIAL NEEDED
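A minimal sketch of activating the swap without waiting for a reboot, assuming the `/mnt/4GB.swap` file and fstab entry created above (these are standard Linux commands rather than anything JetPack-specific) :

```bash
# Activate every swap entry listed in /etc/fstab
sudo swapon -a
# ...or activate the file directly
sudo swapon /mnt/4GB.swap

# Confirm the 4 GB swap is active
swapon --show
free -m
```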
---

<b>Reading ExFAT format external Drives</b>

You may wish to read data stored on an external hard drive which is formatted in ExFAT. These steps will allow you to do that.

```bash
sudo add-apt-repository universe
sudo apt update
sudo apt install exfat-fuse exfat-utils
```

---

<b>GPS Setup</b>

<a href="https://www.waveshare.com/wiki/Template:SIM7600G-H_4G_for_Jetson_Nano_User_Manual">Setting up GPS Waveshare header</a>

---

<b>Bluetooth and Wifi Setup</b>

<a href="https://jetsonhacks.com/2019/04/08/jetson-nano-intel-wifi-and-bluetooth/">Wifi addition</a>
<a href="http://www.bluez.org/">Bluez code for bluetooth</a>
<a href="https://forums.developer.nvidia.com/t/how-to-use-bluetooth-on-the-jetson-nano/111280/2">Jetson Nano Bluetooth and wifi</a>
<a href="https://github.com/romi/flower-power">Flower Power</a>

A typical command would be `gattctl --discover` or `python3 flower-power-history.py list`. For both of the above commands, use Ctrl+C to exit. Then, to download data, use the following command :

`python3 flower-power-history.py download <mac-address> <output-file>`

The following is an example :

`python3 flower-power-history.py download a0:14:3d:7d:5f:94 ~/Documents/git/romi/flower-power/20240702_Mateus_5F94_R2`

---

<b>Hello AI World : Container Setup</b>

<a href="https://github.com/dusty-nv/jetson-inference/tree/master">DLI : Hello AI World Github resource</a>

<b>Building Projects from Source</b>

<a href="https://github.com/dusty-nv/jetson-inference/blob/master/docs/building-repo-2.md">Hello Ai World : Source Tutorial</a>

Task : Try to do this

```bash=
sudo apt-get update
sudo apt-get install git cmake libpython3-dev python3-numpy
git clone --recursive --depth=1 https://github.com/dusty-nv/jetson-inference
cd jetson-inference
mkdir build
cd build
cmake ../
make -j$(nproc)
sudo make install
sudo ldconfig
```
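After `sudo make install`, a quick sanity check can confirm that the tools and Python bindings landed where expected — a sketch, assuming the project's default install prefix :

```bash
# The compiled C++ tools should now be on the PATH
which detectnet video-viewer imagenet

# The Python bindings should import cleanly
# (on newer checkouts the module names may be jetson_inference / jetson_utils)
python3 -c "import jetson.inference, jetson.utils; print('jetson-inference OK')"
```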
<b>Building Projects from Container</b>

<a href="https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-docker.md">Hello Ai World : Docker Tutorial</a>

- Clone the repository to your home folder using the following commands :
  - `$ git clone --recursive --depth=1 https://github.com/dusty-nv/jetson-inference`
  - `$ cd jetson-inference`
  - `$ docker/run.sh`
- When you run the docker/run command for the first time it will automatically pull the correct docker image for your machine.
  - I should be using <a href="https://hub.docker.com/r/dustynv/jetson-inference/tags">jetson-inference:r32.7.1</a>
- It will also prompt you to 'download DNN Models', which is about 2.2 GB covering around 40 models. (NOTE : This may differ from the online tutorials and a prompt may not be given.)
- You can use the DNN Model Downloader tool with :
  - `cd <jetson-inference>/tools`
  - `./download-models.sh`
- Image Classification models include :
  - AlexNet
  - GoogleNet
  - GoogleNet-12
  - ResNet-18
  - ResNet-50
  - ResNet-101
  - ResNet-152
  - VGG-16
  - VGG-19
  - Inception-v4
- Object Detection models include :
  - SSD-MobileNet-v1
  - SSD-MobileNet-v2
  - SSD-Inception-v2
  - PedNet
  - MultiPed
  - FaceNet
  - DetectNet-COCO-Dog
  - DetectNet-COCO-Bottle
  - DetectNet-COCO-Chair
  - DetectNet-COCO-Airplane
- Semantic Segmentation models include :
  - FCN-ResNet18-Cityscapes-512x256
  - FCN-ResNet18-Cityscapes-1024x512
  - FCN-ResNet18-Cityscapes-2048x1024
  - FCN-ResNet18-DeepScene-576x320
  - FCN-ResNet18-DeepScene-864x480
  - FCN-ResNet18-MHP-640x360
  - FCN-ResNet18-Pascal-VOC-320x320
  - FCN-ResNet18-Pascal-VOC-512x320

To speed up some processes you can use this command :

```bash=
cd /path/to/your/jetson-inference/build
cmake -DENABLE_NVMM=OFF ../
make
sudo make install
```

TIP : Refer to the github repo to <a href="https://github.com/dusty-nv/jetson-inference/blob/master/docs/building-repo-2.md#downloading-models">download further models</a><br>
TIP : More information here about <a href="https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-docker.md#mounted-data-volumes">Mounted Data Volumes</a><br>
TIP : To exit the container you can use `exit`.

<b>Explanation and relationship of models</b>

- <a href="https://en.wikipedia.org/wiki/PyTorch">PyTorch</a> is a deep learning framework, used here for training and re-training models.
- ResNet : a deep residual network architecture, used as a classification backbone.
- CUDA is : NVIDIA's parallel computing platform, which lets the networks run on the Jetson's GPU.
- DetectNet : the jetson-inference object detection program/API used throughout this guide.
- Tensor and TensorRT : a tensor is the multi-dimensional array the networks operate on; TensorRT is NVIDIA's inference optimiser and runtime, which loads the exported ONNX models.
- SSD-MobileNet : an object detection network combining the SSD detector head with a MobileNet backbone; it is the base model re-trained below.

---

<b>Jupyter Notebook Docker setup :</b>

<a href="https://courses.nvidia.com/courses">NVIDIA Courses</a>

- Make a directory on the Nano desktop for your data using the following terminal command :

```
mkdir -p ~/nvdli-data
```

Tags can be found here : <a href="https://catalog.ngc.nvidia.com/orgs/nvidia/teams/dli/containers/dli-nano-ai/tags">DLI Course Tags</a>

In this case I am running the docker version with the following version tag added to the script : `v2.0.2-r32.7.1`

```bash=
sudo docker run --runtime nvidia -it --rm --network host --volume ~/nvdli-data:/nvdli-nano/data --device /dev/video0 nvcr.io/nvidia/dli/dli-nano-ai:v2.0.2-r32.7.1
```

SCRIPT : `./docker_dli_run.sh`
Web address 1 : 192.168.55.1:8888
Web address 2 : http://192.168.185.174:8888
PASSWORD : dlinano

---

<b>JupyterLab Project Notebooks Tutorials</b>

Once logged in, the JupyterLab notebook will look like this :
- Content cells are markdown text cells.
- Code cells are executable Python cells.

Each script can be run by pressing the play button at the top.<br>
You can also achieve the same by pressing the `Shift` and `Enter` keys.<br>
You can also change the code in a cell.

How to copy text from a cell? How to enter UTF-encoded text?

---

## <b>Camera and Multimedia Setup</b>

<b>DLI JupyterLab :</b>

<a href="https://www.youtube.com/watch?v=zsjcSapzUfU">Jetson AI Fundamentals - S1E2 - Hello Camera</a>

- Connect the USB webcam to the Nano's middle USB socket.
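Before launching the DLI container, it can be worth confirming that the webcam actually enumerates on the host. A small check, assuming the `v4l-utils` package (not part of the setup above) is installed :

```bash
# The Logitech webcam should appear as a /dev/video* node (CSI cameras show up here too)
ls /dev/video*

# Optional : install v4l-utils and list the formats/resolutions the camera offers
sudo apt-get install -y v4l-utils
v4l2-ctl --device=/dev/video0 --list-formats-ext
```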
<b>Hello Ai World : V4L2 Camera</b>

<a href="https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-streaming.md">Hello Ai World : Camera Streaming and Multimedia</a>

- Whilst in the jetson-inference docker container, use the following to check your camera : `ls /dev/video*`
- It should appear as : `/dev/video0`
- You can then run `video-viewer /dev/video0`
- Run `video-viewer --help` for more support with inputs, outputs and codecs

<b>Hello Ai World : From a video file</b>

- Use the command `video-viewer file://data/carrot_01.mp4 --input-codec=h264 --input-width=1014 --input-height=480`
- Remember that the full pixel dimensions from the Samsung are width=3040 height=1440
- NOTE : Check whether the file is on the host or in the container

<b>CSI Camera Setup</b>

- `video-viewer csi://0`

TIP : Use `Ctrl+C` to stop the running terminal command (`Ctrl+Z` only suspends it).

---

## <b>Classification</b>

<b>Image Classification : Using DLI JupyterLab for ?</b>

- Open a container
- Run all cells
- <a href="https://www.youtube.com/watch?v=rSqIvLQ8Meg">Jetson AI Fundamentals - S1E3 - Image Classification Project</a>
- Close kernel and clear all outputs
  - this will remove all variables
  - it will stop the camera
  - it will not lose your training data

<b>Image Regression : Using DLI JupyterLab for Video</b>

- Task : Obtain a coordinate system from image frames.

## <b>Object Detection :</b>

<b>Object Detection from custom images</b>

<a href="https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-console-2.md">Hello Ai : DetectNet Console</a>

Note : The first time you run each model, TensorRT will take a few minutes to optimize the network. This optimized network file is then cached to disk, so future runs using the model will load faster.

<b>Transfer learning with images</b>

This means re-training a pre-existing model on new images.

Task : Try this with Josep's fruits.

<b>Transfer learning with video</b>

<a href="https://www.youtube.com/watch?v=2XMkPW_sIGg">Hello Ai : Retraining and Custom Object Detection Models Video.</a><br>
<a href="https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-ssd.md">DLI : Re-training SSD mobile net</a>

- <a href="https://paperswithcode.com/dataset/coco">MSCOCO : Microsoft Common Objects in Context</a>
- <a href="https://storage.googleapis.com/openimages/web/visualizer/index.html?set=train&type=detection">Open Images library</a>
- Download selected images and annotation data to the container
- Test the dataset size first by running the stats only
  - `stats only script`

<b>Annotations using camera-capture tool from Video</b>

<a href="https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-collect-detection.md">Hello Ai : Pytorch Collection Detection</a>

- Navigate to `jetson-inference/python/training/detection/ssd`
- Run the `camera-capture` tool followed by a source :
  - You can point to a camera with `camera-capture /dev/video0`
  - To loop the video use `--loop=-1`
  - You can point to a video file, including codecs and loop, as below; it can also help to confirm the clip's real resolution first — see the resolution-check sketch below.

```
camera-capture file://data/carrot_00.mp4 --input-codec=h264 --input-width=1014 --input-height=480 --loop=-1
```

- The 'Data Capture Control' GUI window will appear.
- Choose the 'Detection' option; more options will appear there.
- Refer to the `pytorch-collect-detection.md` linked above.
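When feeding `video-viewer` or `camera-capture` a recorded clip, it helps to read the clip's real width/height and codec rather than guessing the `--input-width`/`--input-height` values. A sketch, assuming `ffmpeg` (which provides `ffprobe`) is installed — it is not part of the setup above :

```bash
# Install ffmpeg if needed (provides ffprobe)
sudo apt-get install -y ffmpeg

# Print the width, height and codec of the first video stream in the clip
ffprobe -v error -select_streams v:0 \
        -show_entries stream=width,height,codec_name \
        -of default=noprint_wrappers=1 data/carrot_00.mp4
```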
Dataset Path and Label Path setup :

- Create a `label.txt` file for the dataset in the ssd/data folder :
  - Create a new folder on your host (home folder) : `jetson-inference/python/training/detection/ssd/data`
  - Whilst in this folder you can create an empty `label.txt` file
  - TIP : You can use the right mouse click to open a terminal there and use the `touch` command with `label.txt`. (Check if doing this from the container or the host makes a difference.)
  - Open the text file and enter the classes you want to label. `carrot` and `weed` are mine.
- NOTE : When running from a video file, navigate back to where the video is stored to run the `camera-capture` tool.
  - My carrot input video is stored here : `jetson-inference/data/carrot_00.mp4`

Running the Data Capture tool and creating annotations :

- In the Data Capture Control GUI :
  - Copy the path to that folder into `dataset path`
  - Copy the path to the text file into `class labels`
  - Check the `merge` option. This will replicate the data across the train/val/test sets.
- Once the GUI is running you will be able to draw boxes around the objects you want to detect.
- You can change the class in the GUI.
- Once complete, save and close the GUI and the session will stop.
- You can add more training data at a later point by running with the `--resume` argument

```
camera-capture file://data/carrot_00.mp4 --resume --input-codec=h264 --input-width=1014 --input-height=480 --loop=-1
```

TIP : Try to collect equal amounts of data for each class

<b>Training a custom Model with Detectnet :</b>

By default pytorch writes the annotation data here in Pascal VOC format, which is viewable as an xml file. You will need to run :

```bash=
$ cd jetson-inference/python/training/detection/ssd
$ python3 train_ssd.py --dataset-type=voc --data=data/<YOUR-DATASET> --model-dir=models/<YOUR-MODEL>
```

You will need to add training parameters :

```bash=
$ python3 train_ssd.py --dataset-type=voc --data=data/weed01 --model-dir=models/weed01 --batch-size=2 --workers=1 --epochs=1
```

If you want to add more training data you will need to retrain your model; this will start from scratch but reuse your original annotations (check this is true). Note that you will also need to export the new retrained model from Pytorch to onnx format again, as explained below.

NOTE : Add retrain command here

```bash=
$ python3 train_ssd.py --dataset-type=voc --data=data/weed01 --model-dir=models/weed01 --batch-size=2 --workers=1 --epochs=1
```

NOTE : If you run out of memory you can try `MOUNTING swap` and `Disabling the Desktop GUI`

<b>Preparing files for testing with TensorRT</b>

- In the same directory, now export from Pytorch to onnx using :
  - `python3 onnx_export.py --model-dir=models/<YOUR-MODEL>`
- In my case this will be as follows :
  - `python3 onnx_export.py --model-dir=models/weed01`

<b>Running detectNet with a custom model</b>

- Run detectNet pointing both to your created model and to the source (camera, video, image), and then to an output folder. Examples follow below.
- Model Input : Load the ONNX file into TensorRT with the camera program.
- Output : "If you're using the Docker container, it's recommended to save the output images to the images/test mounted directory. These images will then be easily viewable from your host device under jetson-inference/data/images/test (for more info, see Mounted Data Volumes)"
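A concrete sketch of the above recommendation (my `weed01` model and folder names; the `--input-blob`/`--output-cvg`/`--output-bbox` names are the ones the SSD re-training tutorial linked earlier uses for exported ONNX models, and I assume the command is run from `python/training/detection/ssd` inside the container) :

```bash
# Point IMAGES at the mounted jetson-inference/data/images directory (adjust if yours differs)
IMAGES=/jetson-inference/data/images

# Run the re-trained ONNX model over the staged test images and write the
# annotated results into the mounted images/test directory
detectnet --model=models/weed01/ssd-mobilenet.onnx \
          --labels=models/weed01/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
          "$IMAGES/weed01/*.jpg" "$IMAGES/test/weed01_%i.jpg"
```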
<a href="https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-console-2.md">DetectNet Console -2</a>

Command-line options :

- optional `--network` flag which changes the detection model being used (the default is SSD-Mobilenet-v2).
- optional `--overlay` flag which can be comma-separated combinations of box, lines, labels, conf, and none
  - The default is `--overlay=box,labels,conf` which displays boxes, labels, and confidence values
  - The box option draws filled bounding boxes, while lines draws just the unfilled outlines
- optional `--alpha` value which sets the alpha blending value used during overlay (the default is 120).
- optional `--threshold` value which sets the minimum threshold for detection (the default is 0.5).

<b>Training Test Table :</b>

- High number of annotations (an) per image (im)
- Separating classes in each image
- An equal number of images and an equal number of annotations

| set | weed | carrot | size | conj | merge | Batch | Epochs | onnx | % | issue |
|:------:|:--------:|:--------:|:----:|:----:|:-----:|:-----:|:------:|:----:|:---:|:-----:|
| 00 | | | | y | y | 2 | 1 | n | 0 | 1 |
| 01 | 1im=5an | 2im=10an | 3 | n | y | 2 | 1 | y | 10 | n |
| 01re | 6im=31an | 5im=26an | 11 | n | y | 2 | 1 | y | 50 | n |
| 01re02 | 5im=25an | 5im=25an | 21 | n | y | 2 | 1 | y | 60 | n |
| 01re03 | 3im=15an | 3im=15an | 6 | n | y | 2 | 3 | y | 0 | n |
| 01re04 | | | | n | y | 2 | 1 | | | |

- 00 was conducted with all weeds and carrots annotated.
- 01re03 with 3 epochs resulted in next to no detections.
- 01re04 was conducted with annotations from sets with no added images.

Issue definition list :
1. issue : Too many indices : <a href="https://github.com/dusty-nv/jetson-inference/issues/674">Git issues/674</a>

<b>Albumentations Team to augment dataset</b>

<a href="https://github.com/albumentations-team/albumentations">Albumentations-Team</a>

<b>DetectNet from a single Image</b>

The following includes a `--network` flag selecting the mobilenet model; other <a href="https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-console-2.md#pre-trained-detection-models-available">pretrained models</a> are listed here. With no `--network` flag (and no path to a custom model file), the default preloaded detectnet model is used.

```
# Python
./detectnet.py --network=ssd-mobilenet-v2 images/peds_0.jpg images/test/output.jpg     # --network flag is optional
```

Included here are the flag options for my own model and its labels file.

```
# Python
detectnet.py --model=models/weed01/ssd-mobilenet.onnx --labels=models/weed01/labels.txt images/weed01/20231005-132928.jpg images/test/20231005-132928_output.jpg
```

Issue : failed to create video source device

<b>DetectNet from a series of Images</b>

A wildcard pattern such as `"images/peds_*.jpg"` selects all matching images from the source folder; you can narrow the selection by making the pattern more specific.

```
# Python
./detectnet.py "images/peds_*.jpg" images/test/peds_output_%i.jpg
```

?? : What is the `_%i` part of the output filename?
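My reading of the docs linked above (hedged — worth verifying) : the `%i` in the output path is substituted with a sequential image index, so each matched input image gets its own numbered output file.

```bash
./detectnet.py "images/peds_*.jpg" images/test/peds_output_%i.jpg

# Expected result (assumed naming; indices count up from 0):
ls images/test
# peds_output_0.jpg  peds_output_1.jpg  peds_output_2.jpg  ...
```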
<b>DetectNet from a camera (example)</b>

A reference from Hello Ai : <a href="https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-camera-2.md">DetectNet Camera-2</a>

```bash=
NET=models/<YOUR-MODEL>

detectnet --model=$NET/ssd-mobilenet.onnx --labels=$NET/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
          csi://0
```

?? Why do some detectNet commands start with `./`, others with nothing, some with `python`, others with `python3`?<br>
(As far as I can tell : plain `detectnet` is the compiled C++ binary that `sudo make install` puts on the PATH; `./detectnet.py` runs the Python script from the current directory; `python3 detectnet.py` runs the same script explicitly through the interpreter. The C++ and Python versions behave the same.)

The following is DetectNet from a webcam (my code).

```bash=
detectnet --model=models/weed01/ssd-mobilenet.onnx --labels=models/weed01/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes /dev/video0
```

<b>DetectNet from a video file</b>

<a href="https://forums.developer.nvidia.com/t/failed-to-load-detectnet-model/228761">forum loading from video script issue</a>

```bash=
./detectnet.py pedestrians.mp4 images/test/pedestrians_ssd.mp4
```

Added the `--threshold` flag and changed the overlay to lines :

```bash=
detectnet --model=models/weed01/ssd-mobilenet.onnx --labels=models/weed01/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes --overlay=lines,labels,conf --threshold=0.6 file://data/carrot_01.mp4 --input-codec=h264 --input-width=1014 --input-height=480
```

Functioning, yet detection is terrible.
Note : The file worked when it was in the Python/training/data folder.

<b>Coding a custom DetectNet file</b>

<a href="https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-example-2.md">Detectnet-example-2</a>

## <b>Semantic Segmentation</b>

<a href="https://segment-anything.com/">Segment Anything (Meta) :</a>
<a href="https://github.com/facebookresearch/segment-anything">Segment Anything Github</a>

---

<a href="https://courses.nvidia.com/courses/course-v1:DLI+C-IV-02+V1/about">DLI : DeepStream Video Analytics</a>
<a href="https://www.youtube.com/watch?v=z1cTzAIdXlA&list=PLWw98q-Xe7iH2zy8f7qZrmGhnheJtfXZB&index=10">Rocket Systems : Python</a>
<a href="">EDGE NVIDIA TAO Toolkit :</a>
<a href="https://www.youtube.com/watch?v=F-5s0bMDk5k">NVIDIA Spanish Tutorial</a>

---