# **Implement YOLO V4 on PYNQ-Z2**

## 1. Installation and Darknet Setup
### Step 0 Download script folder
>https://drive.google.com/drive/folders/1iCXj49506T2gT8xLs0uYqAe5ZBVnIgfQ?usp=sharing
### Step 1 Open VMware and install Ubuntu on the VM
#### VM environment info
- Software : VMware Workstation 16
- OS : Ubuntu 18.04
- System resources : 16 GB RAM, 100 GB disk space
#### VMware installation URL
> https://www.vmware.com/latam/products/workstation-player/workstation-player-evaluation.html
#### Ubuntu installation URL
> https://releases.ubuntu.com/18.04.5/
### Step 2 Download Darknet and Vitis AI
#### Download Darknet
$git clone https://github.com/AlexeyAB/darknet
#### Download Vitis AI
##### Install docker
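No commands are listed for this step; a minimal sketch, assuming the Docker package shipped with Ubuntu 18.04 is sufficient (Xilinx also documents installing docker-ce from Docker's own repository):

```bash
# Install Docker from the Ubuntu archive and allow the current user to run it
sudo apt-get update
sudo apt-get install -y docker.io
sudo usermod -aG docker $USER   # log out and back in for the group change to take effect
```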
##### Install Vitis
$git clone --recurse-submodules https://github.com/Xilinx/Vitis-AI
$cd Vitis-AI
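Section 3 runs the quantizer inside a Vitis AI docker container, so a docker image has to be available first. A sketch, assuming the Vitis AI 1.x images (the GPU image referenced in section 3 is built locally rather than pulled):

```bash
# The prebuilt CPU-only image can be pulled from Docker Hub (works for quantization, just slower)
docker pull xilinx/vitis-ai-cpu:latest
# The GPU image (xilinx/vitis-ai-gpu:<version>) is built from the docker build scripts
# shipped inside the Vitis-AI repo; see the repo's docker setup instructions
```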
### Step 3 Download DNNDK package
#### DNNDK package installation URL
>https://drive.google.com/file/d/13Ri9_tnyc-B0gVqGqUyLzUprSrOrFovR/view?usp=sharing
#### Extract DNNDK package
$tar -xzvf xilinx_dnndk_v3.1_190809.tar.gz

#### Install DNNDK Dependency
$chmod -R 777 script
$cd script
$source DNNDK_Installation.sh
#### Switch to the DNNDK host directory
$cd ~/dnndkv3.1/host_x86
#### Modify install.sh lines 49-51
$nano ./install.sh

#### Install DNNDK
$./install.sh
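After install.sh completes, it is worth checking that the host tools landed on PATH; a quick check, assuming the DNNDK v3.1 tool names (dnnc is the compiler used later for the PYNQ-Z2 .elf):

```bash
# Should print install paths such as /usr/local/bin/dnnc and /usr/local/bin/decent
which dnnc decent
```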

#### Set up JupyterLab
$jupyter lab --generate-config
$ipython
<font color="green"> In[1] </font> : from notebook.auth import passwd
<font color="green"> In[2] </font> : passwd()
Enter password: xilinx
Verify password: xilinx
##### Your password
Copy the hashed string that passwd() prints (it typically looks like 'sha1:<salt>:<hash>'); this is the value to use as 'your password' in the config below.
$gedit .jupyter/jupyter_lab_config.py
#### Add the following at the end
c.NotebookApp.ip = '*'
c.NotebookApp.password = u'your password'
c.NotebookApp.open_browser = False
c.NotebookApp.port = 5000
c.NotebookApp.allow_root = True
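Once these lines are saved you can start JupyterLab and reach it from a browser on another machine; a minimal sketch (replace <machine-ip> with the address of your VM or board):

```bash
# Start JupyterLab with the settings added above; keep this terminal open
jupyter lab
# Then browse to http://<machine-ip>:5000 and log in with the password set earlier
```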

## 2. Darknet Model Conversion to TensorFlow
### Download the Vitis Tutorials repository
$cd Vitis-AI
$git clone https://github.com/Xilinx/Vitis-Tutorials.git
### To convert to TensorFlow you will also need the following repository:
$cd Vitis-Tutorials/Machine_Learning/Design_Tutorials/07-yolov4-tutorial
$git clone https://github.com/david8862/keras-YOLOv3-model-set
### Convert Model
$cd Vitis-AI/Vitis-Tutorials/Machine_Learning/Design_Tutorials/07-yolov4-tutorial/scripts
$./convert_yolov4.sh
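convert_yolov4.sh drives the converters in keras-YOLOv3-model-set. Roughly, it performs the two steps below (relative paths, file names, and exact arguments are assumptions; check the script itself, which may pass extra flags such as YOLOv4 output reordering):

```bash
# 1) darknet .cfg/.weights -> Keras .h5
python ../keras-YOLOv3-model-set/tools/model_converter/convert.py \
    yolov4.cfg yolov4.weights yolov4.h5
# 2) Keras .h5 -> frozen TensorFlow .pb for the Vitis AI quantizer
python ../keras-YOLOv3-model-set/tools/model_converter/keras_to_tensorflow.py \
    --input_model yolov4.h5 --output_model yolov4.pb
```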

## 3. Model Quantization and Compilation
$cd Vitis-AI
$./docker_run.sh xilinx/vitis-ai-gpu:<version>
$conda activate vitis-ai-tensorflow
$cd Vitis-Tutorials/Machine_Learning/Design_Tutorials/07-yolov4-tutorial/scripts
$./quantize_yolov4.sh
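quantize_yolov4.sh wraps the Vitis AI TensorFlow quantizer. A sketch of the kind of call it makes, assuming placeholder paths, node names, and calibration input function (take the real values from the script and from the model summary):

```bash
# Calibrate and quantize the frozen graph; every value below is a placeholder
vai_q_tensorflow quantize \
    --input_frozen_graph ../keras_model/yolov4.pb \
    --input_nodes image_input \
    --input_shapes ?,416,416,3 \
    --output_nodes <output_node_1>,<output_node_2>,<output_node_3> \
    --input_fn <calib_module>.calib_input \
    --calib_iter 100 \
    --output_dir ../yolov4_quantized
```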

$exit
## 4. Model Deployment on PYNQ-Z2
### Get the .pb file and put it in pynq-z2/quantized_model
#### Location of your .pb file
The quantizer writes its output (typically deploy_model.pb and quantize_eval_model.pb) into the output directory set in quantize_yolov4.sh; check pynq-z2_compile.sh to see which file it expects.
#### Put the .pb file into pynq-z2/quantized_model

### Compile to get the .elf model
$cd script/pynq-z2
$./pynq-z2_compile.sh
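pynq-z2_compile.sh drives the DNNDK compiler. A sketch of the kind of call it typically contains, assuming the DNNDK v3.1 dnnc flow with a .dcf file describing the PYNQ-Z2 DPU (file names and paths are placeholders):

```bash
# Compile the quantized model into a DPU kernel (.elf) for the PYNQ-Z2 (arm32 Cortex-A9)
dnnc --parser=tensorflow \
     --frozen_pb=../quantized_model/deploy_model.pb \
     --dcf=../dcf/pynq-z2.dcf \
     --cpu_arch=arm32 \
     --output_dir=../elf \
     --net_name=yolov4 \
     --mode=normal
```
The kernel name printed at the end of this step is what the next section asks you to record.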

### Record the kernel name, input node, and output node shown in the compiler's model summary; the kernel name is what the application passes to dpuLoadKernel(), and the node names are used to feed the input and read back the output tensors

### Run YOLOV4 on PYNQ-Z2
#### Download YOLO API
Download link
#### Copy the .elf file from the script/elf folder into the yolo_API folder and compile
$./elf_compile.sh
$make

#### Run YOLOV4
$./yolo yourpicture.jpg

#### This is the result of running yolo on the PYNQ-Z2; you can also get the result from test.txt or the detected image
##### detectedYourPictureName.jpg

##### test.txt

## 5. Reference
>https://github.com/Xilinx/Vitis-Tutorials/tree/6171553db3e200de44ce669242443547fd578ce5/Machine_Learning/Design_Tutorials/07-yolov4-tutorial
>https://phoenixnap.com/kb/how-to-install-anaconda-ubuntu-18-04-or-20-04