---
tags: NVIDIA, NVIDIA GEFORCE RTX-3080
---
# NVIDIA RTX 3080 x2 benchmark environment: build and usage SOP (1000 runs)
###### tags: NVIDIA GEFORCE RTX-3080
## HW equipment
- Motherboard: BIS-5101 with NVIDIA GeForce RTX 3080 x 2
- CPU: Intel® Xeon® Gold 5318Y x 1
- RAM: 64 GB SO-DIMM x 2
- OS: Ubuntu 18.04 LTS Desktop, kernel 5.4.0-42-generic
- Docker: 20.10.11
- CUDA: 11.4
- NVIDIA TensorRT docker image version: 21.10-py3
- NVIDIA TensorFlow docker image version: 20.11-tf2-py3
## Step 1
### Install the Nvidia Driver
Download the latest stable driver from the NVIDIA official website: https://www.nvidia.com/zh-tw/geforce/drivers/
After downloading the .run file:
```bash
# chmod +x NVIDIA-Linux-x86_64-455.38.run   # the filename may differ depending on the driver version you downloaded
# apt install gcc make
# ./NVIDIA-Linux-x86_64-455.38.run          # choose the "continue install" option on every error message the installer shows
# reboot
```
After the reboot, run:
```bash=
nvidia-smi
```
If the driver installed successfully, it prints a table like the picture below:

## Step 2
### gpu_monitor script for logging the GPU values mentioned below
We use the nvidia-smi tool to monitor the GPU's temperature, power draw (watts), running processes, and clock frequency.
The script below prints those values and appends them to a log file in a loop.
You can run a test and monitor the GPU at the same time by switching TTYs in Linux or by using a terminal multiplexer such as tmux.
```bash=
# sudo su
# nano benchmark.sh
```
Copy the following bash script and save the file:
```bash=
#!/bin/bash
echo " " > ./gpu_log.txt
echo "please insert interval (sec) : "
read interval
# infinite monitoring loop; stop it with Ctrl+C
for ((i=1; i>0; i++))
do
    echo -e "\n=====i : ${i}=====\n" > ./gpu_log_tmp.txt
    nvidia-smi >> ./gpu_log_tmp.txt
    cat ./gpu_log_tmp.txt
    sleep 2
    # append clock information, dropping unavailable fields
    nvidia-smi -q -d CLOCK | grep -v N/A | grep -v "Not Found" >> ./gpu_log_tmp.txt
    cat ./gpu_log_tmp.txt
    cat ./gpu_log_tmp.txt >> ./gpu_log.txt
    sleep "${interval}"
done
```
Make the file executable and run the script.
```bash=
# chmod +x benchmark.sh
# ./benchmark.sh
```
Enter an interval of 1 to sample the GPU values every second.
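After a long run, you usually want just the temperature and power numbers back out of gpu_log.txt. Below is a minimal parsing sketch; the sample row and the grep patterns assume nvidia-smi's default table layout, which can vary between driver versions, so adjust them to your output if needed.

```shell
#!/bin/bash
# Sketch: extract temperature and power draw from a logged nvidia-smi row.
# The sample row below mirrors nvidia-smi's default table layout (an assumption;
# column positions can differ across driver versions).
cat > /tmp/gpu_log_sample.txt <<'EOF'
| 30%   45C    P8    20W / 320W |    429MiB / 10018MiB |      0%      Default |
EOF
# temperature in Celsius: the token that looks like "45C"
temp=$(grep -oE '[0-9]+C' /tmp/gpu_log_sample.txt | head -n1 | tr -d 'C')
# current power draw in watts: the first number of "20W / 320W"
watts=$(grep -oE '[0-9]+W / [0-9]+W' /tmp/gpu_log_sample.txt | awk '{print $1}' | tr -d 'W')
echo "temp=${temp}C power=${watts}W"
```

Point the same patterns at ./gpu_log.txt (dropping the sample-file step) to summarize a real monitoring session.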
## Step 3:
### Install the Docker
```bash=
$ sudo apt-get remove docker docker-engine docker.io containerd runc
$ sudo apt-get update
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo apt-key fingerprint 0EBFCD88
```
```
Verify that the key fingerprint matches:
9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88
```bash=
$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
```
```
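The fingerprint check above can also be scripted instead of eyeballed. A sketch, where EXPECTED is Docker's published fingerprint from the step above and the `got` line is a sample of apt-key's output (on the real host you would capture it with `got=$(sudo apt-key fingerprint 0EBFCD88)`):

```shell
#!/bin/bash
# Sketch: non-interactive check of the Docker repository key fingerprint.
EXPECTED="9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88"
got="      9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88"   # sample apt-key output line
if echo "$got" | grep -qF "$EXPECTED"; then
    echo "fingerprint OK"
else
    echo "fingerprint MISMATCH - do not trust this key" >&2
    exit 1
fi
```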
## Step 4
### Install the Nvidia Container Toolkit
```bash=
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update
$ sudo apt-get install -y nvidia-container-toolkit
$ sudo usermod -aG docker $USER
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```
After rebooting (or logging out and back in so that the docker group change takes effect), run:
```bash=
$ docker run --gpus all nvidia/cuda:11.1-base nvidia-smi
```
If everything is working, you should see output like the picture below:

## Step 5:
#### (1) Download the docker image
```bash=
docker pull nvcr.io/nvidia/tensorrt:20.11-py3
```
* The docker image only needs to be pulled once
### Important step:
To run containers on both GPUs in parallel, change the device number and the --name parameter from trt_2011 to trt_2011_gpu0 and trt_2011_gpu1, as below. If you have more GPUs, extend the same procedure.
#### (2) Run the docker image on a specific GPU with e.g. --gpus '"device=0"' or --gpus '"device=1"'; device 0 is the first GPU, device 1 is the second
```bash=
docker run --gpus '"device=0"' -it --name trt_2011_gpu0 -w /workspace/tensorrt/data/resnet50/ nvcr.io/nvidia/tensorrt:20.11-py3
docker run --gpus '"device=1"' -it --name trt_2011_gpu1 -w /workspace/tensorrt/data/resnet50/ nvcr.io/nvidia/tensorrt:20.11-py3
```
* This system has two GPUs; --gpus '"device=0"' or --gpus '"device=1"' assigns GPU No.0 or No.1 to the container as its resource, while --gpus all assigns every GPU. For other questions, search for the keyword: docker run --gpus
* -it: -i (interactive) keeps STDIN open even when we are not attached to the container; -t allocates a pseudo-TTY for it
* --rm: the container is removed automatically when we leave it
* -v (volume): mounts a path so the host and the container can exchange files
* -w (workdir): the working directory you land in after entering the container
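The two docker run commands above can be generated for any number of GPUs. A sketch that only prints the commands for review instead of executing them (NUM_GPUS and the trt_2011_gpuN naming follow this SOP's two-GPU setup; adjust both for your machine):

```shell
#!/bin/bash
# Sketch: generate one "docker run" line per GPU so the same benchmark
# container can be launched on every device.
NUM_GPUS=2
IMAGE="nvcr.io/nvidia/tensorrt:20.11-py3"
cmds=()
for ((i=0; i<NUM_GPUS; i++)); do
    cmds+=("docker run --gpus '\"device=${i}\"' -it --name trt_2011_gpu${i} -w /workspace/tensorrt/data/resnet50/ ${IMAGE}")
done
# print the commands for review instead of executing them
printf '%s\n' "${cmds[@]}"
```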
Then, we are inside the tensorrt container.
#### (3) Install the necessary python tools and write the script inside the docker container
```bash=
# =====it is in tensorrt container=====
/opt/tensorrt/python/python_setup.sh
cd /workspace/tensorrt/data/resnet50
vi benchmark.sh
# =====it is in tensorrt container=====
```
in benchmark.sh (this script runs the benchmark once):
```bash=
#!/bin/bash
# =====it is in tensorrt container=====
echo "for int8 test, press 1; for fp16 test, press 2 : "
read testmode
if [ "${testmode}" -eq 1 ]; then
    /workspace/tensorrt/bin/trtexec --batch=128 --iterations=400 --workspace=1024 --percentile=99 --deploy=ResNet50_N2.prototxt --model=ResNet50_fp32.caffemodel --output=prob --int8
elif [ "${testmode}" -eq 2 ]; then
    /workspace/tensorrt/bin/trtexec --batch=128 --iterations=400 --workspace=1024 --percentile=99 --deploy=ResNet50_N2.prototxt --model=ResNet50_fp32.caffemodel --output=prob --fp16
else
    echo "wrong input !!!"
fi
# =====it is in tensorrt container=====
```
in benchmark.sh (this script runs int8 and then fp16, 1000 times each):
* you can change the bound in i<=1000 to run a different number of iterations.
```bash=
#!/bin/bash
# =====it is in tensorrt container=====
for ((i=1;i<=1000;i++))
do
/workspace/tensorrt/bin/trtexec --batch=128 --iterations=400 --workspace=1024 --percentile=99 --deploy=ResNet50_N2.prototxt --model=ResNet50_fp32.caffemodel --output=prob --int8
done
for ((i=1;i<=1000;i++))
do
/workspace/tensorrt/bin/trtexec --batch=128 --iterations=400 --workspace=1024 --percentile=99 --deploy=ResNet50_N2.prototxt --model=ResNet50_fp32.caffemodel --output=prob --fp16
done
# =====it is in tensorrt container=====
```
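trtexec reports its own per-run statistics, but across a 1000-run session it also helps to record when each run started and finished. A hedged wrapper sketch: BENCH_CMD and RUNS are placeholders (here a no-op and 3 so the sketch runs anywhere); inside the container you would substitute the real trtexec invocation and 1000.

```shell
#!/bin/bash
# Sketch: wrap each benchmark run with timestamped log lines so long sessions
# can be audited afterwards. BENCH_CMD defaults to the no-op "true" for
# illustration; replace it with the trtexec command from the script above.
BENCH_CMD="${BENCH_CMD:-true}"
RUNS="${RUNS:-3}"                # this SOP uses 1000
LOG=/tmp/bench_runs.log
: > "$LOG"
for ((i=1; i<=RUNS; i++)); do
    echo "run ${i} start: $(date '+%F %T')" >> "$LOG"
    ${BENCH_CMD}
    echo "run ${i} end:   $(date '+%F %T')" >> "$LOG"
done
echo "logged $(wc -l < "$LOG") lines to ${LOG}"
```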
After that, make the script executable and run it:
```bash=
# =====it is in tensorrt container=====
chmod +x ./benchmark.sh
./benchmark.sh
# =====it is in tensorrt container=====
```