# CVAT Server with Docker on Pop! OS 22.04
###### tags: `CVAT` `Docker` `Pop OS`

## Enable SSH
```
sudo apt update
```
The command above refreshes the package lists; enter your password if prompted. Next, install the OpenSSH server. If prompted with Y/n to continue, type Y and press Enter.
```
sudo apt install openssh-server
```
Next, allow SSH through the firewall with the following command. On some systems this rule is already present and does not need to be added manually.
```
sudo ufw allow ssh
```
**Starting the SSH service automatically on Linux**
Once installed, the SSH service starts automatically at boot. To verify that it started properly, run:
```
sudo systemctl status ssh
```
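With the service verified, key-based login is a common next step. A minimal sketch of generating a key pair on the client (the temp path and empty passphrase via `-N ''` are for illustration only; in practice use `~/.ssh/id_ed25519` and a passphrase):

```shell
# Generate an Ed25519 key pair at a temporary path (illustrative;
# use ~/.ssh/id_ed25519 on a real client)
keyfile=$(mktemp -u)
ssh-keygen -t ed25519 -f "$keyfile" -N '' -q

# The private key and the .pub public key now exist side by side
ls "$keyfile" "$keyfile.pub"
```

Then `ssh-copy-id -i "$keyfile.pub" user@server` installs the public key on the server.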
## Install Miniconda
[Miniconda download](https://repo.anaconda.com/miniconda/Miniconda3-py39_22.11.1-1-Linux-x86_64.sh)
```
bash Miniconda3-py39_22.11.1-1-Linux-x86_64.sh
```
## Install NVIDIA CUDA Toolkit
To install the CUDA toolkit, please run this command:
```
sudo apt install system76-cuda-latest
```
To install the cuDNN library, please run this command:
```
sudo apt install system76-cudnn-11.2
```
To verify installation, run this command after a ==reboot==:
```
nvcc -V
```
## Install Docker
```
sudo apt update
sudo apt install ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io -y
sudo systemctl status docker
```
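The `echo … | tee` step above composes the apt source line from `dpkg --print-architecture` and `lsb_release -cs`. A sketch of what it expands to on Pop!\_OS 22.04, which reports the Ubuntu `jammy` codename (values hardcoded here for illustration):

```shell
# Hardcoded stand-ins for $(dpkg --print-architecture) and $(lsb_release -cs)
arch=amd64
codename=jammy

# The line that ends up in /etc/apt/sources.list.d/docker.list
echo "deb [arch=${arch} signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu ${codename} stable"
```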
### Install Docker Compose
```
sudo apt-get update
sudo apt install docker-compose
```
### Install nvidia-docker2
A packaging conflict was introduced with Pop!_OS 21.04; to work around it, follow these steps:
```
distribution=ubuntu22.04 && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
```
https://github.com/NVIDIA/nvidia-docker/issues/1388
Add the following to `/etc/apt/preferences.d/nvidia-docker-pin-1002`:
```
Package: *
Pin: origin nvidia.github.io
Pin-Priority: 1002
```
https://github.com/pop-os/pop/issues/1708
Add the following to `/etc/apt/preferences.d/pop-default-settings`:
```
Package: *
Pin: origin nvidia.github.io
Pin-Priority: 1002
```
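Both pin files give packages from the nvidia.github.io repository a priority high enough to outrank Pop!\_OS's own repository pin. They can be written non-interactively with a heredoc; this sketch writes to a temporary path for illustration (on the real system, pipe through `sudo tee` into the two files under `/etc/apt/preferences.d/` named above):

```shell
# Write the pin stanza to a temp file (illustrative; the real targets are
# the preferences.d files above, written with sudo tee)
pin_file=$(mktemp)
cat > "$pin_file" <<'EOF'
Package: *
Pin: origin nvidia.github.io
Pin-Priority: 1002
EOF

# Show the resulting stanza
cat "$pin_file"
```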
```
sudo apt update
```
This will install all the dependencies:
```
sudo apt install nvidia-docker2
```
```
sudo systemctl restart docker
```
Test the installation:
`sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi`
```
Wed Dec 28 16:10:11 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.11 Driver Version: 525.60.11 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:22:00.0 On | N/A |
| 0% 42C P8 35W / 175W | 296MiB / 8192MiB | 3% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
```
## Install CVAT
You can use the Docker Compose config file to easily run the latest [CVAT release](https://hub.docker.com/r/cvat/server):
```
git clone https://github.com/cvat-ai/cvat
cd cvat
docker-compose up -d
docker exec -it cvat_server bash -ic 'python3 ~/manage.py createsuperuser'
```
### How to build CVAT images
To build the images yourself, include the docker-compose.dev.yml config file in the docker compose command. This is useful if you want to ==build CVAT with some source code changes==.
```
docker compose -f docker-compose.yml -f docker-compose.dev.yml build
```
### Change IP address
- #### ==If you change the IP address, you need to re-deploy the `nuclio` functions==
```
export CVAT_HOST=192.168.0.102
```
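`CVAT_HOST` only lives in the current shell session, so export it in the same shell that runs `docker-compose up`, or persist it in `~/.bashrc`. A quick sketch (the IP is the example address from above; 8080 is CVAT's default port):

```shell
# Set the host CVAT advertises to clients (example LAN address)
export CVAT_HOST=192.168.0.102

# The UI is then reachable on port 8080 of that address
echo "http://${CVAT_HOST}:8080"   # → http://192.168.0.102:8080
```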
### Semi-automatic and Automatic Annotation
Information about the installation of components needed for semi-automatic and automatic annotation.
> **⚠ WARNING: Do not use `docker compose up`.** If you did, make sure all containers are stopped with `docker compose down`.
- To bring up cvat with auto annotation tool, from cvat root directory, you need to run:
```
docker-compose -f docker-compose.yml -f components/serverless/docker-compose.serverless.yml up -d
```
If you made any changes to the Docker Compose files, make sure to add `--build` at the end.
To stop the containers, simply run:
```
docker-compose -f docker-compose.yml -f components/serverless/docker-compose.serverless.yml down
```
- You have to install the `nuctl` command line tool to build and deploy serverless functions. Download [version 1.8.14](https://github.com/nuclio/nuclio/releases/tag/1.8.14). It is important that the version you download matches the version in [docker-compose.serverless.yml](https://github.com/cvat-ai/cvat/blob/develop/components/serverless/docker-compose.serverless.yml). For example, using wget:
```
wget https://github.com/nuclio/nuclio/releases/download/1.8.14/nuctl-1.8.14-linux-amd64
```
After downloading `nuctl`, make it executable and create a symlink:
```
sudo chmod +x nuctl-1.8.14-linux-amd64
sudo ln -sf $(pwd)/nuctl-1.8.14-linux-amd64 /usr/local/bin/nuctl
```
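The `chmod +x` plus `ln -sf` pattern above just puts an executable `nuctl` on the `PATH` under its short name. A self-contained sketch of the same pattern using a stub script in a temp directory (the stub and its output are purely illustrative):

```shell
# Create a stub standing in for the downloaded nuctl binary
tmpdir=$(mktemp -d)
printf '#!/bin/sh\necho nuctl 1.8.14\n' > "$tmpdir/nuctl-1.8.14-linux-amd64"

# Make it executable and symlink it under the short name
chmod +x "$tmpdir/nuctl-1.8.14-linux-amd64"
ln -sf "$tmpdir/nuctl-1.8.14-linux-amd64" "$tmpdir/nuctl"

# Invoking the symlink runs the versioned binary
"$tmpdir/nuctl"   # → nuctl 1.8.14
```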
- Create `cvat` project inside nuclio dashboard where you will deploy new serverless functions and deploy a couple of DL models. Commands below should be run only after CVAT has been installed using `docker compose` because it runs nuclio dashboard which manages all serverless functions.
```
nuctl create project cvat
```
```
nuctl deploy --project-name cvat \
  --path serverless/openvino/dextr/nuclio \
  --volume `pwd`/serverless/common:/opt/nuclio/common \
  --platform local
```
```
nuctl deploy --project-name cvat \
  --path serverless/openvino/omz/public/yolo-v3-tf/nuclio \
  --volume `pwd`/serverless/common:/opt/nuclio/common \
  --platform local
```
**Note:**
- See [deploy_cpu.sh](https://github.com/cvat-ai/cvat/blob/develop/serverless/deploy_cpu.sh) for more examples.
- #### [GPU Support](https://opencv.github.io/cvat/docs/administration/advanced/installation_automatic_annotation/#gpu-support)
You will need to install the [Nvidia Container Toolkit](https://www.tensorflow.org/install/docker#gpu_support). You will also need to add `--resource-limit nvidia.com/gpu=1 --triggers '{"myHttpTrigger": {"maxWorkers": 1}}'` to the nuclio deployment command. You can increase `maxWorkers` if you have enough GPU memory. As an example, the following will run on the GPU:
```
nuctl deploy --project-name cvat \
  --path serverless/pytorch/ultralytics/yolov5/nuclio/ \
  --volume `pwd`/serverless/common:/opt/nuclio/common \
  --platform local --resource-limit nvidia.com/gpu=1
```
- #### Custom YOLOv5
Replace the `init_context` function in `nuclio/main.py` to load the custom model weights.
```
def init_context(context):
    context.logger.info("Init context... 0%")
    # Read the DL model
    model = torch.hub.load(
        '/opt/nuclio/yolov5',
        'custom',
        path='/opt/nuclio/best.pt',
        source='local')  # or yolov5m, yolov5l, yolov5x, custom
    context.user_data.model = model
    context.logger.info("Init context...100%")
```
Change the detected objects in `function.yaml`:
```
metadata:
  name: ultralytics-yolov5-TC3
  namespace: cvat
  annotations:
    name: YOLO v5 TC3
    type: detector
    framework: pytorch
    spec: |
      [
        { "id": 0, "name": "illegal" },
        { "id": 1, "name": "safety" },
        { "id": 2, "name": "cone" },
        { "id": 3, "name": "truck" },
        { "id": 4, "name": "chemical" },
        { "id": 5, "name": "filling" }
      ]
```
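The `spec` block is a JSON array embedded in YAML, so a malformed entry (a trailing comma, a missing quote) breaks deployment. It can be sanity-checked locally with Python's JSON parser before deploying, e.g. with a shortened two-label example:

```shell
# Validate a label spec with Python's JSON parser (illustrative subset of the
# spec above); json.tool exits non-zero on invalid JSON
printf '%s' '[{ "id": 0, "name": "illegal" }, { "id": 1, "name": "safety" }]' \
  | python3 -m json.tool > /dev/null && echo "spec OK"   # → spec OK
```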
### Connect a file share
- To stop the containers, simply run:
```
docker-compose -f docker-compose.yml -f components/serverless/docker-compose.serverless.yml down
```
- Add a `docker-compose.override.yml` file:
```
version: '3.3'
services:
  cvat_server:
    volumes:
      - cvat_share:/home/django/share:ro
  cvat_worker_default:
    volumes:
      - cvat_share:/home/django/share:ro
volumes:
  cvat_share:
    driver_opts:
      type: none
      device: /mnt/Data/share
      o: bind
```
```
chmod -R 755 /mnt/Data/share
```
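The bind-mounted directory must exist and be readable before the containers start. A sketch of the create-and-verify steps, using a temp path for illustration (the real path in this setup is `/mnt/Data/share`):

```shell
# Create the share directory (temp path stands in for /mnt/Data/share)
share_dir=$(mktemp -d)/share
mkdir -p "$share_dir"

# Open it up for reading, then verify the permission bits
chmod -R 755 "$share_dir"
stat -c '%a' "$share_dir"   # → 755
```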
- Start the containers with `docker-compose.override.yml`
```
docker-compose \
-f docker-compose.yml \
-f docker-compose.override.yml \
-f components/serverless/docker-compose.serverless.yml up -d
```
### Bug Fixes
- ### siammask
In `serverless/pytorch/foolwood/siammask/nuclio/function.yaml`
`Miniconda3-latest-Linux-x86_64.sh` needs to be run with `bash`:
```
- kind: RUN
  value: wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh &&
    chmod +x Miniconda3-latest-Linux-x86_64.sh
- kind: RUN
  value: bash ./Miniconda3-latest-Linux-x86_64.sh -b &&
    rm -f Miniconda3-latest-Linux-x86_64.sh
```