[TOC]

## Basic running example

1. Clone [the repository](https://github.com/okdimok/gpu_compose_example) to the **local** (i.e. not mounted) file system.

   ```
   git clone https://github.com/okdimok/gpu_compose_example.git
   ```

2. Build:

   ```
   docker compose -f gpu_compose_example/main-example-compose/docker-compose.yml build
   ```

3. Run:

   ```
   docker compose -f gpu_compose_example/main-example-compose/docker-compose.yml up
   ```

4. Connect to port **8000** on the host, as specified in the Jupyter command.

A minimal sketch of how such a compose file requests a GPU is given at the end of this page.

## Simpler Example Without Compose

```bash
docker run --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 \
    --gpus='"device=0"' \
    --volume $HOME:$HOME --volume $HOME:/root \
    --network host \
    --shm-size=4gb \
    --name inference_calculator \
    --tmpfs /tmp \
    nvcr.io/nvidia/pytorch:23.10-py3
```

A quick check that the running container actually sees the GPU is shown at the end of this page.

## Additional Examples

[Train Yolo V4 in TAO](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/resources/cv_samples/version/v1.4.1/files/yolo_v4/yolo_v4.ipynb)
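
## Compose GPU Configuration (Sketch)

The repository's actual `docker-compose.yml` is not reproduced here. The following is only a minimal sketch of how a Compose service can reserve a GPU via `deploy.resources.reservations.devices`; the service name `jupyter`, the port mapping, and the `jupyter lab` command are illustrative assumptions and may differ from the files in `main-example-compose`.

```yaml
# Minimal sketch, not the repository's actual file.
# Service name, port mapping, and command are illustrative assumptions.
services:
  jupyter:
    image: nvcr.io/nvidia/pytorch:23.10-py3
    command: jupyter lab --ip=0.0.0.0 --port=8000 --allow-root
    ports:
      - "8000:8000"
    shm_size: "4gb"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia        # use the NVIDIA container runtime
              device_ids: ["0"]     # reserve GPU 0 only
              capabilities: [gpu]
```

With a block like this in place, `docker compose up` requests GPU 0 from the NVIDIA container runtime, which is the Compose counterpart of `--gpus='"device=0"'` in the `docker run` example above.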
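
## Checking GPU Access

Whichever way the container was started, you can confirm from another terminal that it actually sees the GPU. The container name `inference_calculator` below matches the `docker run` example; for the Compose case, find the container name with `docker compose ps` first.

```bash
# Confirm the NVIDIA driver and the reserved GPU are visible inside the container.
docker exec -it inference_calculator nvidia-smi

# The NGC PyTorch image ships with torch; confirm CUDA is actually usable.
docker exec -it inference_calculator python -c "import torch; print(torch.cuda.is_available())"
```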