# Access an RTX Graphics Card in a `systemd-nspawn` Chroot Environment (Ubuntu)
> Author: Junner
> Date: 2025/5/13
> Last Update: 2025/9/29
I need to run an LLM in a chroot container, but running a language model on the CPU is not a smart idea. So here I'm going to show you how to access an RTX graphics card inside the environment.
## 1. Bind the Nvidia Device Nodes
```
mkdir -p /etc/systemd/nspawn/
cd /etc/systemd/nspawn/
```
```
nano agents.nspawn
```
Add these lines to bind the Nvidia device nodes into the container:
```
[Files]
Bind=/dev/nvidia0
Bind=/dev/nvidiactl
Bind=/dev/nvidia-modeset
Bind=/dev/nvidia-uvm
Bind=/dev/nvidia-uvm-tools
```
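Before binding, make sure these device nodes actually exist on the host. `/dev/nvidia-uvm` in particular only appears once the `nvidia_uvm` module is loaded; if it is missing, the `nvidia-modprobe` tool shipped with the driver can usually create it (flag usage may vary slightly between driver versions):
```
ls -l /dev/nvidia*
# if /dev/nvidia-uvm is missing, load nvidia_uvm and create the device nodes
sudo nvidia-modprobe -u -c=0
```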
You can also add this so the container uses the host network directly instead of a private `--network-veth` link:
```
[Network]
Private=no
```
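Both sections go in the same file, so for reference the finished `agents.nspawn` from this step looks like this (the `[Network]` part is optional):
```
[Files]
Bind=/dev/nvidia0
Bind=/dev/nvidiactl
Bind=/dev/nvidia-modeset
Bind=/dev/nvidia-uvm
Bind=/dev/nvidia-uvm-tools

[Network]
Private=no
```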
## 2. Configure the DeviceAllow Drop-in Setting
Make a drop-in configuration. Drop-in directories must be named `<unit>.d`, so here it is `systemd-nspawn@agents.service.d`.
```
mkdir -p /etc/systemd/system/systemd-nspawn@agents.service.d/
cd /etc/systemd/system/systemd-nspawn@agents.service.d/
```
I recommend naming it `50-<name>.conf`, here `50-DeviceAllow.conf`. The numeric prefix only determines the read order: drop-ins are parsed in lexical order, and if two files set the same option, the one with the **larger** number (read later) overrides the earlier one. That is also why you see conventions like `10-logging.conf` for early defaults and `99-local.conf` for final local overrides.
```
nano 50-DeviceAllow.conf
```
Add these lines to allow the unit to access the bound device nodes:
```
[Service]
DeviceAllow=/dev/nvidia0 rwm
DeviceAllow=/dev/nvidiactl rwm
DeviceAllow=/dev/nvidia-modeset rwm
DeviceAllow=/dev/nvidia-uvm rwm
DeviceAllow=/dev/nvidia-uvm-tools rwm
```
Don't forget to reload the unit files.
```
sudo systemctl daemon-reload
```
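To double-check that the drop-in is picked up, you can inspect the unit; these commands should show `50-DeviceAllow.conf` and the merged `DeviceAllow=` entries:
```
systemctl cat systemd-nspawn@agents.service
systemctl show systemd-nspawn@agents.service -p DeviceAllow
```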
Now we can restart the service with
```
sudo machinectl terminate agents
sudo systemctl start systemd-nspawn@agents
```
Check the status
```
sudo systemctl status systemd-nspawn@agents
```
If the status output shows the `50-DeviceAllow.conf` drop-in and the unit is active, it worked:
```
    Drop-In: /etc/systemd/system/systemd-nspawn@agents.service.d
             └─50-DeviceAllow.conf
     Active: active (running) since Sun 2025-05-11 10:44:26 CST; 6s ago
```
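You can also verify from the host that the device nodes are visible inside the container (the machine name here is `agents`; the glob is spelled out because `machinectl shell` does not expand wildcards):
```
sudo machinectl shell agents /bin/ls -l /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm
```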
## 3. GPU Driver Installation
On the host, run `nvidia-smi` to see which driver version is installed.
```
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.133.07      Driver Version: 570.133.07      CUDA Version: 12.8           |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name            Persistence-M      | Bus-Id        Disp.A   | Volatile Uncorr. ECC |
| Fan  Temp  Perf      Pwr:Usage/Cap      | Memory-Usage           | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4090        Off | 00000000:01:00.0   Off |                  Off |
|  0%  38C  P8              19W / 450W    | 112MiB / 24564MiB      |     0%       Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes:                                                                               |
|  GPU   GI   CI   PID   Type   Process name                                   GPU Memory |
|        ID   ID                                                                    Usage |
|==========================================================================================|
|    0   N/A  N/A  3219   G     /usr/lib/xorg/Xorg                                  70MiB |
|    0   N/A  N/A  3362   G     /usr/bin/gnome-shell                                13MiB |
+-----------------------------------------------------------------------------------------+
```
You can see the version is `570.133.07`. We need to install the **same** version inside the container, because the user-space driver libraries must match the kernel module loaded on the host.
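If you only want the bare version string (handy for scripting the download URL), `nvidia-smi` can print it directly:
```
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```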
Go to the official Nvidia driver download page and find your version. Mine was:
```
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/570.133.07/NVIDIA-Linux-x86_64-570.133.07.run
```
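One convenient way to get the installer from the host into the container is `machinectl copy-to` (this assumes the container is registered as the machine `agents`; any other copy method works too):
```
sudo machinectl copy-to agents ./NVIDIA-Linux-x86_64-570.133.07.run /root/
```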
Copy it into your container (for example with `machinectl copy-to` as shown above), enter the container, and make the installer executable.
```
sudo chmod +x NVIDIA-Linux-x86_64-570.133.07.run
```
Run it with `--no-kernel-module`, because the container shares the host's kernel and its already-loaded kernel module. **Do not** reinstall the kernel module.
```
sudo ./NVIDIA-Linux-x86_64-570.133.07.run --no-kernel-module
```
You will be asked which kernel module type to use; choose `NVIDIA Proprietary` rather than `MIT/GPL`.
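If you want to run the installer unattended, it has flags for most of these prompts; the exact set varies by driver version, so list the advanced options for your `.run` file first:
```
# print the installer's advanced options for this driver version
sudo ./NVIDIA-Linux-x86_64-570.133.07.run -A
```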
After installation, test whether the GPU is actually usable. You should now be able to access the device from inside the container.
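Run `nvidia-smi` inside the container; it should report the same driver version as the host:
```
nvidia-smi
```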
```
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.133.07      Driver Version: 570.133.07      CUDA Version: 12.8           |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name            Persistence-M      | Bus-Id        Disp.A   | Volatile Uncorr. ECC |
| Fan  Temp  Perf      Pwr:Usage/Cap      | Memory-Usage           | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4090        Off | 00000000:01:00.0   Off |                  Off |
|  0%  38C  P8              19W / 450W    | 113MiB / 24564MiB      |     0%       Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes:                                                                               |
|  GPU   GI   CI   PID   Type   Process name                                   GPU Memory |
|        ID   ID                                                                    Usage |
|==========================================================================================|
|  No running processes found                                                              |
+-----------------------------------------------------------------------------------------+
```