---
title: RHSTACK-Torch
---

# PyTorch Community Workshop Guide (Fedora 43 Container)

> Beginner-friendly guide for testing **PyTorch nightly builds** in a container environment.  
No prior container or PyTorch experience required.

## 1. Create a Red Hat Account

1. Open: https://accounts.redhat.com/register  
2. Fill in your details  
3. Verify your email and log in

✅ Done — this gives you access to community collaboration tools.

## 2. Create a Quay.io Account

1. Open: https://quay.io
2. Click **Sign Up**
3. Choose **Sign in with Red Hat**
4. Approve access

✅ Done — you now have a container registry account.

## 3. Pull the PyTorch Test Container

There are **two variants**:

| Tag | Description | Hardware Needed |
|---|---|---|
| `latest` | CPU-only, works everywhere | None |
| `cuda` | GPU-enabled (nightly CUDA build) | NVIDIA GPU + driver |

### CPU (Works Everywhere)
```bash
docker pull quay.io/sumantro/rhstack-community-torch/pytorch-fedora43:latest
# OR
podman pull quay.io/sumantro/rhstack-community-torch/pytorch-fedora43:latest
```

### CUDA (Requires NVIDIA GPU + `nvidia-container-toolkit`)
```bash
podman pull quay.io/sumantro/rhstack-community-torch/pytorch-fedora43:cuda
```
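
If you are unsure which variant applies to your machine, a small shell sketch like the one below can pick a tag from the table above based on whether the NVIDIA driver tools are visible. This is our convenience snippet, not part of the workshop tooling:

```shell
#!/usr/bin/env bash
# Pick the image tag: "cuda" if an NVIDIA driver is visible, else "latest".
if command -v nvidia-smi >/dev/null 2>&1; then
  TAG=cuda
else
  TAG=latest
fi
echo "quay.io/sumantro/rhstack-community-torch/pytorch-fedora43:$TAG"
```

Once the printed reference matches what you expect for your hardware, paste it into `podman pull` (or `docker pull`) as shown above.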

## 4. Create a Directory to Store Test Logs

```bash
mkdir -p results
chmod a+rwx results
```

✅ Logs stored here **will persist outside** the container.

## 5. Run the Container

### CPU Container
```bash
podman run -it --rm \
  -v "$(pwd)/results:/results:Z,U" \
  quay.io/sumantro/rhstack-community-torch/pytorch-fedora43:latest \
  bash
```

### CUDA Container
```bash
docker run -it --rm --gpus all \
  --ipc=host --shm-size=8g \
  -v "$(pwd)/results:/results" \
  quay.io/sumantro/rhstack-community-torch/pytorch-fedora43:cuda \
  bash
```

Once inside, the prompt should look like this:
```
tester@container:/workspace$
```

## 6. Run Tests Inside the Container

Navigate to the PyTorch source tree included in the image:
```bash
cd /opt/pytorch
```

Run core tests and **save logs**:

```bash
pytest -vv \
  test/test_nn.py \
  test/test_torch.py \
  test/test_ops.py \
  test/test_unary_ufuncs.py \
  test/test_binary_ufuncs.py \
  test/test_autograd.py \
  --junitxml=/results/test-results.xml \
  2>&1 | tee /results/test-output.log
```

### Log Files Produced (on your **host**):
```
./results/test-output.log
./results/test-results.xml
```
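
If you want a quick pass/fail summary without reading the XML by hand, a short Python sketch like this can total the counts pytest records in the JUnit report. The `summarize` helper and the report path are our assumptions, not workshop tooling:

```python
# Sketch: total up test counts from pytest's JUnit XML report.
import os
import xml.etree.ElementTree as ET

def summarize(junit_xml: str) -> dict:
    """Return aggregate tests/failures/errors/skipped counts."""
    root = ET.fromstring(junit_xml)
    totals = {"tests": 0, "failures": 0, "errors": 0, "skipped": 0}
    # pytest nests one or more <testsuite> elements under <testsuites>.
    for suite in root.iter("testsuite"):
        for key in totals:
            totals[key] += int(suite.get(key, 0))
    return totals

if __name__ == "__main__":
    report = "results/test-results.xml"
    if os.path.exists(report):
        with open(report) as fh:
            print(summarize(fh.read()))
```

Run it from the host directory that contains `results/` after the test run finishes.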

## 7. Check Environment Info (Optional)
```bash
python - <<EOF
import torch
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA runtime:", torch.version.cuda)  # None on CPU-only builds
EOF
```
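
To keep a copy of this information alongside the test logs, a sketch like the one below builds the same report as a string and writes it to the shared `/results` mount. The `env_report` helper is our naming, not PyTorch API; note that `torch.version.cuda` is `None` on CPU-only builds:

```python
# Sketch: build the environment report as a string so it can be saved
# next to the test logs. env_report() is our helper name, not PyTorch API.
import os
import platform

def env_report() -> str:
    lines = [f"Python: {platform.python_version()}"]
    try:
        import torch
        lines.append(f"PyTorch: {torch.__version__}")
        lines.append(f"CUDA available: {torch.cuda.is_available()}")
        # torch.version.cuda is None on CPU-only builds.
        lines.append(f"CUDA runtime: {torch.version.cuda or 'CPU-only build'}")
    except ImportError:
        lines.append("PyTorch: not installed")
    return "\n".join(lines)

if __name__ == "__main__":
    report = env_report()
    print(report)
    if os.path.isdir("/results"):  # only exists inside the container
        with open("/results/env-info.log", "w") as fh:
            fh.write(report + "\n")
```

Run it inside the container; the resulting `env-info.log` lands in the same host `results/` directory as the pytest logs.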
