# Real ESRGAN Application Deployment
This simply documents what I did when deploying *Real-ESRGAN* to my WSL distro.
> Reference:
> * GitHub Repository
> https://github.com/xinntao/Real-ESRGAN
> * Real-ESRGAN super-resolution algorithm results (Chinese)
> https://blog.csdn.net/bornfree5511/article/details/141547609
# Deploying Hardware
## Hardware List
**CPU** : *E5-2696v3*
**GPUs** :
* *Nvidia Quadro T400*
* *Nvidia Tesla P40*
**RAM** : *non-ECC DDR4-2666, 8 × 16GB = 128GB (running at 2133)*
**MB** : ASUS X99-AII
**Disks** :
* *Team T-FORCE Z44A7Q 2TB M.2 PCIe Gen4*
* *MSI SPATIUM M480 PRO 4TB Gen4 PCIe SSD*
**Power** : *Seasonic X-series 1050W*
## Important Notes
### CPU
The CPU is a very cheap (as of 2025/07/20) old Xeon released in 2015. The E5-2696 v3 is the OEM version of the E5-2699 v3.
```bash
1[| 2.0%] 10[ 0.0%] 19[| 2.6%] 28[|||| 10.0%]
2[|||||| 16.4%] 11[||| 4.6%] 20[ 0.0%] 29[| 2.6%]
3[||||||||||| 30.9%] 12[|| 3.3%] 21[||| 7.9%] 30[ 0.0%]
4[| 1.3%] 13[|| 3.3%] 22[ 0.0%] 31[|| 5.3%]
5[|||||||| 19.6%] 14[|| 3.9%] 23[| 0.7%] 32[|| 3.9%]
6[| 2.6%] 15[||||| 14.6%] 24[| 2.6%] 33[| 2.6%]
7[|||| 9.3%] 16[| 3.3%] 25[| 2.0%] 34[ 0.0%]
8[| 2.6%] 17[|||| 7.3%] 26[|| 3.3%] 35[|||| 8.6%]
9[|||||||||||||||||||||||||80.5%] 18[||||| 13.8%] 27[|||| 9.9%] 36[|| 3.3%]
Mem[|||||| 5.21G/110G] Tasks: 33, 413 thr; 3 running
Swp[ 0K/0K] Load average: 2.08 2.46 2.56
Uptime: 21:04:59
```
### GPU
I use the Nvidia Tesla P40 for the computation. Note that despite its 24GB of VRAM, its compute power is far lower than that of my main server's dual Nvidia RTX A5000 with NVLink.
```bash
Every 2.0s: nvidia-smi DESKTOP-4O8R2UE: Tue Jul 22 17:59:05 2025
Tue Jul 22 17:59:05 2025
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.147.01 Driver Version: 529.19 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA T400 On | 00000000:01:00.0 Off | N/A |
| 38% 47C P8 N/A / 31W | 503MiB / 2048MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 Tesla P40 On | 00000000:03:00.0 Off | Off |
| N/A 54C P0 72W / 250W | 975MiB / 24576MiB | 24% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 25430 C /python3.10 N/A |
| 1 N/A N/A 25430 C /python3.10 N/A |
+-----------------------------------------------------------------------------+
```
Apparently, the GPU is not fully utilized when running `realesr-animevideov3`; with other models, e.g. `RealESRGAN_x4plus_anime_6B`, it can be fully utilized.
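If the lightweight `realesr-animevideov3` model leaves the P40 partly idle, one knob worth trying is `--num_process_per_gpu` from the script's help page (shown further below). I have not benchmarked it on this box, so treat this as a sketch rather than a recommendation:
```bash
# Untested sketch: run two worker processes on the single visible GPU to raise
# utilization with the lightweight realesr-animevideov3 model.
python inference_realesrgan_video.py \
    -i "../inputs/1.mp4" -o "../HR" \
    -n realesr-animevideov3 \
    --num_process_per_gpu 2
```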
### Disk
# Environment Setting
## WSL Configuration
For a WSL system, the following lines should be added to `.bashrc`.
```bash
# Put the CUDA toolkit and the WSL-provided driver libraries on PATH
export PATH="/usr/local/cuda/bin:$PATH"
export PATH="/usr/lib/wsl/lib:$PATH"
# Library search paths for the CUDA runtime and the WSL libcuda shim
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH="/usr/lib/wsl/lib:$LD_LIBRARY_PATH"
export CUDA_HOME="/usr/local/cuda"
# Point Numba at the WSL CUDA driver explicitly
export NUMBA_CUDA_DRIVER="/usr/lib/wsl/lib/libcuda.so.1"
# Make the P40 the default adapter for Mesa's D3D12 backend
export MESA_D3D12_DEFAULT_ADAPTER_NAME="NVIDIA Tesla P40"
# Expose only one CUDA device (see the note below)
export CUDA_VISIBLE_DEVICES=0
```
Notice that I have to set `CUDA_VISIBLE_DEVICES` since I don't want the Quadro T400 to be seen inside WSL. By default CUDA enumerates devices fastest-first, so the P40 is CUDA device 0 even though `nvidia-smi` lists it as GPU 1; set `CUDA_DEVICE_ORDER=PCI_BUS_ID` if you want the two orderings to match.
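A quick way to confirm which device is actually visible (assuming PyTorch is already installed in the venv):
```bash
# Hedged check: with CUDA_VISIBLE_DEVICES=0, PyTorch should report exactly one
# device, and its name should be the Tesla P40 rather than the T400.
python3 - <<'EOF'
import torch
print("visible devices:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))
EOF
```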
In order to use a lower version of Python, I use `Ubuntu 22.04` for my WSL distro. That is the merit of WSL: you can keep multiple distros running at once, and they share resources without statically slicing the system hardware.
## Python Configuration
Notice that the Python version should not be too high, and the PyTorch version has to be kept lower even though the requirements do not specify an upper bound. The following package versions are what I found need to be pinned relative to a default install on `Ubuntu 22.04` (an install sketch follows the list):
* `Python 3.10.12`
* `Pytorch 2.0.1`
* `torchvision 0.15.2`
* `numpy 1.26.4`
* `ffmpeg-python 0.2.0`
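A minimal sketch of pinning those versions in a fresh venv. The venv path mirrors the one in the logs below, and the CUDA 11.8 wheel index is my assumption (the Pascal-era P40 is supported by those wheels); adjust both to your setup:
```bash
# Sketch only: pin the versions listed above inside a dedicated venv.
# The cu118 wheel index is an assumption; change it to match your CUDA install.
python3 -m venv ~/Real_ESRGAN_venv/.venv
source ~/Real_ESRGAN_venv/.venv/bin/activate
pip install torch==2.0.1 torchvision==0.15.2 \
    --index-url https://download.pytorch.org/whl/cu118
pip install numpy==1.26.4 ffmpeg-python==0.2.0
```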
# Usage & Outputs
## Help page of the Python Script
```bash
$ python inference_realesrgan_video.py -h
/home/erebus/Real_ESRGAN_venv/.venv/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
warnings.warn(
usage: inference_realesrgan_video.py [-h] [-i INPUT] [-n MODEL_NAME] [-o OUTPUT] [-dn DENOISE_STRENGTH]
[-s OUTSCALE] [--suffix SUFFIX] [-t TILE] [--tile_pad TILE_PAD]
[--pre_pad PRE_PAD] [--face_enhance] [--fp32] [--fps FPS]
[--ffmpeg_bin FFMPEG_BIN] [--extract_frame_first]
[--num_process_per_gpu NUM_PROCESS_PER_GPU]
[--alpha_upsampler ALPHA_UPSAMPLER] [--ext EXT]
options:
-h, --help show this help message and exit
-i INPUT, --input INPUT
Input video, image or folder
-n MODEL_NAME, --model_name MODEL_NAME
Model names: realesr-animevideov3 | RealESRGAN_x4plus_anime_6B | RealESRGAN_x4plus |
RealESRNet_x4plus | RealESRGAN_x2plus | realesr-general-x4v3Default:realesr-
animevideov3
-o OUTPUT, --output OUTPUT
Output folder
-dn DENOISE_STRENGTH, --denoise_strength DENOISE_STRENGTH
Denoise strength. 0 for weak denoise (keep noise), 1 for strong denoise ability. Only
used for the realesr-general-x4v3 model
-s OUTSCALE, --outscale OUTSCALE
The final upsampling scale of the image
--suffix SUFFIX Suffix of the restored video
-t TILE, --tile TILE Tile size, 0 for no tile during testing
--tile_pad TILE_PAD Tile padding
--pre_pad PRE_PAD Pre padding size at each border
--face_enhance Use GFPGAN to enhance face
--fp32 Use fp32 precision during inference. Default: fp16 (half precision).
--fps FPS FPS of the output video
--ffmpeg_bin FFMPEG_BIN
The path to ffmpeg
--extract_frame_first
--num_process_per_gpu NUM_PROCESS_PER_GPU
--alpha_upsampler ALPHA_UPSAMPLER
The upsampler for the alpha channels. Options: realesrgan | bicubic
--ext EXT Image extension. Options: auto | jpg | png, auto means using the same extension as
inputs
```
## Sample Output #1
```bash
python inference_realesrgan_video.py -i "../inputs/1.mp4" -o "../HR/1.mp4" -n RealESRGAN_x4plus_anime_6B
inference: 44544frame [4:32:00, 2.73frame/s]
```
## Sample Output #2
```bash
$ python inference_realesrgan_video.py -i "../inputs/2.mp4" -o "../HR/2.mp4" -n realesr-animevideov3
/home/erebus/Real_ESRGAN_venv/.venv/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
warnings.warn(
Downloading: "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth" to /home/erebus/Real_ESRGAN_venv/Real-ESRGAN/weights/realesr-animevideov3.pth
100%|███████████████████████████████████████████████████████████████████████| 2.39M/2.39M [00:00<00:00, 8.34MB/s]
inference: 44540frame [1:28:06, 8.43frame/s]
```
## Summary
Apparently, different models consume different amounts of VRAM and drive different levels of GPU utilization. In the samples above, `RealESRGAN_x4plus_anime_6B` ran at about 2.7 frames/s while `realesr-animevideov3` reached about 8.4 frames/s on clips of similar length.
# Python Script for Running with a Batch of Files in a Folder
[Github Repository](https://github.com/Chen-KaiTsai/Batch-Real-ESRGAN-Video-with-Other-CV-Tools)
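The gist of that repo is looping the inference script over every file in a folder. A minimal shell equivalent (paths and model choice are illustrative, not the repo's actual code) would be:
```bash
# Illustrative only: upscale every mp4 in ../inputs and write results to ../HR.
mkdir -p ../HR
for f in ../inputs/*.mp4; do
    python inference_realesrgan_video.py \
        -i "$f" -o "../HR" -n realesr-animevideov3
done
```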