###### tags: `software` `usage guide`
# GROMACS quick usage guide
## Description
GROMACS is a versatile, community-driven package for performing molecular dynamics, i.e. simulating the Newtonian equations of motion for systems with hundreds to millions of particles.
## Installation Status
| Feature | Taiwania2 |
| :-------------: | :-------------: |
| Native GPU | 2023.4<br>$$\surd$$ |
| GPU container | 2023.2<br>$$\surd$$ |
$$\surd \text{ : Tested} $$
$$\triangle \text{ : Not Ready} $$
$$\star \text{ : Untested} $$
$$\times \text{ : Not support} $$
## Installation Path
### Taiwania2
```/opt/ohpc/pkg/gromacs```
## Basic usage
### Native app (no longer updated)
```
source setgromacs_2018
mpirun gmx_mpi mdrun -v ${YOUR_INPUT}
```
### container
Use Singularity to run the application in a container on the HPC environment.
```
module load singularity
singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd ${GROMACS_SIF} gmx mdrun -v ${YOUR_INPUT}
```
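Before submitting a job, you can do a quick sanity check that the container works and see which GROMACS version it provides (a minimal check; `GROMACS_SIF` must already point to an existing image, e.g. the one under the installation path above):
```
# Print the GROMACS version from inside the container
module load singularity
export GROMACS_SIF=/opt/ohpc/pkg/gromacs/container/gromacs-2022.1.sif
singularity exec ${GROMACS_SIF} gmx --version
```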
## Step-by-Step Usage
### Single Node Single GPU (Taiwania2)
* #### STEP0a
Pull the image into your own folder
```
module load singularity
singularity build GROMACS_2022.1.sif docker://nvcr.io/hpc/gromacs:2022.1
export GROMACS_SIF=${PWD}/GROMACS_2022.1.sif
```
* #### STEP0b (Alternative)
Use the image in the pkg folder
```
export GROMACS_SIF=/opt/ohpc/pkg/gromacs/container/gromacs-2022.1.sif
```
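Either way, it can help to verify that `GROMACS_SIF` points to a usable image before continuing (optional check):
```
# Confirm the image file exists and show its metadata
ls -lh ${GROMACS_SIF}
singularity inspect ${GROMACS_SIF}
```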
* #### STEP1 Copy the submission script
```
export CWD=`pwd`
cp /opt/ohpc/pkg/gromacs/container/gromacs_snsg.sh $CWD
```
* #### STEP2 Modify the submission script and fill in your project account
```
sed -i 's/JOB_ACCOUNT/XXXXXX/' gromacs_snsg.sh
```
Replace XXXXXX with your project account,
or edit `gromacs_snsg.sh` in your favorite editor and modify it there.
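To confirm the substitution worked, check the `#SBATCH -A` line afterwards (it should now show your project account instead of JOB_ACCOUNT):
```
grep "SBATCH -A" gromacs_snsg.sh
```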
* #### STEP3 Prepare your GROMACS input files - run a simple testcase
* ##### STEP3-1 Download the testcase from the GROMACS homepage
```
wget -c ftp://ftp.gromacs.org/pub/benchmarks/water_GMX50_bare.tar.gz
```
* ##### STEP3-2 Extract the archive
```tar -xvf water_GMX50_bare.tar.gz ```
* ##### STEP3-3 (Alternative) Copy a simple example from the pkg folder to the current folder
```
cp -r /opt/ohpc/pkg/gromacs/example/water-cut1.0_GMX50_bare ./
```
* ##### STEP3-4 Choose one testcase
```cd water-cut1.0_GMX50_bare/1536/```
* ##### STEP3-5 Generate the run input file (.tpr) for the benchmark data
```singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd ${GROMACS_SIF} gmx grompp -f pme.mdp```
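`gmx grompp` writes the portable run input to `topol.tpr` by default, so you can quickly confirm it was created:
```
# grompp should have produced topol.tpr in the current folder
ls -lh topol.tpr
```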
* #### STEP4 Submit the job and execute the benchmark
* Prepare your submission script and modify the execution part as follows.
```
singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd ${GROMACS_SIF} \
gmx mdrun \
-ntmpi ${GPU_COUNT} \
-nb gpu \
-ntomp ${OMP_NUM_THREADS} \
-pin on \
-v \
-noconfout \
-nsteps 5000 \
-s topol.tpr
```
* ```singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd ${GROMACS_SIF} ``` : don't modify it
* ```gmx mdrun``` : modify it if you want to run a different task
* ```-ntmpi ${GPU_COUNT}``` : don't modify it; it follows --gres=gpu:1 (see the expanded example after this list)
* ```-nb gpu``` : tell mdrun to compute non-bonded interactions on the GPU
* ```-ntomp ${OMP_NUM_THREADS}``` : don't modify it; it follows --cpus-per-task=4
* ```-pin on``` : tell mdrun to pin its threads to CPU cores
* ```-noconfout``` : tell mdrun not to write the final output configuration
* ```-nsteps 5000 ``` : set the maximum number of MD steps
* ```-s topol.tpr ``` : specify the tpr file (generated in STEP3-5)
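For reference, with `--gres=gpu:1` and `--cpus-per-task=4` the variables resolve so that the command is roughly equivalent to the following (illustrative only; the actual values are filled in by the submission script):
```
singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd ${GROMACS_SIF} \
gmx mdrun -ntmpi 1 -nb gpu -ntomp 4 -pin on -v -noconfout -nsteps 5000 -s topol.tpr
```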
Then you can submit your job via
```sbatch gromacs_snsg.sh```
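After submission, you can monitor the job and follow its output with standard SLURM/shell commands:
```
# Check the job status, then follow the output file as it is written
squeue -u $USER
tail -f gromacs_output.txt
```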
Here is the submission script:
#### gromacs_snsg.sh
```
#!/bin/bash
#SBATCH -J GROMACS_Singlenode_SingleGPU_Job
#SBATCH -A JOB_ACCOUNT
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:1
#SBATCH --cpus-per-task=4
#SBATCH --time=00:30:00
#SBATCH --output=gromacs_output.txt
#module load
module load singularity
# Initialize the path
export APP_ROOT=$PWD
cd $APP_ROOT
# Script arguments
GPU_COUNT=`echo $SLURM_JOB_GPUS | tr "," " " | wc -w`
GROMACS_SIF=/opt/ohpc/pkg/gromacs/container/gromacs-2022.1.sif
# Set number of OpenMP threads
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
# Singularity will mount the host PWD to /host_pwd in the container
SINGULARITY="singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd ${GROMACS_SIF}"
# Prepare benchmark data
${SINGULARITY} gmx grompp -f pme.mdp
# Run benchmark
${SINGULARITY} gmx mdrun \
-ntmpi ${GPU_COUNT} \
-nb gpu \
-ntomp ${OMP_NUM_THREADS} \
-pin on \
-v \
-noconfout \
-nsteps 5000 \
-s topol.tpr
```
### Single Node Multiple GPUs (Taiwania2)
The SLURM directives for multi-GPU requests are `--ntasks=#` and `--gres=gpu:#`.
Replace # with the number of GPUs (1 < # < 7). The rest is the same as the single-GPU version.
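The submission script derives the GPU count from the `SLURM_JOB_GPUS` variable, so the `mdrun` options adapt automatically to the number of GPUs you request. As an illustration (hypothetical value; SLURM sets this variable inside the job):
```
# If SLURM assigns GPUs 0 and 1, SLURM_JOB_GPUS is "0,1" and GPU_COUNT becomes 2
echo "0,1" | tr "," " " | wc -w
```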
Here is the submission script:
#### gromacs_snmg.sh
```
#!/bin/bash
#SBATCH -J GROMACS_Singlenode_MultipleGPU_Job
#SBATCH -A JOB_ACCOUNT
#SBATCH --nodes=1
#SBATCH --ntasks=2
#SBATCH --ntasks-per-node=2
#SBATCH --gres=gpu:2
#SBATCH --cpus-per-task=4
#SBATCH --time=00:30:00
#SBATCH --output=gromacs_output.txt
#module load
module load singularity
# Initialize the path
export APP_ROOT=$PWD
cd $APP_ROOT
# Script arguments
GPU_COUNT=`echo $SLURM_JOB_GPUS | tr "," " " | wc -w`
GROMACS_SIF=/opt/ohpc/pkg/gromacs/container/gromacs-2022.1.sif
# Set number of OpenMP threads
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
# Singularity will mount the host PWD to /host_pwd in the container
SINGULARITY="singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd ${GROMACS_SIF}"
# Prepare benchmark data
${SINGULARITY} gmx grompp -f pme.mdp
# Run benchmark
${SINGULARITY} gmx mdrun \
-ntmpi ${GPU_COUNT} \
-nb gpu \
-ntomp ${OMP_NUM_THREADS} \
-pin on \
-v \
-noconfout \
-nsteps 5000 \
-s topol.tpr
```
Note: If you want to use a specific version, you can download it by following the instructions below.
## Fetch image from NGC (optional)
##### SELECT TAG
###### Several GROMACS images are available, depending on your needs.
```
export GROMACS_TAG={TAG}
```
###### For example:
```
export GROMACS_TAG=2023.2
```
##### PULL THE IMAGE
```
module load singularity
export GROMACS_SIF=${PWD}/${GROMACS_TAG}.sif
singularity build ${GROMACS_SIF} docker://nvcr.io/hpc/gromacs:${GROMACS_TAG}
```
#### Contributor: YI-CHENG HSIAO
## Container usage on native GPU (2023.2)
```
cp -r /opt/ohpc/pkg/gromacs/container/gromacs_2023.2 $HOME/gromacs_job
cd $HOME/gromacs_job
```
### run single node single gpu
```
sbatch gromacs_single_node_single_gpu.sh
```
#### gromacs_single_node_single_gpu.sh
```
#!/bin/bash
#SBATCH --job-name=GromacsSNSG
#SBATCH --account=<JobAccount>
#SBATCH --partition=gp1d
#SBATCH --nodes=1
#SBATCH --gres=gpu:1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=4
#SBATCH --time=00:30:00
#SBATCH --output=gromacs_output.txt
module load singularity
# Initialize the path
export APP_ROOT=$HOME/gromacs_job
cd $APP_ROOT
# Script arguments
GPU_COUNT=`echo $SLURM_JOB_GPUS | tr "," " " | wc -w`
SIMG=${2:-"${PWD}/GROMACS_2023.2.sif"}
# Set number of OpenMP threads
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
# Singularity will mount the host PWD to /host_pwd in the container
SINGULARITY="singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd ${SIMG}"
# Prepare benchmark data
${SINGULARITY} gmx grompp -f pme.mdp
# Run benchmark
${SINGULARITY} gmx mdrun \
-ntmpi 1 \
-nb gpu \
-ntomp ${OMP_NUM_THREADS} \
-v \
-nsteps 5000 \
-s stmv.tpr
```
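Note that the script reads the image path from its second argument (`SIMG=${2:-...}`) and falls back to `GROMACS_2023.2.sif` in the working folder. To use a different image you can pass it explicitly when submitting (the first argument is not used by the script, so any placeholder works):
```
sbatch gromacs_single_node_single_gpu.sh unused /path/to/another_gromacs.sif
```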
#### view result
```
cat gromacs_output.txt
```
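The throughput is reported near the end of the run output; a quick way to pull it out (if it is not in the output file, check `md.log` in the working directory):
```
# Show the ns/day performance summary
grep "Performance" gromacs_output.txt
```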
### run single node multiple gpu
```
sbatch gromacs_single_node_multi_gpus.sh
```
#### gromacs_single_node_multi_gpus.sh
```
#!/bin/bash
#SBATCH --job-name=GromacsSNMG
#SBATCH --account=<JobAccount>
#SBATCH --partition=gp1d
#SBATCH --nodes=1
#SBATCH --gres=gpu:2
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=4
#SBATCH --time=00:30:00
#SBATCH --output=gromacs_output.txt
#module load
module load singularity
# Initialize the path
export APP_ROOT=$HOME/gromacs_job
cd $APP_ROOT
# Script arguments
GPU_COUNT=`echo $SLURM_JOB_GPUS | tr "," " " | wc -w`
SIMG=${2:-"${PWD}/GROMACS_2023.2.sif"}
# Set number of OpenMP threads
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
# Singularity will mount the host PWD to /host_pwd in the container
SINGULARITY="singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd ${SIMG}"
# Run benchmark
${SINGULARITY} gmx mdrun \
-ntmpi ${GPU_COUNT} \
-nb gpu \
-ntomp ${OMP_NUM_THREADS} \
-v \
-nsteps 5000 \
-s stmv.tpr
```
#### view result
```
cat gromacs_output.txt
```
### run multiple node multiple gpu
```
sbatch gromacs_multi_nodes_multi_gpus.sh
```
#### gromacs_multi_nodes_multi_gpus.sh
```
#!/bin/bash
#SBATCH --job-name=GromacsMNMG
#SBATCH --account=<JobAccount>
#SBATCH --partition=gp1d
#SBATCH --nodes=2
#SBATCH --gres=gpu:4
#SBATCH --ntasks-per-node=2
#SBATCH --cpus-per-task=4
#SBATCH --time=00:30:00
#SBATCH --output=gromacs_output.txt
#module load
module load singularity
# Initialize the path
export APP_ROOT=$HOME/gromacs_job
cd $APP_ROOT
# Script arguments
GPU_COUNT=`echo $SLURM_JOB_GPUS | tr "," " " | wc -w`
SIMG=${2:-"${PWD}/GROMACS_2023.2.sif"}
# Set number of OpenMP threads
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
# Singularity will mount the host PWD to /host_pwd in the container
SINGULARITY="singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd ${SIMG}"
# Prepare benchmark data
${SINGULARITY} gmx grompp -f pme.mdp
# Run benchmark
${SINGULARITY} gmx mdrun \
-ntmpi ${GPU_COUNT} \
-nb gpu \
-ntomp ${OMP_NUM_THREADS} \
-pin on \
-v \
-nsteps 5000 \
-s stmv.tpr
```
#### view result
```
cat gromacs_output.txt
```
## Module usage on native GPU (2023.4)
### run task on GPU
```
cp -r /opt/ohpc/pkg/gromacs/2023.4/example $HOME/gromacs_job/
cd $HOME/gromacs_job
sbatch gromacs_gpu_example.sh
```
#### gromacs_gpu_example.sh
```
#!/bin/bash
#SBATCH --job-name=GromacsGPUTask
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
#SBATCH --time=00:10:00
#SBATCH --account=<JobAccount>
#SBATCH --partition=gp1d
#SBATCH --ntasks-per-node=1
#SBATCH --gres=gpu:1
#SBATCH --output=output.txt
source /opt/ohpc/pkg/gromacs/2023.4/gromacs/GromacsUsage.sh
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
cd $HOME/gromacs_job
srun --mpi=openmpi gmx_mpi mdrun \
-v -s stmv.tpr \
-ntomp ${OMP_NUM_THREADS} \
-nb gpu -nsteps 5000
```
#### view result
```
cat $HOME/gromacs_job/output.txt
```
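To confirm the run actually detected and used the GPU, you can search the output for GPU-related messages and the performance summary (exact wording varies between GROMACS versions):
```
grep -i "gpu" $HOME/gromacs_job/output.txt | head
grep "Performance" $HOME/gromacs_job/output.txt
```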
Contributor: SHIH-HSUN WEI