Running Gromacs at UPPMAX
This page describes how to run the GROMACS molecular dynamics software on UPPMAX systems. See the GROMACS web page for more information.
Have a look at this page as well: best practices for running GROMACS on HPC.
Selected setups for benchmarking on HPC2N are included as examples.
Loading the GROMACS module
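For example (the version string below is only an illustration; run `module avail gromacs` to see which versions are actually installed on the system):

```bash
# list the installed GROMACS versions
module avail gromacs

# load one of them (example version, adapt as needed)
module load gromacs/2021.3
```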
SBATCH script
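A minimal job script could look like the sketch below; the project ID, partition, time limit and module version are placeholders to adapt to your own allocation:

```bash
#!/bin/bash -l
#SBATCH -A snic20XX-X-XXX      # your project/allocation ID (placeholder)
#SBATCH -p node                # request a full node
#SBATCH -N 1
#SBATCH -t 01:00:00            # wall-clock time limit

# example version string - check 'module avail gromacs' for what is installed
module load gromacs/2021.3

# 10 MPI ranks x 2 OpenMP threads = 20 cores, i.e. one full Rackham node
mpirun -np 10 gmx_mpi mdrun -ntomp 2 -s MEM.tpr -nsteps 10000 -resethway
```

Submit the script with `sbatch` and check its status with `squeue -u $USER`.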
How important it is to select appropriate options
Here is a simple benchmark run on a single interactive node with 20 CPUs, using the MEM example from https://www.mpibpc.mpg.de/grubmueller/bench.
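All runs in this test use the same command and vary only the number of MPI ranks (XX) and OpenMP threads per rank (YY); a sketch of the general form, matching the concrete runs shown further down:

```bash
$ mpirun -np XX gmx_mpi mdrun -ntomp YY -s MEM.tpr -nsteps 10000 -resethway
```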
where XX * YY = 20, i.e. all 20 cores of the node are used.
Notice how badly the last run performs:
$ mpirun -np 1 gmx_mpi mdrun -ntomp 20 -s MEM.tpr -nsteps 10000 -resethway
According to this short test, this particular setup runs best on a single Rackham node with:
$ mpirun -np 10 gmx_mpi mdrun -ntomp 2 -s MEM.tpr -nsteps 10000 -resethway
Running older versions of GROMACS
Versions 4.5.1 to 5.0.4:
The GROMACS tools have been compiled serially. The mdrun program has also been compiled in parallel using MPI; the name of the parallel binary is mdrun_mpi.
Run the parallelized program using:
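A sketch of a typical invocation (the input file name `topol.tpr` is a placeholder):

```bash
mpirun -np XXX mdrun_mpi -s topol.tpr
```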
… where XXX is the number of cores to run the program on.
Version 5.1.1
The binary is gmx_mpi and (e.g.) the mdrun command is issued like this:
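For example, mirroring the invocation style used above (again, `topol.tpr` is a placeholder input file and XXX the number of MPI ranks):

```bash
mpirun -np XXX gmx_mpi mdrun -s topol.tpr
```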
Contacts:
tags: UPPMAX, SNIC