---
tags: User
---
<a href="https://hackmd.io/@teoroo-cluster" alt="Teoroo Cluster">
<img src="https://img.shields.io/badge/Teoroo--CMC-All%20Public%20Notes-black?style=flat-square" /></a>
# Teoroo cluster: software
Below is a list of commonly used software available in the Teoroo cluster,
along with basic instructions on how to use it. (**Contributions welcome!**)
## Common tools
### ssh-agent
You are recommended to access remote machines with **password-protected**
ssh keys. The key file should ideally stay on your local computer. `ssh-agent`
provides a way to "forward" your key when you log in to different machines.
Below is a brief introduction to generating the key and using `ssh-agent`;
you can also find a similar guide from, e.g., [Github](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent).
First create the key and add the public key to where you want to login:
```bash
ssh-keygen # follow the instructions, and do use a password!
# copy the key to the remote server e.g. rackham
ssh-copy-id myname@some.remote.server
```
Then, add the following lines to your `~/.bashrc`, or execute them when needed:
```bash
eval "$(ssh-agent -s)"
ssh-add
```
This starts up an ssh agent and adds your private key to it.
You can verify that the agent is running and holds your key with:
```bash
ssh-add -L
```
You should see something like:
```
ssh-rsa AAAAB****VERY**********************************
*****************LONG**********************************
****************STRING****Nf9T /home/myname/.ssh/id_rsa
```
This is your public ssh key, which you can put on servers
you would like to log in to (`ssh-copy-id` above did so for you),
or provide to your [git provider](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account).
After setting up the `ssh-agent`, you can tell ssh to forward
your key to a remote server by adding the `-A` option when
you connect:
```bash
ssh -A myname@some.remote.server
```
You can verify that the correct ssh key is forwarded by
running `ssh-add -L` again after you log in. To make
forwarding the default when logging in to the server,
add this to your `$HOME/.ssh/config`:
```config
Host some.remote.server
    ForwardAgent yes
```
> **ssh-agent with gpg key**
>The above setup should work; the only drawback is that you have to type
>the password whenever the key is requested. You may want to stop
>ssh from asking for the password if you have just typed it. One way is to use the
>gpg agent to provide ssh authentication; you can read more [here](https://opensource.com/article/19/4/gpg-subkeys-ssh). If you use a desktop
>environment such as Gnome, there might also be gui tools for this, such as
>the [gnome keyring](https://wiki.gnome.org/Projects/GnomeKeyring/Ssh).
## MD Packages
### GROMACS
The 2018.8 version of GROMACS is available at `/sw/gromacs/`. To activate it:
```
source /sw/gromacs/bin/GMXRC
```
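After sourcing `GMXRC`, you can check that the installation is picked up (note that the binary may be named differently, e.g. `gmx_mpi`, depending on how it was built):
```bash
gmx --version   # should report GROMACS 2018.8
```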
> Newer versions are a bit hard to compile; you might have better luck with singularity.
> -- Yunqi
### LAMMPS
> LAMMPS is a classical molecular dynamics (MD) code that models ensembles of particles in a liquid, solid, or gaseous state. -- [Official Webpage](https://docs.lammps.org/Intro_overview.html)

LAMMPS is a rather flexible MD code, for which many [extensions](https://docs.lammps.org/Build_package.html)
exist. Typically, you compile your own binary if you have some
specific need. On Teoroo2, a compiled version based on the stable release (29Sep2021) is provided. It includes the basic packages (kspace, manybody, molecule, rigid) and the reaxff package.
The binaries should be available at:
```bash
/sw/lammps/29Sep2021/lmp_serial
# mpirun -np 4 /sw/lammps/29Sep2021/lmp_mpi
```
The MPI binary is compiled against openmpi-3.1.4; to use it, activate the MPI library with:
```bash
source /sw/env/openmpi-3.1.4
```
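As an illustration, a run with the provided binaries could look like the following (`in.lammps` is a placeholder for your input file):
```bash
# serial run
/sw/lammps/29Sep2021/lmp_serial -in in.lammps

# MPI run on 4 cores, after sourcing the openmpi environment above
mpirun -np 4 /sw/lammps/29Sep2021/lmp_mpi -in in.lammps
```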
Instructions for compiling your own binary:
```bash
git clone https://github.com/lammps/lammps.git
cd lammps
git checkout stable_29Sep2021 # if you need some specific version
cd src
source /sw/env/openmpi-3.1.4 # openmpi installation on the cluster
make yes-basic # you'll usually want these
make yes-reaxff # add other packages if needed
make serial # for the serial ver.
make mpi # for the openmpi ver.
```
## Quantum Chemistry Packages
### CP2K
#### Building CP2K
##### With EasyBuild
##### Containers
#### Known issues
### Gaussian09
Gaussian09 is also available on `brosnan`.
To use it, run:
```bash
source /home/sw_old/gaussian/g09_d.01_setup_intel.sh
```
Afterwards run:
```bash
g09 input.inp output.out
```
as usual.
### ORCA
The ORCA binary compiled with MPI is available in `/sw/orca-4.2.1-openmpi314/`.
To use it with MPI you also need to run (works on `brosnan` and `jackie`):
```bash
source /sw/env/openmpi-3.1.4
```
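Note that ORCA must be invoked with its full path for MPI parallelization to work; a minimal sketch (`job.inp` is a placeholder input that requests multiple processes via `%pal`):
```bash
source /sw/env/openmpi-3.1.4
/sw/orca-4.2.1-openmpi314/orca job.inp > job.out
```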
### VASP
Compiled VASP binaries exist in `/sw/vasp/bin`.
## Visualization
### Blender
From their homepage:
> Blender is the free and open source 3D creation suite. It supports the entirety of the 3D pipeline—modeling, rigging, animation, simulation, rendering, compositing and motion tracking, video editing and 2D animation pipeline.
>
Blender is installed on the GPU node `brosnan`. If you are interested
in using the GPU nodes for heavy rendering tasks, please contact @yqshao
(instructions will be provided later).
### GaussView
```bash
# set up the GaussView environment
source /home/sw_old/gaussian/gview_5.0_g09_intel_setup.sh
```
then start GaussView by running
```bash
gview
```
### ParaView
> ParaView is an open-source, multi-platform data analysis and visualization application. ParaView users can quickly build visualizations to analyze their data using qualitative and quantitative techniques. The data exploration can be done interactively in 3D or programmatically using ParaView’s batch processing capabilities.

On Teoroo2, ParaView is available as singularity images built from the docker images provided by [OpenFOAM](https://hub.docker.com/u/openfoam).
To run paraview:
```
/sw/paraview
```
Two versions are provided:
- `/sw/paraview` (`/sw/paraview54`) -> ParaView 5.4 built with OpenFOAM 5
- `/sw/paraview56` -> ParaView 5.6 built with OpenFOAM 8
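For interactive use on a remote node, you will typically need X forwarding; a minimal sketch (the hostname is a placeholder):
```bash
ssh -X myname@some.remote.server
/sw/paraview56
```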
### VESTA
```bash
# add the VESTA libraries to the library search path
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/sw_old/VESTA-x86_64
```
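Presumably the VESTA executable resides in the same directory (this is an assumption; adjust the path if the layout differs):
```bash
/home/sw_old/VESTA-x86_64/VESTA  # assumed location of the VESTA launcher
```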
### VMD
VMD can now be used on Teoroo2 (`brosnan`) for visualization.
To run VMD:
```
/sw/vmd
```
## Workflow tools
### Nextflow
> Nextflow enables scalable and reproducible scientific workflows using software containers. It allows the adaptation of pipelines written in the most common scripting languages.

Though no queuing system is set up on Teoroo2, it is possible to schedule multiple GPU jobs with a specific [fork](https://github.com/yqshao/nextflow/tree/local_accelerator) of Nextflow. A compiled binary can be found in `/sw/nf` on Teoroo2.
To use GPUs with Nextflow, first set the environment variable `CUDA_VISIBLE_DEVICES` (you can find a list of available cards with the command `nvidia-smi`). For instance, `export CUDA_VISIBLE_DEVICES=1,2,3` means that the three graphics cards with the corresponding IDs
will be used in your subsequent script execution.
Then, set the `accelerator` [directives](https://www.nextflow.io/docs/latest/process.html#accelerator) for your process in your script or in the Nextflow [config](https://www.nextflow.io/docs/latest/config.html#scope-process).
Nextflow will set the `CUDA_VISIBLE_DEVICES` variable when your script is
executed, and it should be picked up automatically by, e.g., TensorFlow.
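As a sketch, a process requesting one GPU could look like the following (the process name and script are placeholders):
```nextflow
// request one GPU per task; the scheduler maps it to a free device
// via CUDA_VISIBLE_DEVICES
process train {
    accelerator 1

    script:
    """
    python train.py
    """
}
```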
## Machine learning
### PiNN
#### Installation
##### with singularity
Docker images for `PiNN` are built continuously. To build a singularity
image from the latest development branch:
```
singularity build pinn.sif docker://yqshao/pinn:latest
# or, for the GPU variant:
# singularity build pinn-gpu.sif docker://yqshao/pinn:latest-gpu
```
If you need extra dependencies, you can also set up your
own Docker image, as is done [here](https://github.com/yqshao/DeltaML/blob/master/Dockerfile).
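Once built, the image can be used with `singularity exec`; a minimal sketch (the `--nv` flag enables GPU access inside the container, and `train.py` is a placeholder for your own script):
```bash
singularity exec --nv pinn-gpu.sif python train.py
```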
##### Alvis
As a PiNN developer, you might wish to install PiNN in editable mode.
In that case, you can reuse the existing Python packages on the
HPC cluster:
```bash
# load available modules on HPC to avoid extra dependency
module load GCCcore/10.2.0 git/2.28.0-nodocs GCC/10.2.0 CUDA/11.1.1 \
OpenMPI/4.0.5 TensorFlow/2.5.0 PyYAML matplotlib
virtualenv --system-site-packages pinn_env
source pinn_env/bin/activate
pip install git+https://github.com/yqshao/PiNN.git@TF2
# for an editable install, clone the repository and run 'pip install -e .' instead
```
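A quick sanity check of the installation (with the virtual environment still activated):
```bash
python -c "import pinn"                                      # should exit silently
python -c "import tensorflow as tf; print(tf.__version__)"  # should print 2.5.0
```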
#### Usage
##### with your own job script
If you are running on our local cluster (`brosnan` and `jackie`),
remember to set the environment variable
`CUDA_VISIBLE_DEVICES`; you can find a list of available
GPUs with `nvidia-smi` and monitor their usage.
Please negotiate with other group
members regarding the usage of GPUs.
On an HPC cluster, GPUs should be assigned automatically.
You can monitor your job by ssh-ing into the job node and running
`nvidia-smi`. The cluster might also provide a `grafana`
link for monitoring purposes upon job submission.
See the HPC documentation (e.g. for [Alvis](https://www.c3se.chalmers.se/documentation/intro-alvis/slides/#allocating-gpus-on-alvis))
for information regarding available resources and submission scripts.
:::info
Specific note about Alvis: all jobs must come with a `--gres=gpu:model:number` request.
:::
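For reference, a minimal Slurm submission script for Alvis could look like this (the project account, GPU model, and training script are placeholders):
```bash
#!/bin/bash
#SBATCH -A MY-PROJECT-ACCOUNT   # placeholder project account
#SBATCH --gres=gpu:T4:1         # one T4 GPU; adjust model and count
#SBATCH -t 48:00:00

# reuse the modules and virtual environment from the installation above
module load GCCcore/10.2.0 GCC/10.2.0 CUDA/11.1.1 OpenMPI/4.0.5 TensorFlow/2.5.0 PyYAML matplotlib
source pinn_env/bin/activate

python train.py                 # placeholder training script
```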
##### with nextflow
If you arrange your jobs with Nextflow, setting up the environment
should be easy. You can include the following snippet in your
Nextflow config; the "prod" [profile](https://www.nextflow.io/docs/latest/config.html#config-profiles) uses the docker
image from the `TIPS` project and the "dev" profile uses the
[previous](#Alvis) Python virtual environment.
Also note that you can change the job setup for processes with
[selectors](https://www.nextflow.io/docs/latest/config.html#process-selectors).
```nextflow
profiles {
    prod {
        process {
            executor = 'slurm'
            container = 'yqshao/tips:pinn-gpu'
            clusterOptions = '--gres=gpu:T4:1'
            withLabel: pinn { time = '48h' }
        }
    }
    dev {
        process {
            executor = 'slurm'
            module = 'GCC/8.3.0:CUDA/10.1.243:OpenMPI/3.1.4:TensorFlow/2.3.1-Python-3.7.4:PyYAML:matplotlib'
            beforeScript = 'source $HOME/my_env/bin/activate'
            clusterOptions = '--gres=gpu:T4:1'
            withLabel: pinn { time = '48h' }
        }
    }
}

executor {
    name = 'slurm'
    queueSize = 50
    submitRateLimit = '20 min'
}
```
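With this config in place, select a profile when launching the pipeline (`main.nf` is a placeholder for your workflow script):
```bash
nextflow run main.nf -profile prod
# or, with the virtual environment on the HPC cluster:
nextflow run main.nf -profile dev
```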
## Development
### Python
We have a singularity image with Jupyter and commonly used Python packages installed at `/sw/Singularity/jupyter.2019.04.25.sif`. To use it, simply
run the image as an executable; it will start the Jupyter server.
You can forward the port to your local machine through ssh to access the Jupyter notebook.
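A minimal sketch of this workflow, assuming the server listens on Jupyter's default port 8888:
```bash
# on the cluster: start the Jupyter server
/sw/Singularity/jupyter.2019.04.25.sif

# on your local machine: forward the port through ssh,
# then open http://localhost:8888 in your browser
ssh -N -L 8888:localhost:8888 myname@some.remote.server
```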
### Nvidia toolkit
CUDA toolkits are installed on the GPU nodes `brosnan` and `jackie`.
To compile source code with a specific CUDA version, run (here for CUDA 10.0):
```bash
export PATH=/usr/local/cuda-10.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
```