# Purdue Server Running Instructions
* [Available servers list](https://www.cs.purdue.edu/resources/facilities/lwsnservers.html)
* [ML server instruction](https://docs.google.com/document/d/1CPIp60HuqqokeMIi608G4dYhR7Gz6LDz04s_Rro7mKg/edit?usp=sharing)
## How to use `cuda.cs.purdue.edu`
1. Unload all modules: `module purge`
2. Load the Anaconda module with the desired Python version: `module load anaconda`
3. If the ML application requires CUDA and cuDNN: `module load cuda/10.2`
   * Check GPU availability in Python with `torch.cuda.is_available()`
4. Create your own virtual environment
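Once the modules are loaded, a quick way to confirm that PyTorch can actually see the GPU is a short check like the one below (a minimal sketch, assuming PyTorch is installed in the active environment; the fallback branches are there so the script also runs on machines without it):

```python
import importlib.util

def cuda_status():
    """Report whether PyTorch can see a CUDA device on this node."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    if torch.cuda.is_available():
        # Name of the first visible GPU on the node
        return "CUDA available: " + torch.cuda.get_device_name(0)
    return "CUDA not available"

print(cuda_status())
```

If this prints "CUDA not available" even though you loaded `cuda/10.2`, double-check that the modules were loaded in the same shell session before Python was started.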
## How to use Scholar Cluster to run Pytorch
[PyTorch Tutorial for Purdue CS69000-DPL](https://www.cs.purdue.edu/homes/ribeirob/courses/Spring2020/lectures/01/pyTorch_basic.html)
[Instructions for using ML packages on Scholar Cluster](https://www.rcac.purdue.edu/knowledge/scholar/run/examples/apps/learning/mltoolkit)
### Initialization
1. Unload all modules: `module purge`
2. Load the Anaconda module with the desired Python version: `module load anaconda/5.1.0-py36`
3. If the ML application requires CUDA and cuDNN: `module load cuda` and `module load cudnn`
4. Create a virtual environment: `rcac-conda-env create -n [env_name_here]`
5. Activate the environment: `module load use.own` and `module load conda-env/[env_name_here]-py3.6.4`
6. Install all the necessary Python packages with `pip` or `conda`
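Taken together, the one-time setup on Scholar might look like the session below (a sketch; `dnn` is a placeholder environment name, and the package list is an example, not a requirement):

```shell
# One-time environment setup on the Scholar cluster
module purge
module load anaconda/5.1.0-py36
module load cuda
module load cudnn

# Create the environment once, then make it loadable as a module
rcac-conda-env create -n dnn
module load use.own
module load conda-env/dnn-py3.6.4

# Install whatever the project needs
pip install torch torchvision
```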
### Start to work
1. Load python module `module load anaconda/5.1.0-py36`
2. Activate torch environment `source activate [env_name_here]`
3. Leave the environment `source deactivate`
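After the initialization above, the day-to-day cycle reduces to a few commands (a sketch; `dnn` and `train.py` are placeholders for your environment and script):

```shell
module load anaconda/5.1.0-py36
source activate dnn      # enter the environment
python train.py          # do your work
source deactivate        # leave the environment
```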
## How to submit a job with Slurm
* [Submitting a Job](https://www.rcac.purdue.edu/knowledge/scholar/run/slurm/submit)
* [Job submission file](https://www.rcac.purdue.edu/knowledge/scholar/run/slurm/script)
* [Scholar Cluster ML-Toolkit](https://www.rcac.purdue.edu/knowledge/scholar/run/examples/apps/learning/mltoolkit)
1. Create a job submission file
```
#!/bin/sh -l
# FILENAME: myjobsubmissionfile
module load anaconda/5.1.0-py36
module load cuda
module load cudnn
module load use.own
module load conda-env/dnn-py3.6.4
cd /home/lin915/cs690-deep-learning/hw2/hw2_dev
# Run the Python script
python code.py
```
2. Submit the job to one compute node: `sbatch --nodes=1 [--gpus=1] [--time=10:00:00] myjobsubmissionfile`
   * If you need a GPU, include `--gpus=1`. Details are on this [page](https://www.rcac.purdue.edu/knowledge/scholar/run/examples/slurm/gpu)
3. Check status `squeue -u lin915`
4. Check result `vi slurm-[job_id].out`
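The `sbatch` flags above can also be written into the submission file itself as `#SBATCH` directives, which keeps runs reproducible (a sketch; the job name, environment name, and working directory are placeholders):

```shell
#!/bin/sh -l
#SBATCH --nodes=1
#SBATCH --gpus=1
#SBATCH --time=10:00:00
#SBATCH --job-name=hw2

module load anaconda/5.1.0-py36
module load cuda
module load cudnn
module load use.own
module load conda-env/dnn-py3.6.4

python code.py
```

With the directives in the file, the job submits with a bare `sbatch myjobsubmissionfile`, and command-line flags can still override them when needed.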