# How to ist-cluster
<!-- ## How to singularity on ist-cluster -->
#### Finding the singularity
Singularity is installed in `/home/app/singularity`. Unfortunately, this directory is not included in your default `PATH`, so you need to add the directory containing the Singularity binaries yourself:

```bash
export PATH=/home/app/singularity/bin:$PATH
```
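To avoid repeating this in every session, you can verify the setting and persist it; a minimal sketch, assuming bash is your login shell:

```shell
# put the Singularity binaries on PATH for the current session
export PATH=/home/app/singularity/bin:$PATH

# check that the directory really is on PATH now
case ":$PATH:" in
  *:/home/app/singularity/bin:*) on_path=yes ;;
  *) on_path=no ;;
esac
echo "singularity dir on PATH: $on_path"

# to make it permanent, append the export line to ~/.bashrc, e.g.:
# echo 'export PATH=/home/app/singularity/bin:$PATH' >> ~/.bashrc
```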
#### Obtaining and Running singularity images
First of all, we need to have Singularity containers before we can use them. Singularity containers can be obtained from the following sources:
1. DockerHub repository:
   * To (1) **pull** a Docker image for Ubuntu 16.04, (2) build it into a Singularity image, and (3) save it as `ubuntu1604.simg`:
     ```bash
     $ singularity pull -n ubuntu1604.simg docker://ubuntu:16.04
     ```
   * To run an interactive **shell** in the above image:
     ```bash
     $ singularity shell ubuntu1604.simg
     ```
   * To **execute** a command in the above image without shell access:
     ```bash
     $ singularity exec ubuntu1604.simg hostname
     ```
#### Creating your own singularity image
There are cases where you might want to create your own Singularity image. For example, your project may depend on library versions that are not available on the IST cluster.
You can build an image that suits your needs by using a recipe file. Please refer to the **Singularity** recipe file attached in this repository; it downloads Ubuntu 16.04 from Docker and installs gcc 5 in it.
To build from a recipe file:

```bash
$ sudo singularity build ubuntu1604-gcc5.simg Singularity
```
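If you don't have the attached recipe at hand, a recipe matching the description above might look roughly like this (an illustrative sketch, not necessarily the exact attached file):

```
Bootstrap: docker
From: ubuntu:16.04

%post
    # install gcc 5 inside the container
    apt-get update
    apt-get install -y gcc-5 g++-5
```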
#### Running tensorflow on ist-cluster

```bash
singularity pull -n tensorflow-gpu docker://tensorflow/tensorflow:latest-gpu
salloc -p p --gres=gpu:1
srun --pty bash
singularity exec --nv tensorflow-gpu python ~/tf.py
```

---
All contents above are copy-pasted from [here](https://login000.cluster.i.u-tokyo.ac.jp/wordpress/index.php/singularity-containers-en/) (the site is accessible only from within the campus network).

---
#### A note on recipes

```
Bootstrap: docker
# unfortunately, this is the latest version runnable on the ist-cluster
From: pytorch/pytorch:1.6.0-cuda10.1-cudnn7-devel

%post
    # include the python binaries for the later setup steps
    export PATH=/opt/conda/bin:$PATH
    # do whatever you need for setup here
    ...
```

---
## How to use jupyter on ist-cluster
#### Step 0: Make sure you can run the jupyter command.
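For example, a quick check (a sketch; assuming jupyter was installed with `pip install --user`, which places its binaries in `~/.local/bin`):

```shell
# pip install --user notebook   # install it first if you haven't
# pip --user installs go to ~/.local/bin, which may not be on PATH:
export PATH="$HOME/.local/bin:$PATH"

# confirm the directory is now on PATH
case ":$PATH:" in
  *:"$HOME/.local/bin":*) jupyter_dir_on_path=yes ;;
  *) jupyter_dir_on_path=no ;;
esac
echo "~/.local/bin on PATH: $jupyter_dir_on_path"
```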
#### Step 1: Run jupyter backend through the SLURM job manager.
Write a shell script like the following. Modify the password and port number accordingly. Use a number higher than 10000 as the port.
```bash
#!/bin/bash
ip addr | grep 157.82
pass=p123 ### BE SURE YOU CHANGE IT TO A MORE COMPLICATED ONE
port=10000
auth=$(python3 -c 'import sys,notebook.auth; print(notebook.auth.passwd(sys.argv[1]))' ${pass})
~/.local/bin/jupyter notebook --debug --notebook-dir=. --ip="" --no-browser --port=${port} --NotebookApp.port_retries=0 --NotebookApp.password=${auth}
```
Submit it via `srun`, as usual. You can also pass the time limit, memory limit, etc. to the `srun` command line, and you are advised to do so.

```
login000:jupyter_ex$ srun -p p ./run_jupyter
inet 157.82.22.7/25 brd 157.82.22.127 scope global enp129s0f0
[W 12:06:54.144 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
[I 12:06:54.153 NotebookApp] Serving notebooks from local directory: /home/tau/jupyter_ex
[I 12:06:54.153 NotebookApp] The Jupyter Notebook is running at:
[I 12:06:54.153 NotebookApp] http://(p102 or 127.0.0.1):10000/
[I 12:06:54.153 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
```
Observe from the output above that the machine you have been allocated is 157.82.22.7 (p102). Remember this IP address along with the port number and the password.
#### Step 2: Set up the port forwarding to the machine you are allocated.
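For example, from your local machine you could forward the chosen port through the login node with `ssh -L`. A sketch, where the login hostname is an assumption (use whichever host you normally ssh to) and the IP/port are the example values from Step 1:

```shell
# values from Step 1 (example values; replace with your own)
node_ip=157.82.22.7
port=10000

# the command to run on your local machine to forward the port
# through the login node (login000.cluster.i.u-tokyo.ac.jp is assumed)
fwd="ssh -L ${port}:${node_ip}:${port} login000.cluster.i.u-tokyo.ac.jp"
echo "$fwd"
```

While that ssh session is open, `http://localhost:10000/` on your machine reaches the notebook; log in with the password you set in the script.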
#### Step 3: Bring up your browser and use a SOCKS v5 proxy.
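A SOCKS v5 proxy can be opened with ssh's dynamic forwarding (`-D`). A sketch, again assuming the login hostname:

```shell
# open a SOCKS v5 proxy on local port 1080 through the login node
# (login000.cluster.i.u-tokyo.ac.jp is an assumed hostname)
socks_cmd="ssh -D 1080 login000.cluster.i.u-tokyo.ac.jp"
echo "$socks_cmd"
```

Then configure your browser to use a SOCKS v5 proxy at `localhost:1080` and open `http://157.82.22.7:10000/` (the IP address and port you noted in Step 1), entering the password you set in the script.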