#### HPCC Notes
# OnDemand HPCC
https://ondemand.hpcc.msu.edu/
* Interactive Desktop
* Matlab
* Stata
* RStudio
* Jupyter Notebook
* TensorBoard
# Manually Run Jupyter Notebook on the HPCC
### localhost, terminal 1
Log in directly to the dev node you want.
Alternatively, start an interactive job and run the notebook there.
```
ssh -t jory@hpcc.msu.edu ssh jory@dev-amd20-v100
jupyter notebook --no-browser --port 9999
```
### localhost, terminal 2
Open a port-forwarded tunnel to the dev node or interactive node running the notebook.
```
ssh -tL 9999:localhost:9999 hpcc.msu.edu "ssh -L 9999:localhost:9999 dev-amd20-v100"
```
### localhost, open browser to the address printed by jupyter notebook
```
http://localhost:9999/?token=...
```
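The two-hop ssh commands above can be folded into an `~/.ssh/config` entry. A sketch: the `hpcc` and `hpcc-dev` aliases are my own names, not official ones, and `LocalForward` matches the port 9999 used above.

```
# ~/.ssh/config
Host hpcc
    HostName hpcc.msu.edu
    User jory

Host hpcc-dev
    HostName dev-amd20-v100
    User jory
    ProxyJump hpcc
    # open the notebook tunnel automatically
    LocalForward 9999 localhost:9999
```

With this in place, `ssh hpcc-dev` reaches the dev node and opens the tunnel in one step.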
# Faster Copy Files
The rsync gateway is reserved for, and tuned for, file transfers.
### localhost to hpcc
```
scp myfile.zip jory@rsync.hpcc.msu.edu:~/
```
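Since the gateway exists for transfers, rsync itself also works and adds progress reporting and resumable transfers. Same file and host as the scp example; the flags are standard rsync options.

```
rsync -avP myfile.zip jory@rsync.hpcc.msu.edu:~/
```

`-a` preserves attributes, `-v` is verbose, and `-P` shows progress and lets an interrupted transfer resume where it left off.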
### hpcc to localhost (many-files example)
rsync/ssh incurs overhead for every single file transferred.
With large file counts, this overhead adds up considerably.
Work around it by transferring only 1 giant file: a zip archive with no compression (`-0`).
```
zip -r -0 myfiles.zip some_directory_with_many_files/
scp jory@rsync.hpcc.msu.edu:~/projects/myfiles.zip .
```
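If zip is not handy, tar gives the same one-big-uncompressed-file effect. A local sketch (the directory name mirrors the zip example; the resulting archive would then be copied with scp as above):

```shell
# Bundle many files into a single uncompressed tar archive.
mkdir -p some_directory_with_many_files
printf 'example\n' > some_directory_with_many_files/file1.txt
tar -cf myfiles.tar some_directory_with_many_files   # -c create, -f output file; no compression flag
tar -tf myfiles.tar                                  # list archive contents to verify
```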
### localhost to hpcc (multi-file)
```
scp file1.txt file2.txt jory@rsync.hpcc.msu.edu:~/
```
### hpcc to localhost (multi-file)
You must quote the remote path so your local shell doesn't expand the glob; the remote shell expands it instead.
```
scp jory@rsync.hpcc.msu.edu:"~/projects/*/*.csv" .
```
# Tmux
Or use Screen, but I don't remember Screen very well.
The idea is that you can leave a work session on the HPCC and, 99% of the time, it will still be there days later.
This is very useful for starting somewhat long-running processes on dev nodes, logging out, then coming back to them later.
It also mitigates bad connections with dropouts.
FYI: you can change the 'leader' (prefix) key (I do); the default is CTRL+b.
### Usage
```
new session labeled 'work': tmux new -s work
detach: CTRL+b, d
list sessions: tmux ls
reattach: tmux a -t work
kill window: CTRL+b, &
kill pane: CTRL+b, x (same as window if only 1 pane)
create window: CTRL+b, c
goto next window: CTRL+b, n
goto prev window: CTRL+b, p
split pane side-by-side: CTRL+b, %
split pane top/bottom: CTRL+b, "
```
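Changing the leader key, as mentioned above, is a small `~/.tmux.conf` change. A common sketch remapping it to CTRL+a:

```
# ~/.tmux.conf
unbind C-b
set -g prefix C-a
bind C-a send-prefix   # so CTRL+a, CTRL+a sends a literal CTRL+a to the program inside
```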
# Kill frozen SSH or frozen pipe (no confirmation)
```
ENTER, ~, .
```
# Interactive Job
We can run a job and get a terminal on the allocated node. Best to start Tmux first, then run this, so you can leave your job and come back to it, especially while waiting for the node to start up.
Here we ask for 64 cores and 32 GB of RAM. Requesting just under the 4-hour scheduling-promise limit for paid nodes gets the job scheduled faster.
All parameters are optional, but this shows how to constrain to particular hardware (here, amd20 nodes).
```
salloc --cpus-per-task=64 --mem=32gb --time=03:58:00 --constraint=amd20
```
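A variant of the same request that also asks for a GPU, using Slurm's `--gres` syntax. The gres name `v100` is an assumption based on the `dev-amd20-v100` node name above; check the cluster docs or `sinfo` for the exact string.

```
# --gres=gpu:v100:1 requests one V100 GPU ('v100' is an assumed gres name)
salloc --cpus-per-task=8 --mem=32gb --gres=gpu:v100:1 --time=03:58:00 --constraint=amd20
```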