You must be on the campus's secure network or the VPN to do this.
Enter your Janelia login credentials to log in.
Then start a new virtual machine.
If this process fails, like it does here:
Then you most likely need to have your user account added for cluster access.
Run the following in the terminal (simply press Enter anytime it asks for input):
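Assuming this step bootstraps micromamba (which the activate commands later on this page rely on), the standard micromamba installer one-liner is:
"${SHELL}" <(curl -L micro.mamba.pm/install.sh)
The installer asks a few questions; pressing Enter accepts the defaults.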
Then close the terminal and open a new instance. Run the following commands:
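The exact commands are not shown here; at minimum, the new terminal should now find micromamba on your PATH, which you can confirm with:
micromamba --version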
In the terminal, run the following:
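For napari, these are the activate and bsub commands also listed further down this page:
micromamba activate napari-env
bsub -n 12 -gpu "num=1" -q gpu_short -Is napari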
Wait for the job to launch:
If you're using conda instead, just replace micromamba in the following commands with conda.
Run:
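A typical create command, following the napari installation instructions (the environment name matches the activate command used elsewhere on this page; the Python version is an assumption):
micromamba create -n napari-env -c conda-forge python=3.11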
Once that has completed, run:
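Presumably this step activates the new environment:
micromamba activate napari-env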
Run:
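Per the napari installation instructions, the install command inside the activated environment is:
python -m pip install "napari[all]"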
For more help, see the napari installation instructions.
If you're using conda instead, just replace micromamba in the following commands with conda.
Run:
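As with napari, a typical create command (the environment name matches the activate command used elsewhere on this page; the Python version and channel are assumptions):
micromamba create -n cellpose-env -c conda-forge python=3.10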
Once that has completed, run:
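Presumably this step activates the new environment:
micromamba activate cellpose-env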
Run:
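Per the Cellpose installation instructions, installing Cellpose together with its GUI is:
python -m pip install 'cellpose[gui]'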
For more help, see the Cellpose installation instructions.
exit
To launch napari:
micromamba activate napari-env
bsub -n 12 -gpu "num=1" -q gpu_short -Is napari

To launch Cellpose:
micromamba activate cellpose-env
bsub -n 12 -gpu "num=1" -q gpu_short -Is cellpose
You can add -B to receive email notifications associated with your jobs.
-n 12 corresponds to 12 CPUs for the job. Each CPU is generally allocated 15 GB of RAM (memory).
-gpu "num=1" corresponds to 1 GPU. Most GPU nodes offer up to 4 GPUs at once.
-q gpu_short corresponds to the "gpu_short" queue. More information about the GPU queues is available on the Scientific Computing Systems Confluence, but in general "gpu_short" will leverage just about any available GPUs. gpu_tesla, gpu_a100, and gpu_h100 are also options (listed in increasing order of power).
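For example, a variant of the launch command that adds email notification and targets the A100 queue would look like this (adjust the options to your needs):
bsub -B -n 12 -gpu "num=1" -q gpu_a100 -Is napari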
You can also work inside a tmux or screen persistent session (see more here). Before running the bsub command, start a session with:
tmux
or, if you want to name it:
tmux new -s napari
This gives you an SSH session that keeps running even if your connection drops; you can re-attach to it again at any time with:
tmux ls
tmux attach -t name
(The session will be named 0 if it was created with plain tmux, without a name.)
If you feel like you need to double-check that it is still running, you can see the space you're taking on the cluster here: https://cluster-status.int.janelia.org/cluster_status/