# Installing DeepLabCut on the SWC HPC

:::warning
:warning: This guide is out-of-date! Do not use it to install DeepLabCut on the HPC. We wrote this guide back in 2022. It didn't quite work then, and it's even less likely to work for more recent versions of DeepLabCut.
:::

**NOTE:** These instructions do **NOT** install the GUI version of DeepLabCut. The assumption is that GUI operations (such as labelling) will be performed on a local machine. The HPC installation of DeepLabCut will be used for training and inference, and will utilize the NVIDIA GPUs.

## 0. Pre-requisites

* UCL login credentials used for logging into the SWC wiki (to view the relevant instructions in the links below)
* An SWC IT account with username `<SWC_USER>` and password `<SWC_PWD>`
* Working from an SWC machine or remotely connected via the SWC VPN (see [VPN instructions](https://wiki.ucl.ac.uk/pages/viewpage.action?pageId=147956901))

## 1. Log into the HPC cluster

Depending on your machine, you can follow the relevant [instructions on the SWC wiki](https://wiki.ucl.ac.uk/display/SSC/Logging+into+the+Cluster). Below I describe the process for a Linux/Mac terminal (using the `bash` shell).

```bash
ssh <SWC_USER>@ssh.swc.ucl.ac.uk
ssh hpc-gw1
```

After each `ssh` step you will be prompted for your `<SWC_PWD>`.

## 2. Set up a conda environment

If conda is active, you should see `(base)` before your `<SWC_USER>` on the terminal, e.g. `(base) <SWC_USER>@hpc-gw1:~$`. If this is not the case, try:

```bash
source ~/.bashrc
module load miniconda
```

Now create a new conda environment based on the provided `DLC_HPC.yaml` file:

```bash
conda env create -f DLC_HPC.yaml
```

Follow the instructions on the screen. The above assumes that you have copied the `DLC_HPC.yaml` file into your SWC home folder. If not, provide the full path to the yaml file. Activate the environment to verify that it's installed.
```bash
conda activate DLC_HPC
```

The terminal prompt should now show `(DLC_HPC) <SWC_USER>@hpc-gw1:~$`.

## 3. Start an interactive GPU session

Here we start a session with 1G of memory just to test things; for an actual job you'll probably need more.

```bash
srun -p gpu --gres=gpu:1 --mem=1G --pty bash -i
```

## 4. Test installation

Load the appropriate modules and activate the conda environment.

**Note:** it's very important to load specifically v11.2 of CUDA.

```bash
module load miniconda
module load cuda/11.2
conda activate DLC_HPC
```

Start an IPython kernel by typing `ipython`. Import tensorflow:

```python
import tensorflow as tf
```

The import should complete without errors (warnings are fine). To verify that tensorflow can engage the GPU, type:

```python
tf.test.gpu_device_name()
```

This should return something like `'/device:GPU:0'`.

Import deeplabcut:

```python
import deeplabcut as dlc
```

The import should complete without errors (warnings are fine).

**Congrats!** You now have a working conda environment with deeplabcut installed and able to engage the GPUs. **Happy hacking.**

----

## Some observations on the steps above

The steps work well for me now after the reboot/drivers fix. Just some observations on the steps above:

- If I run `nvidia-smi` in the GPU node of the interactive session, the CUDA version is listed as 11.6, even if the module loaded was `cuda/11.2`. So I'm not sure whether the CUDA 11.2 version is actually used.
- It seems that the order of loading modules and activating the environment matters:
  - If I activate the `DLC_HPC` environment **first**, and then load the miniconda module (`module load miniconda`), when I run `python` (rather than `ipython`) it uses the python at `/ceph/apps/ubuntu-20/packages/miniconda/4.9.2/lib/python3.8`, rather than the one of the active environment.
  - If I unload the `miniconda` module afterwards, it does use the python from the active environment. So maybe loading `miniconda` is not required?
    (I can create and activate environments just fine without loading it... :/)
- If I run `ipython` it also uses the python from the active environment, regardless of the order in which environments are activated and modules are loaded.
- The following alternative also works for me (in the GPU node):

  ```
  module load cuda
  module load deeplabcut
  ```

  Note that in this case the python path is different: `/ceph/apps/ubuntu-20/packages/deeplabcut/2022-07-06/lib/python3.9`.
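A quick way to check which Python interpreter is actually in use (the issue behind the module-ordering observations above) is to print `sys.executable` from within the interpreter:

```python
# Print the path of the Python binary currently running.
# If it points into the miniconda module tree rather than into your
# conda environment, the module/environment loading order bit you.
import sys

print(sys.executable)
```

Running this in both `python` and `ipython` after each loading order makes the discrepancy described above easy to spot.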
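For an actual training job (rather than the 1G interactive test session above), you would typically submit a batch script rather than use `srun` interactively. Below is a minimal sketch, assuming the same partition and modules as in the steps above; the memory value, log file name, and `train.py` script are placeholders you would replace:

```shell
#!/bin/bash
#SBATCH -p gpu               # same GPU partition as the interactive example
#SBATCH --gres=gpu:1         # request one GPU
#SBATCH --mem=16G            # placeholder: real jobs need more than the 1G test value
#SBATCH -o dlc_train_%j.out  # placeholder log file name

# Load the same modules as in the interactive test, in the same order
module load miniconda
module load cuda/11.2

# 'conda activate' can fail in non-interactive shells;
# 'source activate' is a common fallback on clusters
source activate DLC_HPC

python train.py  # placeholder for your DeepLabCut training script
```

This is untested on the SWC cluster; check the SWC wiki for the site-specific `sbatch` conventions before relying on it.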