Important advice:
Use `mamba` instead of `conda`. It is much faster at resolving environments during package installation, and mamba commands are the same as conda ones. The best way to install mamba is to install `Miniforge`. See: https://github.com/conda-forge/miniforge

List all your required packages and create the environment by specifying all of them in the environment creation step, like so:
`mamba create -n ENV_NAME python=3.9 pytorch=2.1.2=cuda112_py39he142099_301 jupyter numpy pandas librosa scikit-learn scikit-optimize tensorboard tqdm matplotlib scipy nodejs -c conda-forge --channel-priority strict`

Avoid installing packages one by one after this step: it often leads to broken environments and slow environment solving. Install all packages from the conda-forge repository; it is safer than mixing packages from different sources.

-----------------------------------------------------------------

PyTorch installation guidelines:

If the system has an old CUDA version, recent PyTorch 2 builds compiled against it are available on the conda-forge repository. Install one by specifying the build that matches your CUDA drivers: browse the available files and look for a matching version string. For instance, to install the file "linux-64/pytorch-2.1.2-cuda112_py39he142099_301.conda", use the command:
`conda install -c conda-forge pytorch=2.1.2=cuda112_py39he142099_301`
Caution: create the environment with the matching Python version (here Python 3.9).

To install Python libraries with pip (through the lab proxy):
`pip install LIBRARY --proxy=http://webproxy.lab-ia.fr:8080`

Better way to install PyTorch with torchaudio:
`conda install cudatoolkit=11.2 -c conda-forge`
`conda install pytorch torchaudio torchvision cudatoolkit=11.2 -c pytorch`

-----------------------------------------------------------------

LabIA guide (how to get a fast GPU):

Set up your LabIA account and ssh configuration: https://lab-ia.fr/getting-started/
Additional information: https://lab-ia.fr/faq/

Commands:
- `sgpu` for listing available GPUs
- `srun` for getting a GPU

Example:
`srun -c 16 --time=24:00:00 --gres=gpu:1 --nodelist=n55 --pty bash`

Advice:
- Execute the `srun` command inside a `tmux` session to keep your session active if you are disconnected from the server, so you don't lose your work (see the tmux sketch at the end of this guide). https://tmuxcheatsheet.com/
- Start a `jupyter lab` and do all your work in it (notebooks, terminals, running Python scripts). Avoid using VSCode, which works badly on old servers.

-----------------------------------------------------------------

Get GPU memory usage in real time:
`watch -n 1 nvidia-smi`

-----------------------------------------------------------------

Start jupyter lab for tunnelling:
`jupyter lab --no-browser --port=6969 --ip=0.0.0.0`

-----------------------------------------------------------------

Start a tensorboard server:
`tensorboard --logdir FOLDER_NAME --bind_all`

----------------------------------------------------------------

ssh tunnelling:
`ssh -N -L localhost:PORT_ON_YOUR_LAPTOP:SLURM_NODE:PORT_ON_SERVER slurm`
for instance:
`ssh -N -L localhost:8889:n101:6006 slurm`
Then simply open this URL in your browser: http://localhost:PORT_ON_YOUR_LAPTOP

----------------------------------------------------------------
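
Check that PyTorch sees the GPU:
Once the environment is installed and you have a GPU allocated with `srun`, a quick sanity check (a minimal sketch; the reported CUDA version depends on the build you installed):
`python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"`
`python -c "import torch; print(torch.cuda.get_device_name(0))"`
If `torch.cuda.is_available()` prints False, the PyTorch build probably does not match the node's CUDA drivers.

----------------------------------------------------------------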
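
Basic tmux workflow (sketch):
A minimal set of tmux commands for keeping your `srun` session alive; the session name `gpu` is just an example:
- `tmux new -s gpu` to start a named session (run your `srun` command inside it)
- press Ctrl-b then d to detach while leaving everything running
- `tmux attach -t gpu` to reattach after reconnecting to the server
- `tmux ls` to list your sessions

----------------------------------------------------------------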
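
Example `~/.ssh/config` entry for the `slurm` alias used above (a sketch with placeholder names; take the actual host name, user name and any required jump host from the getting-started guide):
`Host slurm`
`    HostName SLURM_FRONTEND_HOSTNAME`
`    User YOUR_USERNAME`
If the guide requires going through a gateway, add a `ProxyJump GATEWAY_HOSTNAME` line to this entry.

----------------------------------------------------------------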