## GUI-focused
1. Run the latest PyTorch container with some GPUs (let's say 2). From the GUI, specify the command
```bash
/bin/bash -c "source ~/.bashrc && jupyter lab --ip=0.0.0.0 --port=8000 --allow-root --no-browser --NotebookApp.token='vscode_demo' --NotebookApp.allow_origin='*' --notebook-dir=/"
```
and forward port **8000** via HTTPS.
2. Open the job tab, wait until the job is **RUNNING**, and click the URL shown next to the port. Jupyter Lab will open; enter the token `vscode_demo`.
3. Start a terminal. In the `/workspace` folder, run
```bash
git clone -b v1.20.0 https://github.com/NVIDIA/NeMo.git
```
4. Then, in the file browser on the left, open `/workspace/NeMo/tutorials/nlp/Text_Classification_Sentiment_Analysis.ipynb`.
5. Run all the cells up to **Building the PyTorch Lightning Trainer** with `Shift+Enter` or with the `>>` button on the top panel. If you have more than one GPU, you can jump directly to [step 10](#cli-start) and launch training from the CLI instead; with a single GPU, move on to the next step.
6. In the second cell below, set the actual number of devices:
```python
config.trainer.devices = 2 # <-- 2 because we have selected a 2-GPU instance
```
7. In the same cell, update `max_epochs` to a bigger value:
```python
config.trainer.max_epochs = 10
```
8. In the same cell, you can increase the batch size from 64 to 512:
```python
config.model.train_ds.batch_size = 512
config.model.validation_ds.batch_size = 512
```
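A quick sanity check on what these overrides imply: with a per-GPU batch of 512 on 2 GPUs, the effective (global) batch is 1024, so an epoch takes only a few dozen optimizer steps. A rough sketch, assuming the standard SST-2 (GLUE) training split of 67,349 sentences (check the line count of your actual TSV):

```python
import math

# Assumed SST-2 (GLUE) training-set size; verify against your own train TSV.
num_train_examples = 67_349
per_gpu_batch = 512   # config.model.train_ds.batch_size
num_devices = 2       # config.trainer.devices

global_batch = per_gpu_batch * num_devices            # examples per optimizer step
steps_per_epoch = math.ceil(num_train_examples / global_batch)

print(global_batch)     # 1024
print(steps_per_epoch)  # 66
```

With so few steps per epoch, raising `max_epochs` (as in the previous step) is what keeps the total number of updates reasonable.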
9. Continue running the cells up to and including
```python
# start model training
trainer.fit(model)
model.save_to(config.save_to)
```
10. <a name="cli-start"></a>Look at the GPU utilization in another terminal using
```bash
nvidia-smi dmon -s pucemt
```
Specifically, pay attention to power consumption.
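`dmon` prints one row per GPU per second, with `#`-prefixed header lines naming the columns (power draw, SM/memory utilization, temperatures, and so on, depending on the `-s` flags). If you want to post-process the stream, a minimal parser sketch (the sample output below is illustrative, not captured from a real run):

```python
def parse_dmon(text):
    """Parse `nvidia-smi dmon`-style output into a list of per-row dicts."""
    rows, columns = [], None
    for line in text.splitlines():
        if line.startswith("#"):
            # The first comment line holds column names; the second holds units.
            if columns is None:
                columns = line.lstrip("# ").split()
            continue
        values = line.split()
        if columns and len(values) == len(columns):
            rows.append(dict(zip(columns, values)))
    return rows

# Illustrative sample; the real column set depends on -s flags and driver version.
sample = """\
# gpu   pwr  sm  mem
# Idx     W   %    %
    0   221  98   41
    1   219  97   40
"""
for row in parse_dmon(sample):
    print(row["gpu"], row["pwr"])
```

Power draw close to the board limit on all devices is a good sign that the GPUs, not the input pipeline, are the bottleneck.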
11. You already have all the data at hand, so let's try launching the same training via the CLI. Shut down all the kernels, and in another terminal run:
```bash
cd /workspace/NeMo/examples/nlp/text_classification
python text_classification_with_bert.py \
model.dataset.num_classes=2 \
model.train_ds.file_path=/workspace/NeMo/tutorials/nlp/DATA_DIR/SST-2/train_nemo_format.tsv \
model.train_ds.batch_size=512 \
model.validation_ds.file_path=/workspace/NeMo/tutorials/nlp/DATA_DIR/SST-2/dev_nemo_format.tsv \
model.validation_ds.batch_size=512 \
trainer.max_epochs=50 \
trainer.devices=2 \
trainer.accelerator='gpu'
```
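The `key.subkey=value` arguments are Hydra/OmegaConf-style dotted overrides that the script merges into its default config. As a rough illustration of how such overrides map onto a nested config (a simplified sketch with naive type coercion, not NeMo's actual parsing code):

```python
def apply_overrides(config, overrides):
    """Merge 'a.b.c=value'-style overrides into a nested dict (simplified)."""
    for item in overrides:
        dotted_key, _, raw = item.partition("=")
        node = config
        *parents, leaf = dotted_key.split(".")
        for key in parents:
            node = node.setdefault(key, {})
        # Crude scalar coercion; real Hydra resolves types against the schema.
        node[leaf] = int(raw) if raw.isdigit() else raw.strip("'")
    return config

config = {"trainer": {"devices": 1}}
apply_overrides(config, ["trainer.devices=2", "trainer.accelerator='gpu'"])
print(config)  # {'trainer': {'devices': 2, 'accelerator': 'gpu'}}
```

This is why the CLI run needs no notebook edits: every value you changed by hand in steps 6-8 is passed as an override instead.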
12. You're amazing!