# Go1 Legged Gym Tutorial
## 0. Abbreviations
- **LL** or **LLP** - Low-level policy, which converts directional commands (linear velocity on the x-y plane and angular velocity around the UP axis) into joint angles
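To make the interface concrete, here is a rough sketch of what an LLP's input/output contract looks like. All names here are illustrative, not from the codebase: a real LLP is a trained neural network, and it also consumes more proprioceptive state than shown.

```python
import numpy as np

NUM_JOINTS = 12  # Go1: 3 actuated joints per leg, 4 legs

def dummy_low_level_policy(command, joint_obs):
    """Illustrative stand-in for a trained low-level policy.

    command:   (vx, vy, wz) - x/y linear velocity and yaw rate
    joint_obs: current joint angles, shape (12,)
    returns:   target joint angles, shape (12,)
    """
    vx, vy, wz = command
    # A real policy is a neural network; here we just nudge the
    # current pose by a command-dependent offset for illustration.
    offset = 0.05 * (vx + vy + wz)
    return np.asarray(joint_obs) + offset

targets = dummy_low_level_policy((0.5, 0.0, 0.1), np.zeros(NUM_JOINTS))
```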
## 1. Install everything
Create a new conda environment and follow the install instructions from the two READMEs in the subdirectories:
https://github.com/simonchamorro/legged-gym-rl
- You should use Python 3.8.
- Due to the isaacgym dependency, this only works on a machine with a CUDA GPU of at least the Nvidia Pascal architecture (GTX 1080 or later).
- You need a specific version of pytorch, as mentioned in the readme.
- Because of this, I highly recommend making a new conda env and not trying to install in an existing env.
- You need a specific IsaacGym version. Ideally Preview-3, but Preview-4 should work too. Preview-3 can be found here (personal backup): https://drive.google.com/file/d/1ksA2q4aKpgROFClEEAzy9tNGblr_tzFt/view?usp=share_link
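The steps above boil down to something like the following. Treat this as a sketch only: the exact package versions and install commands come from the two READMEs, not from here.

```shell
# Sketch only - follow the repo READMEs for exact versions.
conda create -n legged-gym python=3.8 -y
conda activate legged-gym

# Install the specific pytorch version named in the readme, e.g.:
# pip install torch==<version-from-readme>

# IsaacGym Preview-3: download, unpack, then install its python package
cd isaacgym/python && pip install -e . && cd -

# Finally, clone and install the repo itself
git clone https://github.com/simonchamorro/legged-gym-rl
cd legged-gym-rl && pip install -e .
```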
## 2. Training a low-level policy
We assume you're running everything from the `legged_gym/legged_gym` subdirectory, where you should see the `scripts` folder (https://github.com/simonchamorro/legged-gym-rl/tree/main/legged_gym/legged_gym):
```shell
cd legged_gym/legged_gym
```
To check that everything works correctly, try the command:
```shell
python scripts/train.py --task=go1_mrss_novel
```
If you want to see the robots, you can attach the sim viewer (only works if a physical screen is attached to the GPU):
```shell
python scripts/train.py --task=go1_mrss_novel --use_viewer
```
If you're running this on a cluster machine, make sure to use the headless flag to disable all rendering facilities:
```shell
python scripts/train.py --task=go1_mrss_novel --headless
```
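The flags used above follow the standard argparse pattern. A minimal sketch of how such a CLI could be wired (the flag names match this tutorial, but the parser and defaults here are illustrative, not the repo's actual code):

```python
import argparse

def build_parser():
    # Flags mirror the ones used in this tutorial; defaults are guesses.
    p = argparse.ArgumentParser(description="Train a low-level policy")
    p.add_argument("--task", type=str, default="go1_mrss_novel")
    p.add_argument("--use_viewer", action="store_true",
                   help="attach the sim viewer (needs a physical screen)")
    p.add_argument("--headless", action="store_true",
                   help="disable all rendering (for cluster machines)")
    p.add_argument("--seed", type=int, default=None)
    p.add_argument("--experiment_name", type=str, default=None)
    return p

args = build_parser().parse_args(["--task", "go1_mrss_novel", "--headless"])
```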
When training is done, the trained policy can be found under the name of the environment in the `logs/` folder.
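For example, to grab the most recent run directory for a task programmatically (assuming the `logs/<task>/<run>` layout described above; this helper is not part of the repo):

```python
from pathlib import Path

def latest_run(log_root, task):
    """Return the most recently modified run directory under logs/<task>/."""
    task_dir = Path(log_root) / task
    runs = [d for d in task_dir.iterdir() if d.is_dir()]
    return max(runs, key=lambda d: d.stat().st_mtime)
```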
### 2.1 Additional parameters
- `--seed 1234` sets a fixed seed, which is always advisable for RL experiments
- `--experiment_name ladeeda` as far as I know, all experiments are logged to Weights & Biases by default, and this name helps identify the run
- `--task X` there are many small variations on the environments; the full list is at the bottom of [`pupperfetch/legged_gym/envs/__init__.py`](https://github.com/montrealrobotics/pupperfetch/blob/main/pupperfetch/legged_gym/envs/__init__.py#L112)
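Fixing the seed is what makes two runs comparable. The repo's `--seed` flag handles this for you (including torch and the simulator); the snippet below is just the underlying idea, shown with the Python and NumPy RNGs only:

```python
import random
import numpy as np

def seed_everything(seed):
    """Seed the Python and NumPy RNGs so runs are reproducible."""
    random.seed(seed)
    np.random.seed(seed)

seed_everything(1234)
a = np.random.rand(3)
seed_everything(1234)
b = np.random.rand(3)
# a and b are identical: same seed, same draws
```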
## 3. Playing back a low-level policy
To visualize a low-level policy with random commands, use the 02-play script. There are also more specialized scripts that test specific features or give you a remote control for the LL policy.
### 3.1 Low-level policy with random commands
```shell
python scripts/02-play.py --task go1_flat --load_run 23Jan16_12-53-01_v1-0.5 --use_viewer
```
Replace `--task go1_flat` with your task, replace `--load_run 23J...` with the name of your trained run (found under `logs/TASK_NAME/RUN_NAME`), and add the `--use_viewer` flag to actually open a window and visualize the lil puppers.
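"Random commands" here just means directional commands resampled uniformly from fixed ranges. A sketch of what that sampling might look like (the ranges below are made up for illustration, not the repo's actual limits):

```python
import random

# Illustrative command ranges (m/s, m/s, rad/s) - not the repo's actual limits.
COMMAND_RANGES = {
    "vx": (-1.0, 1.0),   # forward/backward velocity
    "vy": (-0.5, 0.5),   # sideways velocity
    "wz": (-1.0, 1.0),   # yaw rate
}

def sample_command(rng=random):
    """Draw one (vx, vy, wz) command uniformly from COMMAND_RANGES."""
    return tuple(rng.uniform(lo, hi) for lo, hi in COMMAND_RANGES.values())

cmd = sample_command()
```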