Create a new conda environment and follow the install instructions from the two READMEs in the repository's subdirectories:
https://github.com/simonchamorro/legged-gym-rl
We assume you're doing all of this in the legged_gym/legged_gym subdirectory, where you should see the scripts folder (https://github.com/simonchamorro/legged-gym-rl/tree/main/legged_gym/legged_gym):
cd legged_gym/legged_gym
To check that everything works correctly, try the command:
python scripts/train.py --task=go1_mrss_novel
If you want to see the robots, you can attach the sim viewer (only works if a physical screen is attached to the GPU):
python scripts/train.py --task=go1_mrss_novel --use_viewer
If you're running this on a cluster machine, make sure to use the headless flag to disable all rendering facilities:
python scripts/train.py --task=go1_mrss_novel --headless
When training is done, the trained policy can be found under the name of the environment in the logs/ folder.
Some useful flags for train.py:

--seed 1234
You can set a fixed seed, which is always advisable for RL environments.

--experiment_name ladeeda
By default all experiments are logged to Weights & Biases (I think), and the experiment name helps identify the run.

--task X
There are many small variations on the environments; the full list can be found at the bottom of pupperfetch/legged_gym/envs/__init__.py.
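Putting the flags together, a training invocation might look like the following (the experiment name here is a made-up placeholder, and --headless assumes you're on a cluster machine as described above):

```shell
# Hypothetical combined invocation; "my_experiment" is a placeholder name.
python scripts/train.py --task=go1_mrss_novel --headless \
    --seed 1234 --experiment_name my_experiment
```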
To visualize a low-level policy with random commands, use the 02-play script. There are also other scripts that test specific features or give you remote control of the low-level (LL) policy.
python scripts/02-play.py --task go1_flat --load_run 23Jan16_12-53-01_v1-0.5 --use_viewer
Replace --task go1_flat with whatever task you trained, replace --load_run 23J... with the name of your trained policy (found under logs/TASK_NAME/RUN_NAME), and add the --use_viewer flag to actually open a window and visualize the lil puppers.
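If you don't want to copy run names by hand, you can grab the most recently modified run directory for a task. This is a self-contained sketch that simulates the logs/TASK_NAME layout in a temp directory (the run names are made up); it assumes the newest directory is the run you want:

```shell
# Simulate logs/TASK_NAME with two made-up run directories so the
# snippet runs anywhere; in practice, point BASE at your logs/ folder.
BASE=$(mktemp -d)
mkdir -p "$BASE/go1_flat/23Jan16_12-53-01_run_a" "$BASE/go1_flat/23Jan17_09-00-00_run_b"
# Give the directories distinct modification times (format: YYYYMMDDhhmm).
touch -t 202301161253 "$BASE/go1_flat/23Jan16_12-53-01_run_a"
touch -t 202301170900 "$BASE/go1_flat/23Jan17_09-00-00_run_b"
# ls -t sorts entries by modification time, newest first.
LATEST_RUN=$(ls -t "$BASE/go1_flat" | head -n 1)
echo "$LATEST_RUN"
```

You could then pass "$LATEST_RUN" as the --load_run argument instead of typing the timestamped name out.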