# Running Robogym with Locobot Hardware
## Software
You will need:
- [Robo-gym](https://github.com/montrealrobotics/robo-gym/tree/ROS2)
- [Robo-gym robot servers](https://github.com/montrealrobotics/robo-gym-robot-servers/tree/ros2)

The robot servers package is a ROS2 package; it can run on a different machine from robo-gym.
## Available Locobot Robots
The following Locobot robots are available, all with username `locobot`:
**robot_model `locobot_wx250s`:**
- **pt-cruiser**
- **honda-civic**
- **ford-pinto**
- **pontiac-aztek**
**robot_model `locobot_wx200`:**
- **reliant-robin**
The locobot_wx200 arm has one fewer DOF (5 instead of 6); to use this robot, be sure to swap the `robot_model` arg value in all the launch commands below.
### Note
pontiac-aztek has no lidar and the arm has... issues.
It can be launched as normal, but arm errors will be displayed. Just make sure that when sending a reset command, you send 0 for all arm joints, for example:
```python
obs, _ = self.env.reset(options={"joint_positions": [0, 0, 0, 0, 0, 0]})
```
## Initial Setup
### Connecting to a Locobot
To connect to a locobot wirelessly, ensure you're connected to the same WiFi network, then use SSH:
```bash
ssh locobot@ford-pinto.local
```
### ROS Domain Configuration
On your local machine, set the `ROS_DOMAIN_ID` to match the target robot:
| Robot | Domain ID |
|---------------|-----------|
| ford-pinto | 20 |
| pt-cruiser | 21 |
| honda-civic | 22 |
| reliant-robin | 23 |
| pontiac-aztek | 24 |
**Example:**
```bash
export ROS_DOMAIN_ID=20
```
### ROS2 DDS Configuration
Follow the official instructions to configure ROS2 DDS:
[Interbotix RMW Configuration Guide](https://docs.trossenrobotics.com/interbotix_xslocobots_docs/getting_started/rmw_configuration.html)
Once configured, verify the connection on your local machine by listing topics:
```bash
ros2 topic list
```
This should display the locobot's topics after launching the robot control.
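If you prefer to check from Python, here is a minimal sketch using rclpy's topic discovery (the node name and sleep duration are arbitrary choices):
```python
# List discovered topics from Python; equivalent to `ros2 topic list`.
import time
import rclpy

rclpy.init()
node = rclpy.create_node('topic_check')
time.sleep(1.0)  # give DDS discovery a moment to find the robot
for name, types in node.get_topic_names_and_types():
    print(name, types)
node.destroy_node()
rclpy.shutdown()
```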
## Launching Locobot Control
Use the following command to launch locobot control:
```bash
ros2 launch interbotix_xslocobot_control xslocobot_control.launch.py \
robot_model:=locobot_wx250s \
use_base:=true \
use_camera:=false \
use_lidar:=true \
lidar_type:=rplidar_a2m8 \
use_usb_cam:=true
```
The `usb_cam` package must be installed to use the fisheye camera; `use_usb_cam:=true` is the argument that launches it.
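To confirm that fisheye frames are arriving, a subscriber along these lines can be used; note the topic name `/locobot/usb_cam/image_raw` is an assumption, so check `ros2 topic list` for the actual name on your setup:
```python
# Sanity-check sketch: log the size/encoding of one incoming camera frame.
import rclpy
from sensor_msgs.msg import Image

rclpy.init()
node = rclpy.create_node('fisheye_check')

def on_image(msg: Image):
    node.get_logger().info(f'got {msg.width}x{msg.height} {msg.encoding} frame')

node.create_subscription(Image, '/locobot/usb_cam/image_raw', on_image, 10)
rclpy.spin_once(node, timeout_sec=5.0)  # wait up to 5 s for one frame
node.destroy_node()
rclpy.shutdown()
```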
**Successful launch confirmation:**
```
[xs_sdk-3] [INFO] [1752163350.102770397] [interbotix_xs_sdk.xs_sdk]: InterbotixRobotXS is up!
```
## Testing the Connection
Test the connection by publishing a movement command to the base:
```bash
ros2 topic pub /locobot/mobile_base/cmd_vel geometry_msgs/msg/Twist "linear:
x: 0.1
y: 0.0
z: 0.0
angular:
x: 0.0
y: 0.0
z: 0.0"
```
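The same test can be driven from Python; a minimal rclpy sketch (publish rate and duration are arbitrary):
```python
# Drive the base forward briefly, then stop it.
import rclpy
from geometry_msgs.msg import Twist

rclpy.init()
node = rclpy.create_node('base_test')
pub = node.create_publisher(Twist, '/locobot/mobile_base/cmd_vel', 10)

msg = Twist()
msg.linear.x = 0.1  # slow forward drive
for _ in range(20):  # publish for ~2 s at 10 Hz
    pub.publish(msg)
    rclpy.spin_once(node, timeout_sec=0.1)

pub.publish(Twist())  # zero velocities: stop the base
node.destroy_node()
rclpy.shutdown()
```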
**Troubleshooting:** If the wheels don't move, IP forwarding may not be configured on the robot. Enable it in `/etc/sysctl.conf` (typically by setting `net.ipv4.ip_forward=1`), then apply the change:
```bash
sudo sysctl -p
```
## Running Robogym
### Launch the Robogym Server
In a terminal, set the required environment variables (adjust values as needed):
```bash
export ROS_DOMAIN_ID=20
export RMW_IMPLEMENTATION=rmw_fastrtps_cpp
export ROS_DISCOVERY_SERVER=${LOCOBOT_IP}:11811
```
Then launch the server:
```bash
ros2 launch interbotix_rover_robot_server interbotix_rover_robot_server.launch.py \
world_name:=empty.world \
reference_frame:=base_link \
action_cycle_rate:=20.0 \
rviz_gui:=false \
camera:=true \
resize_image:=true \
real_robot:=true \
context_size:=1 \
robot_model:=locobot_wx250s
```
### Launch the Robogym Docker
In a separate terminal:
```bash
docker compose up robogym -d
docker exec -ti robogym /bin/bash
```
### Testing the Environment
To test the connection:
```bash
python3
```
```python
import gymnasium as gym
import robo_gym
env = gym.make('EmptyEnvironmentInterbotixRRob-v0',
rs_address='127.0.0.1:50051',
gui=True,
robot_model='locobot_wx250s',
with_camera=True)
obs, _ = env.reset()
```
The arm should move to the reset pose (configurable in the environment file).
## Locobot Environment Reference
### Action Space
An action consists of 8 values:
- **6 arm joint angles:** `['waist', 'shoulder', 'elbow', 'forearm_roll', 'wrist_angle', 'wrist_rotate']`
- **2 base velocities:** `['base_velocity_linear_x', 'base_velocity_angular_z']`
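For example, stepping the environment created in the testing section above with a zero arm pose and a slow forward drive (the values here are illustrative):
```python
# 8-value action: 6 arm joint angles followed by 2 base velocities.
action = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0,  # arm joints held at zero
          0.1, 0.0]                      # linear x = 0.1, angular z = 0
obs, reward, terminated, truncated, info = env.step(action)
```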
### Observation Space
The observation dictionary contains:
```python
obs = {'state': [], 'camera': []}
```
**State components:**
- `obs['state']` = `[arm_position×6, arm_velocity×6, wheel_position×2, wheel_velocity×2, base_odom×7]`
**Camera components:**
- `obs['camera']` = `[image1]`
- `obs['camera'].shape` = `(120, 160, 2)`
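A sketch of slicing `obs['state']` into the components above, assuming the 6+6+2+2+7 = 23-element layout:
```python
state = obs['state']
arm_pos   = state[0:6]    # arm joint positions
arm_vel   = state[6:12]   # arm joint velocities
wheel_pos = state[12:14]  # wheel positions
wheel_vel = state[14:16]  # wheel velocities
base_odom = state[16:23]  # odometry, likely position (3) + orientation quaternion (4)
image     = obs['camera'][0]  # shape (120, 160, 2)
```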
More images can be added to the observation by increasing `context_size` when launching the robot server; the gym environment would also need to be updated to match.
## ViNT/CrossFormer specifics
The CrossFormer paper seems to suggest that they use the hardware and software stack from ViNT.
Looking at the ViNT deploy code...
They set waypoints and publish these to a waypoint topic.
`pd_controller.py` contains the subscriber to the waypoint topic.
A PD controller then processes the waypoint:
```python
v, w = pd_controller(waypoint.get())  # convert the waypoint to (linear, angular) velocities
if reverse_mode:
    v *= -1  # drive backwards when reverse mode is enabled
vel_msg.linear.x = v
vel_msg.angular.z = w
print(f"publishing new vel: {v}, {w}")
vel_out.publish(vel_msg)
```
They calculate the velocities from this waypoint and publish them to the robot's `cmd_vel` topic.
It seems as though their action is the base linear velocity x and base angular velocity z.
Their observation seems to consist of only 3 images.
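For reference, a rough rclpy sketch of that waypoint-to-velocity pattern; the topic names, message type, and `pd_controller` stand-in here are hypothetical, not ViNT's actual code (see the fork linked below for that):
```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
from std_msgs.msg import Float32MultiArray

def pd_controller(waypoint):
    # Stand-in gains: map a local (dx, dy) waypoint to (v, w).
    dx, dy = waypoint[0], waypoint[1]
    return 0.5 * dx, 1.0 * dy

class WaypointFollower(Node):
    def __init__(self):
        super().__init__('waypoint_follower')
        self.vel_out = self.create_publisher(Twist, '/locobot/mobile_base/cmd_vel', 10)
        self.create_subscription(Float32MultiArray, '/waypoint', self.on_waypoint, 10)

    def on_waypoint(self, msg):
        # Convert the latest waypoint to velocities and republish as a Twist.
        v, w = pd_controller(msg.data)
        vel_msg = Twist()
        vel_msg.linear.x = v
        vel_msg.angular.z = w
        self.vel_out.publish(vel_msg)

rclpy.init()
rclpy.spin(WaypointFollower())
```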
I have created a fork and updated the code to work with ROS2 and robo-gym. See [this repository](https://github.com/montrealrobotics/visualnav-transformer/tree/ROS2) and its README for details on use.