# NVIDIA Isaac ROS
> [name=江廷威 (twc+contact@aleeas.com)]
## [isaac_ros_common](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_common/index.html)
Our repo: https://github.com/NCTU-AUV/isaac_ros_common
### Start the container
:::info
These commands should start the container even without Internet access
:::
```
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
./scripts/orca_run_dev.sh ${ISAAC_ROS_WS}
```
or
```
# alias of the above commands
isa
```
### Build packages
```
colcon build --symlink-install
. install/setup.sh
```
### TODO
Add some commands to build and run the packages more easily.
## [isaac_ros_yolov8](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_object_detection/isaac_ros_yolov8/index.html#quickstart)
### Custom Model
1. Follow [this](https://hackmd.io/VNhcl1kgQc6obh53VP-7kw?view) tutorial to collect data and train a model.
2. Export the best model to `.onnx` format. **This step should be done on an x86 machine** (Google Colab and Kaggle are both fine).
3. Modify line 58 of `isaac_ros_object_detection/isaac_ros_yolov8/src/yolov8_decoder_node.cpp`, because the number of classes is hard-coded there; see [issue 32](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_object_detection/issues/32#issuecomment-1827859460).
#### Launch the detection node (w/ visualization)
- Replace the launch arguments `model_file_path` and `engine_file_path` with the paths to our model (the `.plan` engine file is generated automatically on the first launch).
- To display the correct class names in the visualization, change the `names` dict in `isaac_ros_object_detection/isaac_ros_yolov8/scripts/isaac_ros_yolov8_visualizer.py`, e.g.
```python
names = {
    0: 'blue_drum',
    1: 'blue_flare',
    2: 'gate',
    3: 'gate_side',
    4: 'metal_ball',
    5: 'orange_flare',
    6: 'qulification_gate',
    7: 'red_drum',
    8: 'red_fare',
    9: 'yellow_flare',
}
```
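Rather than typing the mapping by hand, the `names` dict can be generated from the same ordered class list used in the dataset's `data.yaml` — a minimal sketch, assuming the list order matches the one used during training:

```python
# Class list in the same order as the dataset's data.yaml (assumed to match training).
classes = [
    'blue_drum', 'blue_flare', 'gate', 'gate_side', 'metal_ball',
    'orange_flare', 'qulification_gate', 'red_drum', 'red_fare',
    'yellow_flare',
]

# Index -> name mapping in the form expected by the visualizer script.
names = dict(enumerate(classes))
print(names[2])  # gate
```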
```
cd /workspaces/isaac_ros-dev && \
ros2 launch isaac_ros_yolov8 isaac_ros_yolov8_visualize.launch.py \
model_file_path:=./src/best.onnx \
engine_file_path:=./src/best.plan \
input_binding_names:=['images'] \
output_binding_names:=['output0'] \
network_image_width:=640 \
network_image_height:=640 \
force_engine_update:=False \
image_mean:=[0.0,0.0,0.0] \
image_stddev:=[1.0,1.0,1.0] \
input_image_width:=640 \
input_image_height:=640 \
confidence_threshold:=0.25 \
nms_threshold:=0.45
```
(for Gazebo)
```
cd /workspaces/isaac_ros-dev && \
ros2 launch isaac_ros_yolov8 isaac_ros_yolov8_visualize_gazebo.launch.py \
model_file_path:=./src/sim_best.onnx \
engine_file_path:=./src/sim_best.plan \
input_binding_names:=['images'] \
output_binding_names:=['output0'] \
network_image_width:=640 \
network_image_height:=640 \
force_engine_update:=False \
image_mean:=[0.0,0.0,0.0] \
image_stddev:=[1.0,1.0,1.0] \
input_image_width:=640 \
input_image_height:=640 \
confidence_threshold:=0.25 \
nms_threshold:=0.45
```
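`image_mean` and `image_stddev` control how the encoder normalizes each pixel before inference. With the values in the launch commands above, pixels are simply rescaled to [0, 1]. A sketch of the normalization, assuming the encoder divides by 255 before subtracting the mean (how we understand the DNN image encoder to behave):

```python
def normalize(pixel, mean=0.0, stddev=1.0):
    """Per-channel normalization as (pixel/255 - mean) / stddev (assumed encoder behavior)."""
    return (pixel / 255.0 - mean) / stddev

# mean 0.0 and stddev 1.0 (as in the launch commands) => plain 0-1 scaling
print(normalize(255))  # 1.0
print(normalize(0))    # 0.0
```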
#### Launch the detection node (w/o visualization)
```
cd /workspaces/isaac_ros-dev && \
ros2 launch isaac_ros_yolov8 yolov8_tensor_rt.launch.py \
model_file_path:=./src/best.onnx \
engine_file_path:=./src/best.plan \
input_binding_names:=['images'] \
output_binding_names:=['output0'] \
network_image_width:=640 \
network_image_height:=640 \
force_engine_update:=False \
image_mean:=[0.0,0.0,0.0] \
image_stddev:=[1.0,1.0,1.0] \
input_image_width:=640 \
input_image_height:=640 \
confidence_threshold:=0.25 \
nms_threshold:=0.45
```
(for Gazebo)
```
cd /workspaces/isaac_ros-dev && \
ros2 launch isaac_ros_yolov8 yolov8_tensor_rt_gazebo.launch.py \
model_file_path:=./src/sim_best.onnx \
engine_file_path:=./src/sim_best.plan \
input_binding_names:=['images'] \
output_binding_names:=['output0'] \
network_image_width:=640 \
network_image_height:=640 \
force_engine_update:=False \
image_mean:=[0.0,0.0,0.0] \
image_stddev:=[1.0,1.0,1.0] \
input_image_width:=640 \
input_image_height:=640 \
confidence_threshold:=0.25 \
nms_threshold:=0.45
```
#### Publish an image to the detection node
Note that `/image_raw` is remapped to `/image`:
```
cd /workspaces/isaac_ros-dev && \
ros2 run image_publisher image_publisher_node ./src/qualification_gate.jpg --ros-args --remap /image_raw:=/image
```
#### Results

#### Publish a video to the detection node
Save the following script and replace `video_path` in `main()` with the path to your own video file.
```python
#!/usr/bin/env python3
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, CameraInfo
from cv_bridge import CvBridge
import cv2


class VideoPublisher(Node):
    def __init__(self, video_path):
        super().__init__('video_publisher')

        # Publishers
        self.publisher_img = self.create_publisher(Image, '/image', 10)
        self.publisher_info = self.create_publisher(CameraInfo, '/camera_info', 10)

        # Video source
        self.cap = cv2.VideoCapture(video_path)
        if not self.cap.isOpened():
            self.get_logger().error(f"Failed to open video: {video_path}")
            raise RuntimeError("Video open failed")

        self.bridge = CvBridge()

        # Publish at ~30 FPS
        timer_period = 1 / 30.0
        self.timer = self.create_timer(timer_period, self.timer_callback)

        # ---- Camera Info ----
        width = int(self.cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(self.cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

        self.camera_info = CameraInfo()
        self.camera_info.width = width
        self.camera_info.height = height
        self.camera_info.distortion_model = "plumb_bob"

        # No distortion
        self.camera_info.d = [0.0, 0.0, 0.0, 0.0, 0.0]

        # Intrinsic camera matrix (simple pinhole model)
        self.camera_info.k = [
            1.0, 0.0, width / 2.0,
            0.0, 1.0, height / 2.0,
            0.0, 0.0, 1.0
        ]

        # Rectification matrix (identity)
        self.camera_info.r = [
            1.0, 0.0, 0.0,
            0.0, 1.0, 0.0,
            0.0, 0.0, 1.0
        ]

        # Projection matrix (simple pinhole)
        self.camera_info.p = [
            1.0, 0.0, width / 2.0, 0.0,
            0.0, 1.0, height / 2.0, 0.0,
            0.0, 0.0, 1.0, 0.0
        ]

        self.camera_info.header.frame_id = "camera_frame"
        self.get_logger().info(f"VideoPublisher initialized — {width}x{height} @ ~30 FPS")

    def timer_callback(self):
        ret, frame = self.cap.read()
        if not ret:
            # Restart video if it ends
            self.cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
            return

        # Convert OpenCV image to ROS Image message
        img_msg = self.bridge.cv2_to_imgmsg(frame, encoding='bgr8')

        # Timestamp sync
        now = self.get_clock().now().to_msg()
        img_msg.header.stamp = now
        img_msg.header.frame_id = "camera_frame"
        self.camera_info.header.stamp = now

        # Publish both
        self.publisher_img.publish(img_msg)
        self.publisher_info.publish(self.camera_info)


def main(args=None):
    rclpy.init(args=args)
    video_path = "/workspaces/isaac_ros-dev/src/IMG_0764.MOV"  # <-- your video file
    node = VideoPublisher(video_path)
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```
```sh
# ros2 topic echo /detections_output
header:
  stamp:
    sec: 1703241680
    nanosec: 562247525
  frame_id: ''
detections:
- header:
    stamp:
      sec: 1703241680
      nanosec: 562247525
    frame_id: ''
  results:
  - hypothesis:
      class_id: '0'
      score: 0.9111589193344116
    pose:
      pose:
        position:
          x: 0.0
          y: 0.0
          z: 0.0
        orientation:
          x: 0.0
          y: 0.0
          z: 0.0
          w: 1.0
      covariance:
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
      - 0.0
  bbox:
    center:
      position:
        x: 287.0
        y: 310.5
      theta: 0.0
    size_x: 266.0
    size_y: 561.0
  id: ''
```
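The `bbox` in each detection gives the box center and size in pixels. Corner coordinates can be recovered with a small helper — a sketch using the values from the echo above:

```python
def bbox_corners(cx, cy, size_x, size_y):
    """Convert a vision_msgs-style center+size box to (x1, y1, x2, y2) pixel corners."""
    return (cx - size_x / 2, cy - size_y / 2,
            cx + size_x / 2, cy + size_y / 2)

# Values from the detection echoed above
x1, y1, x2, y2 = bbox_corners(287.0, 310.5, 266.0, 561.0)
print(x1, y1, x2, y2)  # 154.0 30.0 420.0 591.0
```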
- Realsense camera node
```
ros2 run realsense2_camera realsense2_camera_node --ros-args --remap /color/image_raw:=/image
```
- Image republishing
```
ros2 run image_transport republish raw --ros-args --remap in:=/realsense/image --remap out:=/image
```
## [isaac_ros_vslam](https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_visual_slam/isaac_ros_visual_slam/index.html#quickstart)
### Example
#### Launch VSLAM node
```
ros2 launch isaac_ros_visual_slam isaac_ros_visual_slam.launch.py
```
#### Launch Rviz (for visualization)
```
rviz2 -d src/isaac_ros_visual_slam/isaac_ros_visual_slam/rviz/default.cfg.rviz
```
#### Start rosbag (pre-recorded ROS topics)
```
ros2 bag play src/isaac_ros_visual_slam/isaac_ros_visual_slam/test/test_cases/rosbags/small_pol_test/
```
### [VSLAM with Intel Realsense](https://nvidia-isaac-ros.github.io/concepts/visual_slam/cuvslam/tutorial_realsense.html)
#### Launch VSLAM and Realsense camera node
```
ros2 launch isaac_ros_visual_slam isaac_ros_visual_slam_realsense.launch.py
```
#### Offline visualization
```
ros2 bag record -o ${ROSBAG_NAME} \
/camera/imu /camera/accel/metadata /camera/gyro/metadata \
/camera/infra1/camera_info /camera/infra1/image_rect_raw \
/camera/infra1/metadata \
/camera/infra2/camera_info /camera/infra2/image_rect_raw \
/camera/infra2/metadata \
/tf_static /tf \
/visual_slam/status \
/visual_slam/tracking/odometry \
/visual_slam/tracking/slam_path /visual_slam/tracking/vo_path \
/visual_slam/tracking/vo_pose /visual_slam/tracking/vo_pose_covariance \
/visual_slam/vis/landmarks_cloud /visual_slam/vis/loop_closure_cloud \
/visual_slam/vis/observations_cloud \
/visual_slam/vis/pose_graph_edges /visual_slam/vis/pose_graph_edges2 \
/visual_slam/vis/pose_graph_nodes
```
```
cd /workspaces/isaac_ros-dev
rviz2 -d src/isaac_ros_visual_slam/isaac_ros_visual_slam/rviz/vslam_keepall.cfg.rviz
```
```
ros2 bag play ${ROSBAG_NAME}
```
## References
- https://developer.nvidia.com/isaac-ros
- https://github.com/NVIDIA-ISAAC-ROS