```
files:
cloud_visualizer.cpp
depth_to_pointcloud.cpp
object_locator.cpp
sim_test.cpp
```
## cloud_visualizer.cpp (DEBUG FILE)
### Overview
#### Classes
`CloudVisualizerNode`
```p
Inherits: rclcpp::Node
Subscribers:
"/pointcloud"
```
### What It Is
Used for debugging and visualizing the navigation point cloud. Not used during normal operation of the sub.
### Subscribers
#### `/pointcloud`
- Launches the point cloud visualizer upon receiving a pointcloud
### Constructor
```cpp
CloudVisualizerNode() : Node ("cloud_visualizer_node")
```
- Initializes the Subscriber and point cloud viewer
### Methods
```cpp
void pointcloud_callback (const sensor_msgs::msg::PointCloud2::SharedPtr msg)
```
- Callback which receives data from the subscriber.
- Processes the data and passes it to the `visualize_cloud` function to display it on screen.
```cpp
void visualize_cloud (pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud);
```
- Displays the point cloud using [PCL's point cloud viewer](https://pointclouds.org/documentation/classpcl_1_1visualization_1_1_cloud_viewer.html).
---
## depth_to_pointcloud.cpp
### Overview
#### Classes
`DepthProjectionNode`
```p
Inherits: rclcpp::Node
Parameters:
"max_dist" (float)
"min_dist" (float)
"use_rgb" (bool)
"use_mask" (bool)
Publisher:
"/pointcloud"
Subscribers:
"/depth"
"/camera_info"
"/rgb"
"/mask"
```
### What It Is
Depth to pointcloud converts depth mask data from the camera into a pointcloud that can be used for navigation.
### Parameters:
#### `max_dist`
Maximum possible distance represented in the depth image. Used for normalizing the data.
#### `min_dist`
Minimum possible distance represented in the depth image. Used for normalizing the data.
#### `use_rgb`
Whether RGB data is stored in the point cloud. If true, the `/rgb` subscriber is enabled.
#### `use_mask`
Whether a mask is used to determine which values to ignore. If true, the `/mask` subscriber is enabled.
### Publishers:
#### `/pointcloud`
Sends the converted pointcloud data out to navigation.
### Subscribers
#### `/depth`
Receives the depth mask data required for creating the pointcloud
#### `/camera_info`
Receives info about the camera, containing the camera offset and field of view. **ALL DEPTH MASK PROCESSING IS SKIPPED UNTIL THIS IS RECEIVED** (I think that's what these variables are, because they aren't labeled very well :frowning:)
#### `/rgb`
Receives the rgb image from the camera
#### `/mask`
Receives the image mask used to create the pointcloud
### Constructor
```cpp
DepthProjectionNode() : Node ("depth_to_pointcloud_node"),
camera_info_received_ (false)
```
Loads the associated parameters and initializes the correct subscribers. If rgb or mask is in use, sets up a synchronizer to ensure that all data is received before processing a point cloud.
```cpp
sync3_ = std::make_shared<message_filters::Synchronizer<SyncPolicy3>> (SyncPolicy3 (10), *depth_filter_, *mask_filter_, *rgb_filter_);
```
- Sets up the synchronizer used to handle data when all three inputs (depth, rgb, and mask) are requested for the point cloud.
### Methods
#### Callback Functions
```cpp
void camera_info_callback (const sensor_msgs::msg::CameraInfo::SharedPtr msg);
```
Initializes camera info (camera offset and FOV) and allows depth mask processing to begin.
```cpp
void depth_callback (const sensor_msgs::msg::Image::SharedPtr msg);
```
Most basic of the depth callback functions. Used when parameters `use_rgb` and `use_mask` are both false. Receives only depth mask data, which it converts to a pointcloud and publishes via the `/pointcloud` topic. If `camera_info_received_` is false (i.e., the camera info has not yet been received), operations are skipped.
```cpp
void depth_and_mask_callback (const sensor_msgs::msg::Image::ConstSharedPtr& depth_msg,
const sensor_msgs::msg::Image::ConstSharedPtr& mask_msg);
```
Callback function used when parameter `use_mask` is true and `use_rgb` is false. Receives both the depth mask data and the image mask, which are converted into a pointcloud and published via the `/pointcloud` topic. If `camera_info_received_` is false (i.e., the camera info has not yet been received), operations are skipped.
```cpp
void depth_and_rgb_callback (const sensor_msgs::msg::Image::ConstSharedPtr& depth_msg,
const sensor_msgs::msg::Image::ConstSharedPtr& rgb_msg);
```
Callback function used when parameter `use_rgb` is true and `use_mask` is false. Receives both the depth mask and the RGB image from the camera, which are converted into a pointcloud and published via the `/pointcloud` topic. If `camera_info_received_` is false (i.e., the camera info has not yet been received), operations are skipped.
```cpp
void depth_mask_and_rgb_callback (const sensor_msgs::msg::Image::ConstSharedPtr& depth_msg,
const sensor_msgs::msg::Image::ConstSharedPtr& mask_msg,
const sensor_msgs::msg::Image::ConstSharedPtr& rgb_msg);
```
Callback function used when parameters `use_rgb` and `use_mask` are both true. Receives all three data types, which are converted into a pointcloud and published via the `/pointcloud` topic. If `camera_info_received_` is false (i.e., the camera info has not yet been received), operations are skipped.
#### Private Methods
```cpp
sensor_msgs::msg::Image::SharedPtr convertDepthFormat (sensor_msgs::msg::Image::SharedPtr msg);
```
Converts the depth mask into [OpenCV](https://opencv.org/) compatible format, which allows the internal image data to be more easily accessed by the pointcloud conversion function.
```cpp
sensor_msgs::msg::Image::SharedPtr convertMaskFormat (sensor_msgs::msg::Image::SharedPtr msg);
```
Converts the image mask into [OpenCV](https://opencv.org/) compatible format, which allows the internal image data to be more easily accessed by the pointcloud conversion function.
```cpp
bool validateImageDimensions (const sensor_msgs::msg::Image* depth_msg,
const sensor_msgs::msg::Image* rgb_msg = nullptr,
const sensor_msgs::msg::Image* mask_msg = nullptr);
```
Used to confirm that the image size of the depth mask matches the dimensions of the RGB and mask images. If only `depth_msg` is provided, all checks are skipped and the function returns `true`.
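The dimension check can be sketched without the ROS types. `ImgDims` below is a hypothetical stand-in for `sensor_msgs::msg::Image`, of which only `width` and `height` matter for this logic; the real implementation may differ in details:

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for sensor_msgs::msg::Image; only the fields this check needs.
struct ImgDims { uint32_t width; uint32_t height; };

// Returns true when every provided image matches the depth image's size.
// Null rgb/mask pointers are skipped, so a depth-only call always passes.
bool validateImageDimensions(const ImgDims* depth,
                             const ImgDims* rgb = nullptr,
                             const ImgDims* mask = nullptr) {
    if (rgb && (rgb->width != depth->width || rgb->height != depth->height))
        return false;
    if (mask && (mask->width != depth->width || mask->height != depth->height))
        return false;
    return true;
}
```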
```cpp
pcl::PointCloud<pcl::PointXYZRGBL>::Ptr projectDepthImage (
const sensor_msgs::msg::Image* depth_msg,
const sensor_msgs::msg::Image* rgb_msg = nullptr,
const sensor_msgs::msg::Image* mask_msg = nullptr
);
```
Performs the actual conversion from depth mask to point cloud and returns the resulting pointcloud.
##### Higher level breakdown
Step 1. Create RGB image
- If an RGB image is provided, use that; otherwise create one from the depth mask using vectorized operations.
- This is done by first normalizing the depth mask using the maximum and minimum distances provided:
```cpp
cv::Mat depth_normalized = (depth_float - min_dist_) / (max_dist_ - min_dist_);
```
- This normalized depth mask is then converted to HSV (Hue, Saturation, Value) format.
- The HSV image is then converted to BGR format using OpenCV, which allows the RGB data to be read out per point:
```cpp
cv::cvtColor (hsv_img, rgb_float, cv::COLOR_HSV2BGR);
```
Step 2: Build the point cloud
- First read all the colour values for the current point, as well as the mask value (defaults to 1 for every point if no mask is in use)
- Then retrieve the distance `x` stored within the depth mask.
- Calculate the `y` and `z` coordinates based off of camera offset and FOV
```cpp
float y = -(static_cast<float> (u) - cx_) * fx_inv_ * x;
float z = -(static_cast<float> (v) - cy_) * fy_inv_ * x;
```
- Finally build the point cloud point and append it to the PCL pointcloud object.
```cpp
pcl::PointXYZRGBL point (x, y, z, r, g, b, mask_value);
cloud->points.push_back (point);
```
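The per-pixel geometry in Step 2 can be exercised in isolation. This sketch assumes the pinhole model implied by the snippet above, with `cx`/`cy` as the principal point and `fx_inv`/`fy_inv` as inverse focal lengths; the helper name `projectPixel` is illustrative, not from the source:

```cpp
#include <cassert>
#include <cmath>

struct Point3 { float x, y, z; };

// Project pixel (u, v) with depth reading x into the camera frame,
// mirroring the sign convention used in projectDepthImage.
Point3 projectPixel(int u, int v, float x,
                    float cx, float cy, float fx_inv, float fy_inv) {
    float y = -(static_cast<float>(u) - cx) * fx_inv * x;
    float z = -(static_cast<float>(v) - cy) * fy_inv * x;
    return {x, y, z};
}
```

A pixel at the principal point maps to `y = z = 0`; pixels to the right of it get negative `y`, matching the negations above.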
### Member Variables
#### Subscribers
```cpp
rclcpp::Subscription<sensor_msgs::msg::Image>::SharedPtr depth_sub_;
rclcpp::Subscription<sensor_msgs::msg::CameraInfo>::SharedPtr camera_info_sub_;
```
Default subscriber handles. `depth_sub_` is only used when `use_rgb` and `use_mask` are both false.
```cpp
std::shared_ptr<message_filters::Subscriber<sensor_msgs::msg::Image>> depth_filter_;
std::shared_ptr<message_filters::Subscriber<sensor_msgs::msg::Image>> rgb_filter_;
std::shared_ptr<message_filters::Subscriber<sensor_msgs::msg::Image>> mask_filter_;
```
These message filters are used when at least one of `use_rgb` or `use_mask` is true. They are used with the synchronizers and act as an equivalent to a subscriber, except without a callback function, relying instead on the synchronizer to coordinate function calls.
#### Synchronizers
```cpp
typedef message_filters::sync_policies::ApproximateTime<
sensor_msgs::msg::Image, sensor_msgs::msg::Image>
SyncPolicy2;
typedef message_filters::sync_policies::ApproximateTime<
sensor_msgs::msg::Image, sensor_msgs::msg::Image, sensor_msgs::msg::Image>
SyncPolicy3;
```
Type definitions for use with the synchronizers. They define the synchronizer policies for both `sync2_` (used when passing `depth` plus either `rgb` or `mask`) and `sync3_` (used when passing all three types of data).
```cpp
std::shared_ptr<message_filters::Synchronizer<SyncPolicy2>> sync2_;
std::shared_ptr<message_filters::Synchronizer<SyncPolicy3>> sync3_;
```
References to the actual synchronizers. The synchronizers are responsible for actually coordinating the pointcloud callbacks, waiting for all data to be received from the message filters before running the callback.
#### Other variables
```cpp
float max_dist_;
float min_dist_;
bool use_rgb_;
bool use_mask_;
```
Data received from the required parameters. Set within the constructor.
```cpp
double cx_, cy_, fx_inv_, fy_inv_;
bool camera_info_received_;
```
Info about the camera: the principal point (`cx_`, `cy_`) and inverse focal lengths (`fx_inv_`, `fy_inv_`). Received through the `/camera_info` topic.
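These are presumably derived in `camera_info_callback` from the `CameraInfo` intrinsic matrix `k` (row-major 3x3, where `fx = k[0]`, `fy = k[4]`, `cx = k[2]`, `cy = k[5]`). A minimal sketch under that assumption; `fromCameraMatrix` is a hypothetical helper, not code from the file:

```cpp
#include <array>
#include <cassert>
#include <cmath>

struct Intrinsics { double cx, cy, fx_inv, fy_inv; };

// Extract the values the projection code needs from a row-major 3x3 K.
Intrinsics fromCameraMatrix(const std::array<double, 9>& k) {
    return {k[2], k[5], 1.0 / k[0], 1.0 / k[4]};
}
```

Storing the inverse focal lengths up front turns the per-point projection into pure multiplications.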
---
## object_locator.cpp (CAIN)
### What It Is
The object locator file processes the point cloud data incoming from the sensors and uses that information to detect and record objects.
## Structure
#### `DetectedObject`
Represents a detected object (its position + class) and provides the conversion utilities used by the node.
##### Fields
```
Eigen::Vector3f position;
int32_t object_class;
```
- The `Eigen::Vector3f position` refers to the 3D coordinates taken from the point cloud frame.
- The `object_class` represents the object's classification. It matches the label field from the point cloud's `PointXYZRGBL` point type.
##### Constructors
```
DetectedObject(Eigen::Vector3f position, int32_t object_class)
```
- This constructor is used when creating objects from point cloud detection results. `position` takes the pre-computed centroid of the object cluster, and `object_class` takes the dominant label from the cluster analysis to categorize it.
```
DetectedObject(const okmr_msgs::msg::DetectedObject& ros_msg)
```
- This constructor converts the ROS message format to the internal representation. It extracts the position from the geometry provided by the message and maps 1:1 between the ROS message fields and the internal fields.
##### Methods
```
okmr_msgs::msg::DetectedObject to_ros_msg()
```
- Creates a new ROS message instance, copying the internal position into the message's pose.
- Preserves the `object_class` value exactly as provided. Used when the service responds to requests for object information.
```
void update_position_estimate(const Eigen::Vector3f& new_position)
```
- Implements a moving average filter with a fixed weight distribution: 80% of the previous position is retained and 20% of the new position is incorporated.
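A scalar sketch of that 80/20 exponential moving average; the real method operates component-wise on an `Eigen::Vector3f`, and the helper name here is illustrative:

```cpp
#include <cassert>
#include <cmath>

// Blend the stored estimate with a new measurement:
// 80% of the old value is retained, 20% of the new one mixed in.
float updatePositionEstimate(float current, float measurement) {
    return 0.8f * current + 0.2f * measurement;
}
```

Repeated updates converge geometrically toward the measured position, which smooths out per-frame detection noise.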
---
### Class
#### `ObjectLocator`
##### Parameters
- `max_pointclouds`: The maximum number of point clouds to keep in queue
- `leaf_size`: Size (meters) of voxel grid for downsampling
- `max_update_distance`: Maximum distance (meters) to associate new detections with existing objects
##### Constructors
```
ObjectLocator() : Node("Object_locator")
```
This constructor initializes a node named `Object_locator`, declares and loads the associated parameters (see above), and is also responsible for setting up the subscribers and creating the service server.
- Subscribing to the `/pointcloud` topic (below)
```
pointcloud_sub_ = this->create_subscription<sensor_msgs::msg::PointCloud2>(
"/pointcloud", 10,
std::bind(&ObjectLocator::pointcloud_callback, this,
std::placeholders::_1));
```
- Creating the `get_objects_by_class` service server (below)
```
object_service_ = this->create_service<okmr_msgs::srv::GetObjectsByClass>(
"get_objects_by_class",
std::bind(&ObjectLocator::get_objects_by_class, this,
std::placeholders::_1, std::placeholders::_2));
```
##### Private Methods
```
pointcloud_callback(const sensor_msgs::msg::PointCloud2::SharedPtr msg)
```
The `pointcloud_callback` method is called whenever a new point cloud arrives on the subscribed topic. It converts the ROS message to a PCL point cloud, downsamples the data, merges it with recent clouds, and calls `find_objects_in_pointcloud()` to update `long_term_objects_`.
```
get_objects_by_class(
const std::shared_ptr<okmr_msgs::srv::GetObjectsByClass::Request> request,
std::shared_ptr<okmr_msgs::srv::GetObjectsByClass::Response> response);
```
`get_objects_by_class` is a service handler for external queries of the form "what objects of class X do you have?"
- It scans `long_term_objects_` and converts matching `DetectedObject` entries to ROS messages.
```
on_parameter_change(const std::vector<rclcpp::Parameter> &parameters);
```
`on_parameter_change` is responsible for validating and applying runtime parameter changes.
- It checks the new values requested by the parameter update and applies them to the node's configuration.
- Also trims `pointcloud_queue_` if needed
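The queue-trimming step can be sketched with a plain `std::queue`; `trimQueue` is an illustrative helper, not the node's actual code:

```cpp
#include <cassert>
#include <queue>

// Drop the oldest entries until the queue fits the new limit.
void trimQueue(std::queue<int>& q, int max_size) {
    while (static_cast<int>(q.size()) > max_size)
        q.pop();  // std::queue pops from the front, i.e. the oldest entry
}
```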
##### Private Communication Handles
```
rclcpp::Subscription<sensor_msgs::msg::PointCloud2>::SharedPtr
pointcloud_sub_;
```
- Holds the subscription to the pointcloud topic and keeps the callback alive. Responsible for invoking `pointcloud_callback` when messages arrive.
```
rclcpp::Service<okmr_msgs::srv::GetObjectsByClass>::SharedPtr object_service_;
```
- Holds the service endpoint that clients call to get objects by class
```
rclcpp::node_interfaces::OnSetParametersCallbackHandle::SharedPtr
param_callback_handle_;
```
- Holds the registered parameter-change callback so the node can update parameters at runtime
##### State and Configuration
```
std::vector<DetectedObject> long_term_objects_;
```
- This vector is a persistent list of the tracked objects known by the node (each entry stores a position + class). Each item is one `DetectedObject`, which is updated via `update_position_estimate()` when a match occurs or appended when a new detection happens.
```
std::queue<pcl::PointCloud<pcl::PointXYZRGBL>::Ptr> pointcloud_queue_;
```
- Rolling buffer of the most recent downsampled point clouds, which are merged together before object detection runs.
---
### Main Functions
#### `find_objects_in_pointcloud`
##### Purpose
The purpose of `find_objects_in_pointcloud` is to take a single PCL point cloud (`PointXYZRGBL`: xyz + rgb + label), find object-shaped clusters within it, and determine each cluster's dominant class label. It also computes each cluster's centroid and returns the results as `DetectedObject` instances.
##### High-Level Breakdown
1. Point Cloud Setup and Initial Processing
```
pcl::PointCloud<pcl::PointXYZRGBL>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGBL>(pointcloud));
pcl::PointCloud<pcl::PointXYZRGBL>::Ptr cloud_filtered(new pcl::PointCloud<pcl::PointXYZRGBL>);
pcl::PointCloud<pcl::PointXYZRGBL>::Ptr cloud_f(new pcl::PointCloud<pcl::PointXYZRGBL>);
```
This code block creates three point cloud pointers:
- `cloud` responsible for holding input data
- `cloud_filtered` stores the downsampled cloud data
- `cloud_f` keeps the temporary storage of the filtered results
2. Downsampling
```
// Create the filtering object: downsample the dataset using a leaf size of 1cm
pcl::VoxelGrid<pcl::PointXYZRGBL> vg;
vg.setInputCloud(cloud);
vg.setLeafSize(0.01f, 0.01f, 0.01f);
vg.filter(*cloud_filtered);
```
The downsampling process uses the VoxelGrid filter to reduce point density. Each 1cm cube in space within the voxel grid becomes a single point. The downsampling also helps reduce computation time and memory usage.
3. Plane Segmentation Setup
```
// Create the segmentation object for the planar model and set all parameters
pcl::SACSegmentation<pcl::PointXYZRGBL> seg;
...
seg.setOptimizeCoefficients(true);
seg.setModelType(pcl::SACMODEL_PLANE);
seg.setMethodType(pcl::SAC_RANSAC);
seg.setMaxIterations(100);
seg.setDistanceThreshold(0.02); // 2cm threshold
```
The plane segmentation setup configures the RANSAC plane finder, which identifies flat surfaces such as floors and walls. Its parameters are:
- 100 maximum RANSAC iterations
- 2cm point-to-plane distance threshold
4. Main Processing Loop
```
while (cloud_filtered->size() > 0.3 * nr_points) {
// Segment the largest planar component from the remaining cloud
seg.setInputCloud(cloud_filtered);
seg.segment(*inliers, *coefficients);
if (inliers->indices.empty()) {
break;
}
```
This loop processes the downsampled data using the plane segmentation configured above. It is intended to continue while 30% or more of the points remain. As of writing this documentation, the loop only runs once due to an added `break;`, and the additional component that extracts planar inliers for the plane segmentation has been commented out.
4.1 Clustering Setup and Execution
```
// Creating the KdTree object for the search method of the extraction
pcl::search::KdTree<pcl::PointXYZRGBL>::Ptr tree(
new pcl::search::KdTree<pcl::PointXYZRGBL>);
tree->setInputCloud(cloud_filtered);
std::vector<pcl::PointIndices> cluster_indices;
pcl::EuclideanClusterExtraction<pcl::PointXYZRGBL> ec;
ec.setClusterTolerance(0.02); // 2cm
ec.setMinClusterSize(100);
ec.setMaxClusterSize(25000);
ec.setSearchMethod(tree);
ec.setInputCloud(cloud_filtered);
ec.extract(cluster_indices);
```
Our clustering method uses a KD-tree for efficient nearest-neighbor search. The KD-tree speeds up the Euclidean clustering, which is configured with:
- A cluster tolerance of 2cm
- Limits ensuring clusters have 100-25000 points
4.2 Cluster Processing
```
for (const auto &cluster : cluster_indices) {
pcl::PointCloud<pcl::PointXYZRGBL>::Ptr cloud_cluster(
new pcl::PointCloud<pcl::PointXYZRGBL>);
std::map<uint32_t, int> label_counts;
Eigen::Vector3f position_sum(0.0f, 0.0f, 0.0f);
for (const auto &idx : cluster.indices) {
const auto &point = (*cloud_filtered)[idx];
cloud_cluster->push_back(point);
// Count label occurrences and sum positions
label_counts[point.label]++;
position_sum += Eigen::Vector3f(point.x, point.y, point.z);
}
```
After the clusters have been extracted by the algorithm outlined above, the cluster processing segment ensures that for each cluster found:
- Its points are collected into a new cloud
- The occurrences of each label are counted
- The point positions are summed for centroid computation
4.3 Cluster Analysis and Object Creation
```
// Find the dominant label in the cluster
auto dominant_label =
std::max_element(label_counts.begin(), label_counts.end(),
[](const std::pair<uint32_t, int> &a,
const std::pair<uint32_t, int> &b) {
return a.second < b.second;
});
// Calculate purity and discard impure clusters
float purity = static_cast<float>(dominant_label->second) /
static_cast<float>(cloud_cluster->size());
if (purity < 0.7f) {
continue;
}
// Calculate average position and add to detected objects
Eigen::Vector3f avg_position = position_sum /
static_cast<float>(cloud_cluster->size());
detected_objects.emplace_back(avg_position, dominant_label->first);
```
Once all the information has been processed, the cluster analysis segment determines the most common label in a given cluster. It also calculates label purity (the percentage of points carrying the dominant label). If the purity is at least 70%, it computes the centroid, creates a `DetectedObject` with that position and class, and adds it to the result vector mentioned earlier in the documentation.
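The dominant-label and purity computation can be condensed into a small standalone helper; `dominantLabel` is illustrative, since the real code inlines this logic in the cluster loop:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>
#include <map>
#include <utility>

// Return the most frequent label and its purity (fraction of total points).
// counts must be non-empty and total_points > 0.
std::pair<uint32_t, float> dominantLabel(const std::map<uint32_t, int>& counts,
                                         int total_points) {
    auto it = std::max_element(
        counts.begin(), counts.end(),
        [](const auto& a, const auto& b) { return a.second < b.second; });
    return {it->first, static_cast<float>(it->second) / total_points};
}
```

A cluster with 8 of 10 points labeled `3` has purity 0.8 and would pass the 0.7 threshold.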
---
#### `pointcloud_callback`
##### Purpose
The `pointcloud_callback` method receives new point cloud data from the `/pointcloud` ROS topic. It maintains a buffer of multiple point clouds, performs voxel-grid downsampling, merges them into a combined cloud, and then executes object detection. Finally, it updates the long-term object tracking structure using proximity based matching and smoothing.
##### High-Level Breakdown
1. Convert ROS PointCloud2 to PCL Format
```
// Convert ROS message to PCL point cloud
pcl::PointCloud<pcl::PointXYZRGBL>::Ptr cloud(
new pcl::PointCloud<pcl::PointXYZRGBL>);
pcl::fromROSMsg(*msg, *cloud);
```
The conversion of the ROS message to PCL format is necessary for the further processing in `pointcloud_callback`, since all processing assumes the data is in a PCL point cloud.
2. Downsample the Incoming Point Cloud
```
// Downsample using VoxelGrid filter
pcl::PointCloud<pcl::PointXYZRGBL>::Ptr cloud_downsampled(
new pcl::PointCloud<pcl::PointXYZRGBL>);
pcl::VoxelGrid<pcl::PointXYZRGBL> vg;
vg.setInputCloud(cloud);
vg.setLeafSize(leaf_size_, leaf_size_, leaf_size_);
vg.filter(*cloud_downsampled);
```
As described above in `find_objects_in_pointcloud`, we reduce the cloud using a VoxelGrid filter, aggregating clustered or nearby points into a single voxel. This is mainly done to reduce computational cost while maintaining the spatial structure.
3. Maintain a Queue of Recent Point Clouds
```
// Append the downsampled pointcloud to the queue
pointcloud_queue_.push(cloud_downsampled);
// If the size of the queue exceeds the max pointcloud parameter, pop the queue
while (static_cast<int>(pointcloud_queue_.size()) > max_pointclouds_) {
pointcloud_queue_.pop();
}
```
The downsampled cloud is appended to the queue of recent clouds. If the size of the queue then exceeds the `max_pointclouds` parameter, the oldest clouds are popped from the front of the queue until it fits the limit.
4. Concatenate All Clouds in the Queue
```
// Concatenate all pointclouds into one combined pointcloud
pcl::PointCloud<pcl::PointXYZRGBL>::Ptr combined_cloud(
new pcl::PointCloud<pcl::PointXYZRGBL>);
combined_cloud->header = cloud_downsampled->header;
std::queue<pcl::PointCloud<pcl::PointXYZRGBL>::Ptr> temp_queue = pointcloud_queue_;
while (!temp_queue.empty()) {
*combined_cloud += *(temp_queue.front());
temp_queue.pop();
}
```
In this component all we're doing is concatenating all clouds in the current queue into one `combined_cloud`. It is important to note, however, that the use of the temporary queue ensures the original buffer remains intact.
5. Downsample the Combined Cloud
```
// Downsample the combined pointcloud
pcl::PointCloud<pcl::PointXYZRGBL>::Ptr final_cloud(
new pcl::PointCloud<pcl::PointXYZRGBL>);
pcl::VoxelGrid<pcl::PointXYZRGBL> vg_combined;
vg_combined.setInputCloud(combined_cloud);
vg_combined.setLeafSize(leaf_size_, leaf_size_, leaf_size_);
vg_combined.filter(*final_cloud);
```
Similar to what was done earlier on the initial cloud, we then downsample the combined point cloud to further control its density and produce the final point cloud.
6. Long-Term Object Tracking and Smoothing
```
// Update long-term object tracking using the class member for max_update_distance
// For each newly detected object
for (const auto& detected_obj : detected_objects) {
bool updated_existing = false;
// Try to update existing objects
for (auto& existing_obj : long_term_objects_) {
// If same class and close enough, update position
if (detected_obj.object_class == existing_obj.object_class) {
float distance = (detected_obj.position - existing_obj.position).norm();
if (distance < max_update_distance_) { // Using class member instead of hardcoded value
existing_obj.update_position_estimate(detected_obj.position);
updated_existing = true;
break;
}
}
```
In the code segment provided above, the tracking strategy is to match each newly detected object by class and spatial proximity. In addition, it smooths the position using an exponentially weighted average.
---
#### `get_objects_by_class`
##### Purpose
`get_objects_by_class` is a service callback that returns all currently tracked objects whose class matches the requested class ID
##### Quick Breakdown
1. Loop over the node's stored tracked objects (`long_term_objects_`).
2. For each tracked object, check if `obj.object_class == request->object_class`.
3. If it matches, convert the internal `DetectedObject` to the ROS message using `obj.to_ros_msg()` and push it into `response->objects`.
4. After the loop, log an informational message with how many objects were returned and the requested class ID.
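The filtering step above can be sketched with a plain struct standing in for the tracked objects; `TrackedObject` and `filterByClass` are illustrative, and the real handler additionally converts each match with `to_ros_msg()`:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Stand-in for the tracked-object entries; only the class matters here.
struct TrackedObject { int32_t object_class; };

// Collect every tracked object whose class matches the requested one.
std::vector<TrackedObject> filterByClass(
    const std::vector<TrackedObject>& objs, int32_t wanted) {
    std::vector<TrackedObject> out;
    for (const auto& o : objs)
        if (o.object_class == wanted) out.push_back(o);
    return out;
}
```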
---
#### `on_parameter_change`
##### Purpose
The purpose of the `on_parameter_change` method is to handle runtime parameter updates and apply them safely to the node's configuration without corrupting existing state.
##### Parameters
- `max_pointclouds (int)`: number of recent pointclouds to keep. If decreased, the code pops the oldest entries from `pointcloud_queue_` until the queue fits the new limit.
- `leaf_size (double)`: voxel grid leaf size used for downsampling. Applied immediately to `leaf_size_`.
- `max_update_distance (double)`: distance threshold used when matching detections to tracked objects. Applied immediately.