# Script

<h3 style="color:blue">EKF</h3>

- For the first application in the localisation unit we used three packages:
  1. robot_localization
  2. nmea_navsat_driver
  3. lu

  The first two are publicly available ROS packages; the third one was developed by us.
- Our three most important nodes for this application are `ekf_localization_node`, `navsat_transform_node` and the `nmea_to_navsatfix` node. The EKF node is an implementation of the extended Kalman filter, which has two main steps: predict and correct. The navsat_transform node transforms geographic coordinates (latitude and longitude) into the robot's world frame (typically `map` or `odom`), so that the GPS data can be fused directly with the other sensor data. The nmea_to_navsatfix node converts NMEA sentences into GPS fix messages (`sensor_msgs/NavSatFix`).
- Now let us have a look at the flow chart:
  1. We start with the `/websocket_bridge` node, which is essentially the simulator node.
  2. In our project we use an IMU sensor and pass the IMU data to the `ekf_node`.
  3. The `nmea_to_navsatfix` node converts the NMEA sentences into GPS fix data, which is passed on to the `/navsat_transform` node.
  4. The `/navsat_transform` node transforms the GPS data into the world frame of the robot/vehicle and publishes the result on the topic `/odometry/gps`.
  5. The `ekf` node subscribes to this topic and receives the GPS data from it.
  6. Fusing the GPS data with the IMU data, the `ekf` node computes the filtered estimate and publishes the resulting odometry on the topic `/odometry/filtered_odom`, which is our final output (a sketch of this predict/correct loop follows below).
- The next slide shows our launch file:
  1. The first two nodes publish the static transforms from the sensor frames to the base frame.
  2. Then we have the node that converts NMEA sentences into GPS fix data.
  3. Then we load the parameters for the EKF node and the navsat_transform node.
  4. Finally we start the EKF node and the navsat_transform node.
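To make the predict/correct loop from step 6 concrete, here is a minimal rospy sketch of such a fusion node. It is only an illustration, not the actual `ekf_localization_node` from robot_localization: the planar state, the noise values and the IMU topic name `/imu/data` are all simplified assumptions.

```python
#!/usr/bin/env python
# Illustrative sketch only -- NOT the robot_localization implementation.
import numpy as np
import rospy
from sensor_msgs.msg import Imu
from nav_msgs.msg import Odometry


class EkfSketch(object):
    def __init__(self):
        self.x = np.zeros(3)                  # state: [x, y, yaw]
        self.P = np.eye(3)                    # state covariance
        self.Q = np.diag([0.1, 0.1, 0.01])    # process noise (placeholder values)
        self.R = np.diag([2.0, 2.0])          # GPS measurement noise (placeholder)
        self.last_t = None
        rospy.Subscriber('/imu/data', Imu, self.predict)          # assumed topic name
        rospy.Subscriber('/odometry/gps', Odometry, self.correct)
        self.pub = rospy.Publisher('/odometry/filtered_odom', Odometry, queue_size=10)

    def predict(self, imu):
        t = imu.header.stamp.to_sec()
        if self.last_t is not None:
            dt = t - self.last_t
            self.x[2] += imu.angular_velocity.z * dt   # integrate yaw rate
            self.P += self.Q * dt                      # grow uncertainty over time
        self.last_t = t

    def correct(self, odom):
        z = np.array([odom.pose.pose.position.x, odom.pose.pose.position.y])
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])                # GPS observes position only
        y = z - H.dot(self.x)                          # innovation
        S = H.dot(self.P).dot(H.T) + self.R
        K = self.P.dot(H.T).dot(np.linalg.inv(S))      # Kalman gain
        self.x += K.dot(y)
        self.P = (np.eye(3) - K.dot(H)).dot(self.P)
        out = Odometry()
        out.header.stamp = odom.header.stamp
        out.header.frame_id = 'odom'
        out.pose.pose.position.x, out.pose.pose.position.y = self.x[0], self.x[1]
        self.pub.publish(out)


if __name__ == '__main__':
    rospy.init_node('ekf_sketch')
    EkfSketch()
    rospy.spin()
```

The real node estimates a full state with configurable sensor inputs; the point here is only the shape of the predict and correct steps named above.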
<h3 style="color:blue">rf2o</h3>

> RF2O is a fast and precise method to estimate the planar motion of a lidar from consecutive range scans. For every scanned point we formulate the range flow constraint equation in terms of the sensor velocity, and minimize a robust function of the resulting geometric constraints to obtain the motion estimate. In contrast to traditional approaches, this method does not search for correspondences but performs dense scan alignment based on the scan gradients, in the fashion of dense 3D visual odometry. The minimization problem is solved in a coarse-to-fine scheme to cope with large displacements, and a smooth filter based on the covariance of the estimate is employed to handle uncertainty in unconstrained scenarios (e.g. corridors).

> The rf2o_laser_odometry node publishes planar odometry estimates for a mobile robot from the laser scans of an onboard 2D lidar. It first estimates the odometry of the lidar device itself, and then computes the robot base odometry using tf transforms.

<h3 style="color:blue">object classification</h3>

<h3 style="color:blue">object detection</h3>

- First, I will go through the flow chart quickly:
  1. We start with the input from the topic `point_raw`.
  2. The node `voxel_filter_z` subscribes to this topic; it downsamples the point cloud to a smaller size, removes the ground, and publishes the result on the topic `/voxel_filter_z/output`.
  3. This message is then received by the node `lidar_euclidean_cluster_detect`, which groups the points into clusters, one cluster per object, and publishes the detected objects on the topic `/detection/lidar_detector/objects`.
  4. This message is then received by the node `/lidar_shape_estimation`, which estimates the shape of each object.
  5. The output is finally published on the topic `/detection/shape_estimation/objects_markers`, to which our own node subscribes; it derives the velocity and the dimensions (size) of each object from this message (see the sketch at the end of this script).
- Next, I will go deeper into the individual nodes.

<h3 style="color:blue">resource planner</h3>
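This is the sketch referenced in step 5 of the object detection flow above: deriving an object's dimensions and velocity from the shape estimation output. Everything here is an assumption for illustration, not the project's actual implementation: the `visualization_msgs/MarkerArray` message type, the node name, and the use of marker ids for frame-to-frame association are all placeholders.

```python
#!/usr/bin/env python
# Illustrative sketch only -- assumed message type and association scheme.
import rospy
from visualization_msgs.msg import MarkerArray


class ObjectStateSketch(object):
    def __init__(self):
        self.prev = {}   # marker id -> (stamp, x, y) from the previous frame
        rospy.Subscriber('/detection/shape_estimation/objects_markers',
                         MarkerArray, self.on_markers)

    def on_markers(self, msg):
        for m in msg.markers:
            # Dimensions come straight from the estimated bounding shape.
            dims = (m.scale.x, m.scale.y, m.scale.z)
            t = m.header.stamp.to_sec()
            x, y = m.pose.position.x, m.pose.position.y
            if m.id in self.prev:
                t0, x0, y0 = self.prev[m.id]
                dt = t - t0
                if dt > 0.0:
                    # Velocity = displacement of the matched centroid over time.
                    vx, vy = (x - x0) / dt, (y - y0) / dt
                    rospy.loginfo('object %d: size=%s vel=(%.2f, %.2f) m/s',
                                  m.id, dims, vx, vy)
            self.prev[m.id] = (t, x, y)


if __name__ == '__main__':
    rospy.init_node('object_state_sketch')
    ObjectStateSketch()
    rospy.spin()
```

Differencing matched centroids is the simplest possible velocity estimate; a real implementation would smooth it, for example with a per-object Kalman filter.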