# Fantastic Sensors and How to Use Them Workshop
## Introduction
Let's start brainstorming with some mobile robot use cases:
- Ex: Roomba
- Ex: Boston Dynamics Spot
- Ex: Nurse Assistant Robot
## Meet the Sensors
For each sensor:
- How does it work?
- How is it used?
- Positives & Things to Watch out For
- How to process it
### LiDAR
#### How does it work?

Fires rapid laser pulses and measures each pulse's time of flight to compute the distance to surrounding surfaces.

Outputs a Point Cloud (a set of 3D points sampled from those surfaces)
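Each beam gives a range at a known bearing angle, and those polar measurements convert to Cartesian points. A minimal sketch of that conversion for a 2D scan, using hypothetical ranges and angles:

```python
import numpy as np

# Hypothetical 2D scan: ranges (meters) at evenly spaced bearing angles.
ranges = np.array([2.0, 2.0, 3.0, 4.0])
angles = np.linspace(0.0, np.pi / 2, len(ranges))  # 0 to 90 degrees

# Polar -> Cartesian: each (range, angle) pair becomes one (x, y) point.
points = np.column_stack((ranges * np.cos(angles), ranges * np.sin(angles)))
```

A real 3D LiDAR does the same thing with an extra elevation angle per beam, producing the full point cloud.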

#### How is it used?
- Can be used to create an [**occupancy grid**](https://en.wikipedia.org/wiki/Occupancy_grid_mapping) to map floors of a building for a robot to navigate through

- Can be used to determine the speed of an object
- Can be used for generating [ground truth data](https://datascience.stackexchange.com/questions/17839/what-is-ground-truth)
- [ScaleAI](https://scale.com/self-driving-cars)
- Can be used for SLAM (Simultaneous Localization and Mapping)
#### Positives & Limitations
:+1:
- Fast and accurate, typically within $\pm$ 2 cm
- Reliable
- Wide range compared to cameras or radar
:-1:
- Generates a large volume of data very quickly, which can add significant processing overhead
- Reflectivity limitations
- Does not work well under high sun angles or strong reflections from other light sources
- Doesn't work well on nonreflective surfaces like water
- Not great at providing context about a scene to a human
#### How to process it?
Methods
- Outlier removal
- Filtering ([Noise filter with ELM](https://pdal.io/stages/filters.elm.html#filters-elm), [Outlier Detection](https://pdal.io/stages/filters.outlier.html#filters-outlier))
- Reprojection
- Clustering
- Converting to an occupancy grid (Nearest Neighbor)
- Creating a mesh
Libraries
- [PDAL](https://pdal.io/stages/filters.elm.html#filters-elm): Common library to process point cloud data
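As a rough illustration of the occupancy-grid conversion listed above, here is a minimal pure-NumPy sketch (the function name, points, and cell size are hypothetical; a real pipeline would lean on PDAL or a ROS costmap):

```python
import numpy as np

def points_to_occupancy_grid(points, cell_size=0.5, size=10):
    """Bin (x, y) points into a square boolean occupancy grid.

    A cell is marked occupied if at least one point falls inside it.
    """
    grid = np.zeros((size, size), dtype=bool)
    # Map each point to the index of the cell it falls in.
    idx = np.floor(points / cell_size).astype(int)
    # Keep only points that land inside the grid bounds.
    in_bounds = np.all((idx >= 0) & (idx < size), axis=1)
    idx = idx[in_bounds]
    grid[idx[:, 1], idx[:, 0]] = True  # row = y, col = x
    return grid

# Hypothetical obstacle points (meters) in the robot's frame.
points = np.array([[1.2, 0.3], [1.3, 0.4], [4.9, 4.9]])
grid = points_to_occupancy_grid(points)
```

A planner can then treat occupied cells as obstacles when searching for a path.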
### Intel Realsense D435 Depth Camera
#### How does it work?
Stereo Vision: two cameras capture the scene from slightly offset viewpoints, and the disparity between matched pixels gives depth via triangulation.

Outputs an RGBD Image (a color image with per-pixel depth):
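The triangulation idea in miniature: a feature appears at slightly different horizontal pixel positions in the left and right images, and that disparity maps to distance via depth = f * B / d. A toy sketch (the focal length and baseline below are hypothetical, not the D435's actual calibration):

```python
# Stereo triangulation: a feature seen at horizontal pixel positions
# x_left and x_right has disparity d = x_left - x_right (pixels).
# With focal length f (pixels) and baseline B (meters): depth = f * B / d.
focal_px = 600.0   # hypothetical focal length in pixels
baseline_m = 0.05  # hypothetical distance between the two cameras

def depth_from_disparity(disparity_px):
    return focal_px * baseline_m / disparity_px

near = depth_from_disparity(30.0)  # large disparity -> close object
far = depth_from_disparity(3.0)    # small disparity -> far object
```

Note the inverse relationship: depth resolution degrades with distance, which is why stereo cameras are least reliable far away.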

#### How is it used?
- 3D image segmentation

#### Positives & Things to Watch out For
:+1:
- Has an image mapped to depth, making it easier to understand the context of its output
:-1:
- Doesn't work well with reflective objects
- Can get occluded (camera view is blocked)
- Dependent on lighting conditions
- Not as reliable as LiDAR
#### How to process it?
- Similar methods as LiDAR to process the point cloud
- Image segmentation or object detection to find the object of interest in the RGB image, then reading the aligned depth at those pixels to estimate its pose
Libraries:
- [YOLO](https://pjreddie.com/darknet/yolo/) (for object detection)
- [Mask-R-CNN](https://viso.ai/deep-learning/mask-r-cnn/) (for image segmentation)
- OpenCV (for general image processing)
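A minimal sketch of the detect-then-deproject idea above, using the standard pinhole camera model (the intrinsics and detection values are hypothetical; with a real D435 the RealSense SDK provides a deprojection helper):

```python
import numpy as np

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) at the given depth becomes a
    3D point (x, y, z) in the camera frame."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Hypothetical intrinsics, plus a detection whose bounding box is centered
# at pixel (400, 260) where the aligned depth image reads 2.0 m.
fx = fy = 600.0
cx, cy = 320.0, 240.0
point = deproject(400, 260, 2.0, fx, fy, cx, cy)
```

The resulting 3D point can then seed a grasp pose or a navigation goal.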
### Intel Realsense T265 Localization Camera
#### How does it work?

- Sensor fusion between stereo vision and IMU
- Outputs a 3D pose (x, y, z, roll, pitch, yaw), taking care of robot localization (i.e., where the robot is)
#### How is it used?
- Mainly for robust robot localization
- Enables following a path and navigation through known/unknown environments
#### Positives & Things to Watch out For
- Similar positives and negatives to the D435 stereo vision camera
:+1:
- Robust
- Gives you odometry out of the box
:-1:
- Drift can accumulate over longer periods of operation
- Rapid shaking can potentially throw off the IMU and stereo cameras
#### How to process it?
- Not much processing needed; it outputs what you need right out of the box
- Can use sensor fusion with other odometry sensors for even more robust localization
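As a toy illustration of that fusion idea, here is a one-dimensional inverse-variance weighted average, the core of Kalman-style fusion (all readings and variances below are made up):

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two position estimates.

    The fused estimate trusts the lower-variance sensor more, and its
    variance is smaller than either input's.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Hypothetical readings: T265 pose vs. wheel-encoder odometry (meters).
fused, fused_var = fuse(1.00, 0.01, 1.20, 0.04)
```

In practice a package like `robot_localization` runs a full Kalman filter over many sensors, but the principle is the same: weight each source by how much you trust it.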
## Collecting data for your robot
:::info
Within your team, choose a mobile robot you want to:
1. Determine what sensors to use
2. Collect data for
3. Brainstorm methods of processing the data
:::
1. Run sensors
Run T265
```
roslaunch realsense2_camera demo_t265.launch
```
Run D435
```
roslaunch realsense2_camera demo_pointcloud.launch
```
Run LiDAR
```
roslaunch sensors lidar.launch
```
2. In another terminal window, collect a ROS bag of your data
```
rosbag record -a   # -a records every published topic
```
## Visualizing your data in Webviz
1. Upload your rosbag data to [this OneDrive folder](https://purdue0-my.sharepoint.com/:f:/g/personal/ruppulur_purdue_edu/Ej1RRXFjHZpCo5IsHsAOOgcBkUO6QQtVveEq1Zy6Qoy1Yg?e=o8nXmH) into a **new** folder with your team name.
2. Download it on your own computer and drag and drop the .bag file into [Webviz](https://webviz.io/app/)
### Sources
- https://www.profolus.com/topics/advantages-and-disadvantages-of-lidar/
- https://link.springer.com/chapter/10.1007/978-3-319-41501-7_53
- https://towardsdatascience.com/point-clouds-in-the-cloud-b7266da4ff25
- https://www.intelrealsense.com/beginners-guide-to-depth/
- https://www.youtube.com/watch?v=xB9tfi_Bzs8
- https://viso.ai/deep-learning/mask-r-cnn/
- https://www.intelrealsense.com/tracking-camera-t265/