# HCC Final Project :robot_face:
Finished on 2020/9/8 :fireworks:
:::info
**Video quality will be compressed on Google Drive**
:::
[TOC]
## Team Members
| Student ID | Name |
| ---------- | ---- |
| 105030020 | 劉紫晏 |
| 0511098 | 葉姵彣 |
| 0611068 | 李啟安 |
==Credit for this project is shared with Zlewe== :+1: :+1: :+1:
## Project Goal
- **Simulate the SubT competition**
  (find the positions of the robot and the objects)
- **Grab the objects we detect**
## Abstract
- Use AprilTags to localize the robot while it moves
- Use YOLO to recognize objects
- Use the robotic arm to grab the object
## ROS Structure
### Node Graph

The **"play"** on the left hand side of the picture is the rosbag we recorded preciously. When we played the rosbag, the **"play"** node would publish **/camera/color/image_raw**, and the nodes **"sync"**, **"apriltag_detector"**, "**darknet_ros"** would subscribe the topic.
- Subscriber **"sync"** would read the raw image and publish the information of camera.
- **"apriltag_detector"** would subscribe the raw image and camera information to identify the serial number of the apriltag showen in the raw image.
- **"apriltag_tf"** read the information of every apriltag and the message from **"apriltag_dector"** to get local transformation between Locobot and particular apriltag.
- **"darknet_ros"** get the raw image and run YOLO to determine the object and show the object position with bounding boxes.
- **"tf boardcastor"** in the middle of the graph is the definations written in the launch file.
### Topic & Node Graph

## Localization
To localize the robot, we utilize the transformation between the Locobot's camera and any apriltag the Locobot sees. Since the apriltags' information is precisely defined and launched, this gives us the absolute position of the Locobot.
To be more precise, the tools we applied in our project are listed below:
1. **14 different apriltags spread in the environment**
2. **Launch files that include the apriltags' distribution and information**
3. **Locobot with a D435 camera**
4. Odometry of the Locobot
5. An algorithm that can identify apriltags and return their serial numbers
6. An algorithm that can output the robot's position from the D435 scene
With the launch files, the D435 camera, and the algorithm mentioned in points **2**, **3**, and **5**, the Locobot can recognize which apriltag is in its view. As in the picture captured from rviz, the Locobot knows the apriltag it is viewing is tag_12.

Once the Locobot knows the serial number of the apriltag it is viewing, it can get the relative space vector with the help of the D435. We can define that particular apriltag as a local origin and the space vector as local coordinates based on the apriltag.
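A minimal sketch of querying this local transformation through tf is shown below; the camera frame name is an assumption, and `tag_12` is simply the tag from the rviz example above.

```python
#!/usr/bin/env python
# Sketch: look up the local transformation between the camera and a detected
# apriltag through tf. The camera frame name is an assumption.
import rospy
import tf

if __name__ == "__main__":
    rospy.init_node("tag_local_tf_sketch")
    listener = tf.TransformListener()
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        try:
            # translation (x, y, z) in meters and rotation quaternion (x, y, z, w)
            trans, rot = listener.lookupTransform(
                "camera_color_optical_frame", "tag_12", rospy.Time(0))
            rospy.loginfo("tag_12 seen at %s (camera frame)", str(trans))
        except (tf.LookupException, tf.ConnectivityException,
                tf.ExtrapolationException):
            pass  # the tag is not currently visible
        rate.sleep()
```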

**Trouble** may occur when the Locobot sees more than one apriltag at the same time.

**The space vectors the D435 measures may contain some deviation**, which would yield more than one global position for the Locobot. To avoid this, **we pick the closest apriltag as the base of the local frame**, because the accuracy of the space vector increases as the distance gets shorter (see the sketch below).
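The selection itself is simple; a sketch follows, assuming the detections have already been collected as `(tag_id, (x, y, z))` tuples in the camera frame, however the detector actually reports them.

```python
import math

def closest_tag(detections):
    """Return (tag_id, (x, y, z)) of the nearest visible apriltag, or None."""
    best, best_dist = None, float("inf")
    for tag_id, (x, y, z) in detections:
        # Distance from the camera to the tag; shorter means a more
        # accurate space vector from the D435.
        dist = math.sqrt(x * x + y * y + z * z)
        if dist < best_dist:
            best, best_dist = (tag_id, (x, y, z)), dist
    return best
```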
After we get the spatial relationship, that is, the **transformation between the Locobot and a specific apriltag**, the next step is to transform the local frame into the global world frame. What we need now is the information about the apriltags' distribution mentioned in **point 2**. Because we know all the apriltags' positions, we can get the transformation between each tag and the origin with some simple calculation. Then, by **multiplying the two transformations, we get the global transformation between the Locobot and the origin**. That is, we get the absolute position of the Locobot in the world frame.
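A sketch of that multiplication with homogeneous transforms is shown below; the numbers are placeholders, not our real tag layout.

```python
# Sketch: chain the two transformations to get the Locobot pose in the world
# frame. T_world_tag comes from the known tag layout in the launch files,
# T_tag_robot from the live detection. The numbers are placeholders.
import numpy as np
import tf.transformations as tft

def to_matrix(translation, quaternion):
    """Build a 4x4 homogeneous transform from (x, y, z) and (x, y, z, w)."""
    T = tft.quaternion_matrix(quaternion)
    T[:3, 3] = translation
    return T

T_world_tag = to_matrix((2.0, 1.0, 0.0), (0.0, 0.0, 0.0, 1.0))   # world -> tag
T_tag_robot = to_matrix((0.1, 0.0, 0.8), (0.0, 0.0, 0.0, 1.0))   # tag -> robot

# Multiplying the two gives world -> robot, i.e. the absolute Locobot pose.
T_world_robot = np.dot(T_world_tag, T_tag_robot)
print("Locobot position in world frame:", T_world_robot[:3, 3])
```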
One problem still remains: the Locobot cannot know where it is when no apriltag is caught by the D435, because it has no local transformation based on any apriltag. To solve this, we introduce another tool, **odometry**, into our project.
With the odometry recorded, the Locobot can localize itself with the transformation provided by the odometry frame (the "map" frame on the Locobot) and correct the accumulated deviation whenever it sees an apriltag.
(the complete code is mentioned in **point 6**)
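The fallback logic can be summarized by the sketch below, assuming both poses are available as 4x4 homogeneous transforms; the class and method names are made up for illustration.

```python
import numpy as np

class PoseEstimator(object):
    """Sketch of combining apriltag-based poses with odometry."""

    def __init__(self):
        # Correction from the odometry frame to the world frame,
        # refreshed every time an apriltag is seen.
        self.odom_correction = np.eye(4)

    def update(self, odom_pose, tag_pose=None):
        """Return the current world pose of the Locobot."""
        if tag_pose is not None:
            # A tag is visible: trust it, and record how far odometry drifted.
            self.odom_correction = np.dot(tag_pose, np.linalg.inv(odom_pose))
            return tag_pose
        # No tag in view: propagate the last correction through odometry.
        return np.dot(self.odom_correction, odom_pose)
```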
## Object Detection

- Due to the limited computing power of our laptops, we can only use **yolov2_tiny** for detection
- During the detection process, **the FPS is only about 0.6**, which is really unacceptable
- After launching the script, we found that the **bounding boxes** were not accurate
:::success
#### In this [Yolo_video](https://drive.google.com/file/d/1oF5xB5d3iCcUXsvYuXrCkTHp3k2H8gry/view?usp=sharing), you can see the low FPS and poor detection performance on our laptops
:::
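For reference, reading the detections that darknet_ros publishes looks roughly like the sketch below; the topic and message names follow the stock darknet_ros package, so treat them as assumptions if your install differs.

```python
#!/usr/bin/env python
# Sketch: print the bounding boxes published by darknet_ros.
import rospy
from darknet_ros_msgs.msg import BoundingBoxes

def boxes_cb(msg):
    for box in msg.bounding_boxes:
        rospy.loginfo("%s (%.2f): x=[%d,%d] y=[%d,%d]",
                      box.Class, box.probability,
                      box.xmin, box.xmax, box.ymin, box.ymax)

if __name__ == "__main__":
    rospy.init_node("yolo_listener_sketch")
    rospy.Subscriber("/darknet_ros/bounding_boxes", BoundingBoxes,
                     boxes_cb, queue_size=1)
    rospy.spin()
```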
## Robotic Arm Manipulation

- Our goal is to use the **Locobot arm** to grab the items we want
- After trying [ee_xyz_control.py](https://github.com/Interbotix/pyrobot/blob/master/examples/locobot/manipulation/ee_xyz_control.py) and [pushing.py](https://github.com/Interbotix/pyrobot/blob/master/examples/locobot/manipulation/pushing.py/), we found that [grasping](https://github.com/facebookresearch/pyrobot/tree/master/examples/grasping) provided accurate grasps.
- The grasping script uses OpenCV to detect the object, uses the camera to compute the object's pose, and then uses the gripper on the arm to grasp it.
:::success
#### This is our [demo video](https://drive.google.com/file/d/1c-LaQz_ZsgVYkNeAYVWfg8xS8AEdqPhn/view?usp=sharing) catching the duckietown duck :duck:
:::
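Driving the arm and gripper through the PyRobot API looks roughly like the sketch below; the target pose is a placeholder, since the grasping example linked above computes it from the camera with OpenCV instead.

```python
# Sketch: move the Locobot arm to a pose and close the gripper with PyRobot.
# The target position/orientation are placeholders for illustration.
import numpy as np
from pyrobot import Robot

bot = Robot("locobot")
bot.arm.go_home()
bot.gripper.open()

# Hypothetical end-effector target in the robot base frame.
target_position = np.array([0.28, 0.17, 0.22])
target_orientation = np.array([0.0, 0.0, 0.0, 1.0])  # quaternion (x, y, z, w)

bot.arm.set_ee_pose(position=target_position,
                    orientation=target_orientation,
                    plan=True)
bot.gripper.close()
```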
:::danger
:fire:
While launching our Locobot, some problems also happened occasionally. We list them below:
- The Locobot's motors sometimes broke; the error message reports the IDs of the affected motors. Restarting by unplugging and replugging the motor cable may help.
- SSH to the TX2 sometimes runs into trouble, probably because of the bad router. One way to fix it is to change the laptop's IP.
- Before testing pushing.py, we also tested [realtime_point_cloud.py](https://github.com/Interbotix/pyrobot/blob/master/examples/locobot/manipulation/realtime_point_cloud.py) to determine the best value of the "floor height". You can find the code on the Facebook GitHub. Note that the camera is connected to the Locobot, so to run this test you should SSH into the Locobot after starting roscore on the TX2.
:::
## Final Result :rocket:
1. We successfully completed the localization task on the rosbag file
2. Object detection is not good enough to be combined with the system
3. We are able to grasp anything in our sight as long as it is obvious enough for OpenCV to recognize
4. By changing the reference frame, we can see our status in different frames of reference

### Demo videos:
:::success
- [**Final_Demo with no odometry**](https://drive.google.com/file/d/1lktWOe1w11LWGAJb4x5pNw_PhhpXTlQk/view?usp=sharing)
- [**Final_demo with odometry**](https://drive.google.com/file/d/1s3tvH_0TX-V4aX_rkcAz9udQnrOBpOGV/view?usp=sharing)
- [**Final_demo in cam_mount frame**](https://drive.google.com/file/d/1xhBmo_t-FEaOB5rfNN2p9wZXqKy92wCV/view?usp=sharing)
- [**Odometry in baselink frame**](https://drive.google.com/file/d/1KZxI8Pfhgr0G-NCOtD0_MmPZjZQDs1JB/view?usp=sharing)
:::
---
:::danger
:fire:
**Trouble** occurred when we used the bag to test our files, because ROS was using the same time as Ubuntu, which we call the wall clock.
- But the bag carries the time at which we recorded it. We fixed this with
  `rosparam set use_sim_time true`
  or we can use `rosbag play --clock` to publish the recorded clock time.
- If you get the error "[rospack] Error: package '*package_name*' not found", check whether you ran catkin_make and sourced ~/.bashrc, or just replace *rosrun* with *python*.
:::