# Spot Delivery
| **Name** | **Student Number** |
|-----------------------|--------------------|
| Joris Weeda | 5641551 |
| Nicolas Wim Landuyt | 4863917 |
| Badr Essabri | 5099412 |
| Mohamed Zaaj | 4664000 |
| Alex Ledbetter | 5696860 |
## General project information
| **Group** | **Company** | **Course** |
|-----------|-------------|------------|
| 16 | TNO | MDP |
## Table of Contents
- [Problem Definition](#problem-definition)
- [Project Description](#project-description)
- [Project Structure](#project-structure)
- [Installation and Build for Spot Delivery, CHAMP, CHAMP_SPOT Simulation](#installation-and-build-for-spot-delivery-champ-champ_spot-simulation)
- [Launching the SPOT Simulation, SPOT Robot, and CHAMP Teleop](#launching-the-spot-simulation-spot-robot-and-champ-teleop)
- [Packages](#packages)
- [Spot delivery](#spot-delivery)
- [Spot Delivery Audio Interaction](#spot-delivery-audio-interaction)
- [Spot Delivery Behavior](#spot-delivery-behavior)
- [Spot Delivery Navigation](#spot-delivery-navigation)
- [Spot Delivery Perception Detection](#spot-delivery-perception-detection)
- [Spot Delivery Perception Slam](#spot-delivery-perception-slam)
- [Sources](#sources)
- [License](#license)
## Project overview
### Problem Definition
The objective of this project is to enhance the capabilities of SPOT (Smart Parcel Optimization Technology) in recognizing and retrieving specified objects, as well as localizing individuals within a predetermined indoor space. The project aims to employ computer vision techniques to achieve these objectives. Furthermore, SPOT must be able to navigate the designated area to locate the intended recipient and deliver the item while ensuring the safety of other individuals present in the room. The system should utilize human-robot interaction to interact with recipients, verify task completion, and assist in localizing specific persons.
### Project Description
The SPOT Delivery System is a robotic solution designed to automate the process of recognizing objects, locating individuals, and delivering items within an indoor environment. The system utilizes computer vision techniques to augment SPOT's capabilities in object recognition and localization. Additionally, it employs human-robot interaction to facilitate communication with recipients and ensure the successful completion of delivery tasks.
The project is structured into several packages, each serving a specific purpose within the system. These packages include:
- **spot_delivery**
Contains the main configuration files and launch files for the SPOT Delivery System.
- **spot_delivery_audio_interaction**
Implements the audio interaction functionality, allowing SPOT to communicate with recipients.
- **spot_delivery_behavior**
Defines the behavior state machines for executing delivery tasks, including movement, perception, and task execution.
- **spot_delivery_navigation**
Handles the navigation functionality of SPOT, enabling it to move within the designated area.
- **spot_delivery_perception_detection**
Implements object detection capabilities for SPOT, enabling it to recognize specified objects.
- **spot_delivery_perception_slam**
Deals with the Simultaneous Localization and Mapping (SLAM) functionality, allowing SPOT to create a map of the environment and navigate accordingly.
The project provides a comprehensive solution for automating the delivery process, integrating various modules to achieve efficient and safe item delivery within indoor spaces.
### Project Structure
The project follows a modular architecture with the following tree structure:
<details>
<summary>Click to show file structure of packages</summary>

```
├── spot_delivery
│   ├── CMakeLists.txt
│   ├── launch
│   │   └── spot_delivery_all_nodes.launch
│   └── package.xml
├── spot_delivery_audio_interaction
│   ├── action
│   │   └── hri.action
│   ├── bin
│   │   ├── spot_delivery_audio_interaction_client
│   │   └── spot_delivery_audio_interaction_server
│   ├── CMakeLists.txt
│   ├── package.xml
│   ├── setup.py
│   └── src
│       ├── audio_interaction_client
│       │   ├── __init__.py
│       │   ├── __pycache__
│       │   │   └── spot_delivery_audio_interaction_client.cpython-38.pyc
│       │   └── spot_delivery_audio_interaction_client.py
│       └── audio_interaction_server
│           ├── __init__.py
│           ├── __pycache__
│           │   └── spot_delivery_audio_interaction_server.cpython-38.pyc
│           └── spot_delivery_audio_interaction_server.py
├── spot_delivery_behavior
│   ├── action
│   │   ├── communication.action
│   │   ├── move.action
│   │   ├── navigation.action
│   │   └── perception.action
│   ├── bin
│   │   ├── data_entryA
│   │   ├── data_entryB
│   │   ├── main_state_machine
│   │   └── spot_srv_test
│   ├── CMakeLists.txt
│   ├── executive_smach_tutorials
│   │   ├── examples
│   │   │   ├── actionlib2_test.py
│   │   │   ├── actionlib_test.py
│   │   │   ├── concurrence2.py
│   │   │   ├── concurrence.py
│   │   │   ├── iterator_tutorial.py
│   │   │   ├── sequence.py
│   │   │   ├── state_machine_nesting2.py
│   │   │   ├── state_machine_nesting.py
│   │   │   ├── state_machine.py
│   │   │   ├── state_machine_simple_introspection.py
│   │   │   ├── state_machine_simple.py
│   │   │   ├── user_data2.py
│   │   │   └── user_data.py
│   │   └── scripts
│   │       └── usecase_01
│   │           ├── executive_step_01.py
│   │           ├── executive_step_02.py
│   │           ├── executive_step_03.py
│   │           ├── executive_step_04.py
│   │           ├── executive_step_05.py
│   │           ├── executive_step_06.py
│   │           ├── executive_step_07.py
│   │           └── turtle_nodes.launch
│   ├── help
│   ├── notes.txt
│   ├── package.xml
│   ├── scripts
│   │   ├── data_entryA.py
│   │   ├── data_entryB.py
│   │   ├── __init__.py
│   │   └── spot_srv_test.py
│   ├── setup.py
│   └── src
│       ├── behavior_execute_tasks
│       │   ├── execute_tasks_state_machine.py
│       │   └── __init__.py
│       ├── behavior_main
│       │   ├── __init__.py
│       │   ├── main_state_machine.py
│       │   └── utils.py
│       ├── behavior_moving
│       │   ├── __init__.py
│       │   └── moving_state_machine.py
│       ├── behavior_perception
│       │   ├── __init__.py
│       │   └── perception_state_machine.py
│       └── spot_delivery_behavior_node.cpp
├── spot_delivery_navigation
│   ├── action
│   │   ├── DoDishes.action
│   │   └── move.action
│   ├── bin
│   │   └── spot_delivery_navigation_server
│   ├── CMakeLists.txt
│   ├── package.xml
│   ├── setup.py
│   └── src
│       ├── navigation_server
│       │   ├── __init__.py
│       │   └── spot_delivery_navigation_server.py
│       └── spot_delivery_navigation_node.cpp
├── spot_delivery_perception_detection
│   ├── CMakeLists.txt
│   ├── package.xml
│   └── src
│       └── spot_delivery_perception_detection_node.cpp
└── spot_delivery_perception_slam
    ├── CMakeLists.txt
    ├── package.xml
    └── src
        └── spot_delivery_perception_slam_node.cpp

32 directories, 78 files
```
</details>
---
## Installation and Build for Spot Delivery, CHAMP, CHAMP_SPOT Simulation
Execute the following commands to install and build the repository in your home directory. The workspace name *spot_delivery_ws* and its parent directory can be changed as desired; just be sure to run the commands in the corresponding directories. A GitLab SSH key is required for the SSH clone command (an HTTPS alternative is shown as a comment).
```
mkdir -p ~/spot_delivery_ws/src
cd ~/spot_delivery_ws/src
git clone git@gitlab.tudelft.nl:cor/ro47007/2023/team-110/champ_spot.git
# or, without an SSH key: git clone https://gitlab.tudelft.nl/cor/ro47007/2023/team-110/champ_spot.git
git clone --recursive https://github.com/chvmp/champ
git clone https://github.com/chvmp/champ_teleop
cd ~/spot_delivery_ws/src/champ_spot
git submodule init
git submodule update
cd ~/spot_delivery_ws/
rosdep install --from-paths src --ignore-src -r -y
catkin build
source ~/spot_delivery_ws/devel/setup.bash
```
### Gazebo Worlds
Execute the following lines to add the repository `models` directory to your GAZEBO_MODEL_PATH environment variable (adds the quoted line to the end of your ~/.bashrc file):
```
echo 'export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:~/spot_delivery_ws/src/champ_spot/models' >> ~/.bashrc
source ~/.bashrc
```
### (Optional) Add ROS Source Commands to your ~/.bashrc
Execute the following lines to add ROS "source" commands to the end of your ~/.bashrc file:
```
echo 'source /opt/ros/noetic/setup.bash' >> ~/.bashrc
echo 'source ~/spot_delivery_ws/devel/setup.bash' >> ~/.bashrc
source ~/.bashrc
```
---
## Launching the SPOT Simulation, SPOT Robot, and CHAMP Teleop
Execute the following commands to launch the SPOT Simulation Nodes.
(You can skip the `source` commands below if you added the ROS source commands to your ~/.bashrc as described above.)
In TERMINAL #1 - Spawn World
```
source /opt/ros/noetic/setup.bash
source ~/spot_delivery_ws/devel/setup.bash
roslaunch spot_config spawn_world.launch
```
In TERMINAL #2 - Spawn SPOT Robot
```
source /opt/ros/noetic/setup.bash
source ~/spot_delivery_ws/devel/setup.bash
roslaunch spot_config spawn_robot.launch rviz:=true
```
In TERMINAL #3 - Launch Manual SPOT Controller "Teleop"
```
source /opt/ros/noetic/setup.bash
source ~/spot_delivery_ws/devel/setup.bash
roslaunch champ_teleop teleop.launch
```
---
## Packages
### Spot delivery
#### Overview
The Spot Delivery package serves as the main package for coordinating and launching all the nodes required for the Spot robot's delivery tasks. It provides the necessary configurations and launch files to ensure seamless operation of the system.
As of June 1, 2023, this highest-level functionality of the project is still in progress. The documentation for this package is provided for review only. Until the project is complete, please see the individual packages below to test isolated functionality.
#### Running the node(s)
*NOT CURRENTLY FUNCTIONAL*
Execute the `spot_delivery.launch` file provided in the Spot Delivery package. This launch file will start all the necessary nodes with the appropriate configurations:
`roslaunch spot_delivery spot_delivery.launch`
#### Expected results
Upon successful execution, the following expected results can be observed:
* Node Initialization
Each node in the system should initialize without any errors and provide the necessary feedback upon startup.
* Node Availability
Verify that all the required nodes are running by checking their corresponding ROS topics and services.
* Communication and Coordination
The nodes should be able to communicate and coordinate with each other effectively to carry out the required delivery tasks.
* Task Execution
The Spot robot should be able to perform the specified delivery tasks, such as navigating to waypoints, avoiding obstacles, picking up or dropping off items, etc., based on the provided instructions and commands.
By launching the `spot_delivery.launch` file and observing the expected results, you can ensure that all the necessary nodes are initialized and running correctly, enabling the Spot Delivery package to perform its intended functionality.
### Spot Delivery Audio Interaction
#### Overview
The `spot_delivery_audio_interaction` package provides a ROS action server and client for audio interaction in the context of spot delivery. The server, `spot_delivery_audio_interaction_server.py`, handles the audio interaction functionality, while the client, `spot_delivery_audio_interaction_client.py`, allows users to engage with the server and initiate the audio interaction.
#### Running the Package
To run the nodes in the `spot_delivery_audio_interaction` package, first start the ROS action server with the following command:
`rosrun spot_delivery_audio_interaction spot_delivery_audio_interaction_server.py`
To engage with the action server and initiate the audio interaction, run the ROS action client with the following command:
`rosrun spot_delivery_audio_interaction spot_delivery_audio_interaction_client.py`
Make sure to have all the dependencies installed before running the package.
#### Expected Results
The `spot_delivery_audio_interaction` package enables audio interaction for spot delivery. It prompts the user with questions, processes their responses, and gathers information about the desired object and the person's name. The server provides functionality such as asking the user to confirm their name, asking about the desired object (apple, ball or screwdriver), and asking if the user wants to retry the sequence to make sure the user has been surveyed correctly. The client allows users to engage with the server and participate in the audio interaction.
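As a reference, the snippet below sketches how another node could trigger the audio interaction programmatically through `actionlib`. The action name `spot_delivery_audio_interaction` and the empty goal are assumptions for illustration only; check `hri.action` and the server script for the actual interface.
```
#!/usr/bin/env python3
# Minimal sketch of an actionlib client for the audio interaction server.
# The action name and the (empty) goal fields are assumptions -- consult
# hri.action for the real goal/result definition.
import rospy
import actionlib
from spot_delivery_audio_interaction.msg import hriAction, hriGoal

def request_audio_interaction():
    client = actionlib.SimpleActionClient('spot_delivery_audio_interaction', hriAction)
    client.wait_for_server()
    client.send_goal(hriGoal())      # goal fields depend on hri.action
    client.wait_for_result()
    return client.get_result()       # e.g. the confirmed name and requested object

if __name__ == '__main__':
    rospy.init_node('audio_interaction_demo_client')
    rospy.loginfo('Audio interaction result: %s', request_audio_interaction())
```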
### Spot Delivery Behavior
#### Overview
The intent of the `spot_delivery_behavior` package is to provide a high-level state machine that coordinates server actions, transitions between different modes of operation, and maintains a simple knowledge base including, but not limited to, a task list for deliveries and a list of known people and objects.
The Behavior-Node State Machine consists of states corresponding to the activity diagrams in our report. These include a perception state for object/person detection, a planning/navigation state to determine task order and execute movement to task waypoints, and a communication state to perform audio interaction with the persons in the environment. Each state in the state machine issues action server requests to direct activity and receive knowledge, and state transitions depend on this knowledge. The figure below shows the complete state machine configuration; a minimal SMACH sketch follows the figure.

#### Launching Behavior-Node State Machine
Execute the following commands to launch the Behavior Node State Machine
In a new terminal:
```
source /opt/ros/noetic/setup.bash
source ~/spot_delivery_ws/devel/setup.bash
roslaunch spot_delivery_behavior spot_delivery_behavior.launch
```
Additionally, a visualization of the complete state machine can be launched by executing the following in a new terminal after the state machine has been launched:
```
source /opt/ros/noetic/setup.bash
source ~/spot_delivery_ws/devel/setup.bash
rosrun smach_viewer smach_viewer.py
```
#### Expected results
In the package's current state, the child states issue action server requests intended to control SPOT in performing its tasks, but the corresponding servers are not yet complete. As a result, the state machine gets "stuck" in the perception state until these servers are built. Nonetheless, upon launching the above launch file, the output below is the expected result.
```
process[rosout-1]: started with pid [25267]
started core service [/rosout]
process[spot_delivery_behavior_main-2]: started with pid [25270]
[INFO] [1685650519.522214]: State machine starting in initial state 'STARTUP' with userdata:
['KB']
[INFO] [1685650519.522347]: Thread for main state machine started
[INFO] [1685650519.522945]: Executing STARTUP_STATE
[INFO] [1685650521.528433]: State machine transitioning 'STARTUP':'startup_complete'-->'NESTED_PERCEPTION_STATE_MACHINE'
[INFO] [1685650521.531380]: State machine starting in initial state 'SCAN_STATE' with userdata:
['KB']
[INFO] [1685650521.533445]: Executing SCAN_STATE
[INFO] [1685650522.036135]: Sleeping in PENDING SCAN_STATE
[INFO] [1685650522.539617]: Sleeping in PENDING SCAN_STATE
```
### Spot Delivery Navigation
#### Overview
The `spot_delivery_navigation` package is responsible for path planning towards a single waypoint in an occupancy grid. It provides functionality to compute and execute a path from the current robot pose to the desired waypoint.
#### Running the node(s)
Launch the necessary nodes and configurations using the designated launch file:
`roslaunch spot_delivery_navigation spot_delivery_navigation.launch`
Make sure to provide the required inputs, such as the map data, robot pose, and waypoint information, through appropriate ROS topics or services.
The navigation node will receive the waypoint information and perform path planning to generate a trajectory. The robot will execute the planned trajectory, moving towards the desired waypoint.
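To illustrate the kind of planning involved, the sketch below implements a small 4-connected A* search on a toy occupancy grid (0 = free, 1 = occupied). This is not the package's actual planner, just a self-contained example of computing a collision-free path to a waypoint.
```
# Sketch only: grid-based A* path planning on a tiny occupancy grid.
import heapq

def plan_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]
    came_from, cost = {}, {start: 0}
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost[current] + 1
                if new_cost < cost.get((nr, nc), float('inf')):
                    cost[(nr, nc)] = new_cost
                    came_from[(nr, nc)] = current
                    heuristic = abs(nr - goal[0]) + abs(nc - goal[1])
                    heapq.heappush(open_set, (new_cost + heuristic, (nr, nc)))
    return None

# Example: plan around a single occupied cell.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (2, 2)))
```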
#### Expected results
The Spot Delivery Navigation package is designed to generate a path towards a goal waypoint in the occupancy grid while avoiding obstacles. The expected results of running the package are as follows:
1. Path Planning
The package should successfully generate a collision-free path from the current robot position to the goal waypoint.
2. Obstacle Avoidance
During navigation, if the package encounters obstacles along the planned path, it should dynamically adjust the path to avoid collision with the obstacles.
3. Collision Detection
In cases where the package still discovers an obstacle despite path adjustments, it should perform collision avoidance maneuvers and come to a stop to prevent any collision.
4. Goal Achievement
Once the robot reaches the goal waypoint, it should stop and indicate successful completion of the navigation task.
### Spot Delivery Perception Detection
The `spot_delivery_perception_detect` package is designed to detect persons and objects using the state-of-the-art [GroundingDINO model](https://github.com/IDEA-Research/GroundingDINO).
#### Overview
This package takes grayscale and depth images as input, which are received through topics from the robot. The grayscale images are sent to the object detection model, which utilizes a powerful text encoder enabling it to detect a wide range of objects. The model produces bounding box predictions, which are then combined with the depth images to obtain accurate object coordinates.
##### Functionality
1. Input: Grayscale and depth images received via topics from the robot.
2. Object Detection: The grayscale images are processed using the groundingDINO model to detect various objects.
3. Bounding Box Prediction: The model outputs bounding box coordinates for detected objects.
4. Coordinate Extraction: The bounding box coordinates are processed alongside the depth images to obtain precise object coordinates (see the back-projection sketch below).
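The sketch below shows how step 4 could work in principle: the centre of a bounding box is back-projected with the pinhole camera model using the depth at that pixel. The intrinsics used here are placeholder values; in practice they come from the camera's `camera_info` topic, and this is not the package's exact implementation.
```
# Sketch only: back-project a bounding-box centre into the camera frame.
import numpy as np

def bbox_to_camera_point(bbox, depth_image, fx, fy, cx, cy):
    """bbox = (x_min, y_min, x_max, y_max) in pixels; returns (X, Y, Z) in metres
    in the camera frame, using the depth at the bounding-box centre."""
    u = int((bbox[0] + bbox[2]) / 2)
    v = int((bbox[1] + bbox[3]) / 2)
    z = float(depth_image[v, u])          # depth at the centre pixel
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example with a dummy 480x640 depth image (everything 2 m away) and
# placeholder intrinsics.
depth = np.full((480, 640), 2.0)
print(bbox_to_camera_point((300, 200, 340, 260), depth,
                           fx=525.0, fy=525.0, cx=319.5, cy=239.5))
```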
#### Launching Perception Detection
For this package, the GroundingDINO model needs to be installed with pip. This can be done by executing the following commands in a terminal:
```
cd ~/spot_delivery_ws/src/spot_delivery_perception_detect
pip install GroundingDINO-0.1.0-alpha2.tar.gz
cd ./src/spot_delivery_perception_detect_server/weights
```
Afterwards, download the pretrained weights from:
https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth
and place the file in the following folder:
```
cd ~/spot_delivery_ws/src/spot_delivery_perception_detect/src/spot_delivery_perception_detect_server/weights
```
Execute the following commands to launch the object detector package
In a new terminal:
```
source /opt/ros/noetic/setup.bash
source ~/spot_delivery_ws/devel/setup.bash
rosrun spot_delivery_perception_detect myscript
```
#### Expected results
In the current state of the package, the ROS node successfully detects objects and persons. However, there is an issue with the conversion to the global frame that needs to be addressed. When you launch the node, you will observe a dictionary of detected objects containing their names and coordinates in an intermediate frame, rather than the global frame.
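For reference, a conversion like the one that is still missing could be done with `tf2` along the lines of the sketch below. The frame names `camera_frame` and `map` are assumptions and must match the robot's actual TF tree.
```
# Sketch only: transform a detected point into the global (map) frame with tf2.
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PointStamped support for tf_buffer.transform()
from geometry_msgs.msg import PointStamped

def to_global_frame(tf_buffer, x, y, z, source_frame='camera_frame', target_frame='map'):
    """Transform a detected point from the camera frame into the map frame."""
    point = PointStamped()
    point.header.frame_id = source_frame
    point.header.stamp = rospy.Time(0)        # use the latest available transform
    point.point.x, point.point.y, point.point.z = x, y, z
    return tf_buffer.transform(point, target_frame, rospy.Duration(1.0))

if __name__ == '__main__':
    rospy.init_node('detection_frame_converter')
    tf_buffer = tf2_ros.Buffer()
    tf2_ros.TransformListener(tf_buffer)      # fills the buffer in the background
    rospy.sleep(1.0)                          # give the listener time to receive transforms
    rospy.loginfo('In map frame: %s', to_global_frame(tf_buffer, 1.0, 0.2, 0.5))
```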
### Spot Delivery Perception Slam
#### Overview
The `spot_delivery_perception_slam` package is responsible for creating an occupancy grid map that gets updated as the SPOT robot moves through the environment. To help the robot comprehend its surroundings and navigate successfully, this package integrates perception and SLAM techniques. It perceives the surroundings and collects data using depth cameras, resulting in a 2D representation of the environment.
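As a rough illustration of the idea, the sketch below marks occupancy-grid cells as occupied from a set of 3D points that are assumed to be already expressed in the map frame. The resolution, grid size, and height filter are placeholder values, not the package's actual configuration.
```
# Sketch only: fold 3D points into a 2D occupancy grid.
import numpy as np

RESOLUTION = 0.05          # metres per cell (placeholder)
GRID_SIZE = 200            # 10 m x 10 m grid (placeholder)
ORIGIN = GRID_SIZE // 2    # map origin at the grid centre

def update_occupancy(grid, points_xyz, max_height=1.5):
    """Mark cells occupied for every point below max_height (metres, map frame)."""
    for x, y, z in points_xyz:
        if z > max_height:
            continue                       # ignore e.g. the ceiling
        col = int(x / RESOLUTION) + ORIGIN
        row = int(y / RESOLUTION) + ORIGIN
        if 0 <= row < GRID_SIZE and 0 <= col < GRID_SIZE:
            grid[row, col] = 100           # nav_msgs/OccupancyGrid convention: 100 = occupied
    return grid

grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.int8)
grid = update_occupancy(grid, [(1.0, 0.5, 0.3), (2.0, -1.0, 0.2)])
```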
#### Running the package
The package can be launched by running the following lines in a new terminal:
```
source /opt/ros/noetic/setup.bash
source ~/spot_delivery_ws/devel/setup.bash
roslaunch spot_delivery_perception_slam launch_package.launch
```
#### Expected results
To verify that it runs successfully, go to RViz where the SPOT robot is launched.
Open the Displays panel, then click on *Add*.
Under the tab called *By display type*, select *Map*, then click *OK*.
In the Displays panel, find the *Map* entry you just added and expand it by clicking the arrow to the left of the name *Map*. For the topic, select `map`. If all the dependencies are installed correctly, you should now see an occupancy grid map. (Side note: this currently works for only one camera and does not yet produce the correct result, but you can still move around using the teleop keys and expand the occupancy grid.)
## Sources
* Website | Boston Dynamics SDK: https://dev.bostondynamics.com/
* Website | Boston Dynamics ROS: https://github.com/heuristicus/spot_ros
* Website | Boston Dynamics ROS docs: https://heuristicus.github.io/spot_ros/html/index.html
* Website | GroundingDINO GitHub repository: https://github.com/IDEA-Research/GroundingDINO
---
## License
These ROS packages are licensed under the [MIT License](LICENSE).
---