# 6-DoF Robot Control and Path Planning using Sensor Feedback

> The project's primary objective is to combine computer vision with a robotic arm to execute precise pick-and-place operations. This task is carried out using the [Universal Robots' UR5e Robot Arm Manipulator](https://www.universal-robots.com/products/ur5-robot/), which is equipped with a stationary USB webcam and controlled by a MATLAB program. Code is available [here](https://github.com/CleaLin/Path_planning_and_computer_vision).

###### tags: `robotics` `computer vision` `path planning`

## Introduction

The UR5e robotic arm in Figure 1 is used to perform the tasks.

![](https://hackmd.io/_uploads/S1oBGUfW6.jpg =256x171)
*Figure 1. the UR5e robotic arm*

The tasks integrate computer vision with the UR5e robotic arm to enhance the precision and complexity of pick-and-place operations. Using the top-down perspective from the webcam, this project entails several key objectives. Firstly, we aim to identify the game board's location relative to the robot's base joint. Secondly, we will identify and distinguish between the obstacle pieces and the player piece, using a color-coded system to facilitate the identification, all in reference to the robot's base joint. Subsequently, we will relocate the obstacle pieces from their current positions to their designated locations, guided by a provided occupancy grid. Finally, the Bug2 algorithm will be employed to navigate the player piece from its source location to the specified target location, completing the desired pick-and-place operations.

![](https://hackmd.io/_uploads/Bk2kc8fW6.jpg =320x240)
*Figure 2. the webcam view of the game board*

The game board in Figure 2 comprises an 8 x 5 grid, with each piece measuring 60 mm x 60 mm, resulting in an overall board size of 380 mm x 590 mm. The pink corners mark the corners of the game board, and the purple dots are the ground truth points.
The provided ground truth coordinates (the purple dots in the image) align with the global coordinates outlined in Table 1. The robot's base frame is defined as $(x_0, y_0) = (0, 0)$.

| Ground Truth Points <br />(the corresponding location in image) | Global Coordinates $(x, y)$ |
| ------------ | ------------------ |
| Top Left | -250, 75 |
| Top Right | -250, -525 |
| Bottom Left | -900, 75 |
| Bottom Right | -900, -525 |

*Table 1. the global coordinates for each ground truth point*

Three different colors serve specific purposes: green represents the player piece, while red and blue are used to identify obstacles. Table 2 shows the detailed restrictions.

| Color of Piece | Purpose |
| -------------- | ---------------------------------- |
| Red | Obstacles cannot be moved. |
| Blue | Obstacles can be lifted and moved. |
| Green | Player piece. |

*Table 2. classes of piece and their movement restrictions*

## Computer Vision

### Image Preprocessing

Following image capture from the webcam and subsequent normalisation, the RGB image is converted into an HSV image. Color thresholding is easier to accomplish in the HSV color space, particularly when dealing with varying lighting conditions.

### Find Reference Points

To locate the purple reference points within the HSV image, apply [morphological operations](https://www.mathworks.com/help/images/morphological-dilation-and-erosion.html) such as erosion and dilation to create a mask that isolates the purple areas. Figure 3 displays the resulting binary mask of the purple regions.

![](https://hackmd.io/_uploads/ByvjMAVM6.jpg =363x274)
*Figure 3. the binary mask*

Subsequently, use the `[centers,radii] = imfindcircles(A,radiusRange)` function to locate the circles with radii in the range specified by `radiusRange`. The function returns the $(x, y)$ coordinates of the circle centers in the binary mask.

### 2-D Perspective Transform

The circle centers are correlated with the ground truth points listed in Table 1.
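As a rough sketch of this fitting step (the project itself uses MATLAB's `fitgeotrans`; Python is used here only for illustration), a projective transform can be estimated from the four correspondences via the direct linear transform. The pixel coordinates below are hypothetical stand-ins for the detected circle centers, paired with the Table 1 world coordinates:

```python
import numpy as np

def fit_projective(moving, fixed):
    """Fit a 3x3 projective (homography) matrix mapping moving -> fixed
    points from four correspondences, via the direct linear transform."""
    A, b = [], []
    for (u, v), (x, y) in zip(moving, fixed):
        # Each correspondence gives two linear equations in the eight
        # unknowns h11..h32 (h33 is fixed to 1).
        A.append([u, v, 1, 0, 0, 0, -u * x, -v * x]); b.append(x)
        A.append([0, 0, 0, u, v, 1, -u * y, -v * y]); b.append(y)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def transform_points(H, pts):
    """Apply the homography to (u, v) points, dividing out the scale."""
    pts = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Hypothetical pixel locations of the four purple reference dots ...
image_pts = [(80, 60), (560, 70), (70, 420), (575, 430)]
# ... paired with the Table 1 world coordinates (mm, robot base frame).
world_pts = [(-250, 75), (-250, -525), (-900, 75), (-900, -525)]

img2world = fit_projective(image_pts, world_pts)
```

With the matrix fitted, any detected image point can be pushed into world coordinates, which is what `transformPointsForward` does with `img2world_tform` in the MATLAB code.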
By utilizing the circle-center coordinates obtained in the previous step, a 2-D perspective transformation is executed. Use the `fitgeotrans(movingPoints,fixedPoints,tformType)` function, where `movingPoints` are the circle-center coordinates and `fixedPoints` are the global coordinates in Table 1. The `tformType` is set to `'projective'` since the webcam view appears tilted. The function returns the transformation matrix `img2world_tform`, which converts image coordinates into world coordinates.

### Identify Gameboard Corners

Applying a methodology similar to the one used for locating the reference points, the centers of the corners are determined. These corner coordinates are then converted from the image frame to the world frame through the function `[x, y] = transformPointsForward(tform, u, v)`, using the transformation matrix `img2world_tform` obtained in the preceding step. `u, v` are the corner coordinates in the image frame and `[x, y]` are the resulting corner coordinates in the world frame. In addition, the image can be transformed into the 350 x 550 pixel image in Figure 4, using a transformation matrix and the `imwarp` function.

![](https://hackmd.io/_uploads/BJVLjopMa.jpg =550x350)
*Figure 4. the transformed gameboard*

### Identify Pieces on the Gameboard

To determine the color of each piece within the grid, it is essential to pinpoint the center coordinates of each grid cell. This involves two transformation matrices: `grid2world_tform`, which converts grid coordinates into world coordinates, and `grid2img_tform`, which transforms grid coordinates into image coordinates. The grid frame is defined in Figure 5.

![](https://hackmd.io/_uploads/B1X8xzLGp.png =446x607)
*Figure 5. the gameboard in the grid frame*

Based on the previously acquired gameboard corner coordinates in both the world and image frames, we can derive the transformation matrices `grid2world_tform` and `grid2img_tform` using the same 2-D perspective transform function `fitgeotrans(movingPoints,fixedPoints,tformType)`. The color of each piece within the grid can be obtained by identifying the color at $(x, y) = (1+2m, 1+2n)$ in the grid frame, where $m = 0, 1, ..., 4$ and $n = 0, 1, ..., 7$. To illustrate this process, consider obtaining the color of $(x, y) = (1, 1)$ within the grid frame. Begin by converting it into image-frame coordinates through the function `[x_img, y_img] = transformPointsForward(grid2img_tform, 1, 1)`. Subsequently, the average HSV color code within the image at coordinates `[x_img, y_img]` and its surrounding 6 x 6 pixel area is computed. This averaging step mitigates the impact of variable lighting conditions on the pieces. The corresponding color is identified by comparing the obtained color code with the pre-defined color parameters. To visualise the gameboard and the pieces, create an array where white grid areas are designated as 0, red pieces as 1, blue pieces as 2, and green pieces as 3. In addition, when identifying the green piece, which serves as the starting point, transform its coordinates from the grid frame to the world frame using `transformPointsForward` with `grid2world_tform`. This conversion is essential to inform the robot of the precise location from which to pick up the player piece. In this example, the starting coordinate is $(1, 15)$ and the destination coordinate is $(7, 7)$ in the grid frame.
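The sampling-and-classification step can be sketched as follows (again in Python for illustration; the hue references and the low-saturation test for an empty white cell are assumptions, not the project's tuned MATLAB thresholds). Each grid-frame center would be mapped to pixel coordinates with `grid2img_tform` before sampling:

```python
import numpy as np

# Hypothetical HSV reference hues for the three piece classes
# (OpenCV-style H in 0..179); the real thresholds are webcam-specific.
CLASS_HUES = {"red": 0, "blue": 120, "green": 60}
CLASS_CODES = {"empty": 0, "red": 1, "blue": 2, "green": 3}

def classify_cell(hsv_img, x_img, y_img, half=3):
    """Average HSV over the 6x6 window around a cell center and return
    the class code; low saturation is treated as an empty (white) cell."""
    r, c = int(round(y_img)), int(round(x_img))
    patch = hsv_img[r - half:r + half, c - half:c + half].reshape(-1, 3)
    h, s, _ = patch.mean(axis=0)
    if s < 60:  # unsaturated -> bare white board
        return CLASS_CODES["empty"]
    # Nearest reference hue on the circular hue axis (period 180).
    name = min(CLASS_HUES, key=lambda k: min(abs(h - CLASS_HUES[k]),
                                             180 - abs(h - CLASS_HUES[k])))
    return CLASS_CODES[name]

def grid_centres():
    """The 40 cell centers (x, y) = (1 + 2m, 1 + 2n) in the grid frame."""
    return [(1 + 2 * m, 1 + 2 * n) for n in range(8) for m in range(5)]
```

Running `classify_cell` over all 40 centers fills the 0/1/2/3 occupancy array described above.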
The array, padded with 1s, in this example is:

```
current_map = [
    1, 1, 1, 1, 1, 1, 1, 1, 1, 1;
    1, 0, 0, 0, 1, 0, 0, 0, 3, 1;
    1, 0, 0, 0, 0, 0, 2, 0, 0, 1;
    1, 0, 0, 1, 0, 1, 0, 2, 0, 1;
    1, 0, 2, 2, 0, 0, 0, 0, 0, 1;
    1, 0, 0, 0, 0, 2, 0, 0, 0, 1;
    1, 1, 1, 1, 1, 1, 1, 1, 1, 1;
];
```

Subsequently, the colors and center locations of the pieces can be visually represented within the image. In Figure 6, the symbol "R" corresponds to red pieces, "B" to blue pieces, and "G" to green pieces. Furthermore, the world coordinates of the four corners are displayed in red for reference. Ignore the green line in Figure 6 for now; the path will be calculated in the next step.

![](https://hackmd.io/_uploads/HJm0PiaMa.jpg)
*Figure 6. the identified pieces and gameboard corners*

## Path Planning

Path planning consists of two integral steps: moving the obstacles to their correct locations and applying the Bug2 algorithm.

### Moving Obstacles

The correct map in array format in this example is:

```
correct_map = [
    1, 1, 1, 1, 1, 1, 1, 1, 1, 1;
    1, 0, 0, 0, 1, 0, 0, 0, 3, 1;
    1, 0, 0, 0, 0, 0, 2, 2, 0, 1;
    1, 0, 0, 1, 0, 1, 2, 2, 0, 1;
    1, 0, 0, 2, 0, 0, 0, 0, 0, 1;
    1, 0, 0, 0, 0, 0, 0, 0, 0, 1;
    1, 1, 1, 1, 1, 1, 1, 1, 1, 1;
];
```

Compared with the current map, two blue pieces are incorrectly positioned. The algorithm identifies these blue pieces and places them in their correct locations.

### Bug2 Algorithm

The Bug2 path is generated based on the correct map. The Bug2 algorithm is a path-planning algorithm used to find a path from a starting point to a goal point in a 2-D space, typically in the context of robotic navigation. It is designed for robots to navigate around obstacles and reach their destination while avoiding collisions. The algorithm heads straight for the goal and, when blocked, follows the obstacle's boundary until it can resume its straight-line course.
It is particularly useful when a direct path to the goal is obstructed and the robot needs to find an alternative route. The Bug2 algorithm is valued for its simplicity and for needing only local obstacle information. The Bug2 function [here](https://github.com/petercorke/robotics-toolbox-matlab/blob/master/Bug2.m), developed by Peter Corke, allows the robot to move diagonally. In this example, however, diagonal movement is restricted, so a customized implementation of the Bug2 algorithm has been written in the code starting from line 496. The resulting path, corresponding to the `correct_map`, is shown in Figure 7. This path can also be transformed into image coordinates, as illustrated by the green line in Figure 6.

![](https://hackmd.io/_uploads/BkUw4HwQp.jpg)
*Figure 7. the resulting Bug2 path*
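The customized Bug2 planner itself lives in the linked MATLAB code. As a hedged stand-in that also forbids diagonal moves, the sketch below runs a plain breadth-first search (not Bug2) over the same padded `correct_map`, using the assumed grid-to-array convention $(x, y) \rightarrow$ row $(x+1)/2$, column $(y+1)/2$, so the start $(1, 15)$ becomes array cell $(1, 8)$ and the goal $(7, 7)$ becomes $(4, 4)$:

```python
from collections import deque

# The padded correct_map from above; 0 = free, 1 = red piece/border,
# 2 = blue piece, 3 = the green player piece (the start cell).
correct_map = [
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1, 0, 0, 0, 3, 1],
    [1, 0, 0, 0, 0, 0, 2, 2, 0, 1],
    [1, 0, 0, 1, 0, 1, 2, 2, 0, 1],
    [1, 0, 0, 2, 0, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
]

def bfs_path(grid, start, goal):
    """Shortest 4-connected path; red (1) and blue (2) cells block it."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []            # walk the parent links back to the start
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for step in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if grid[step[0]][step[1]] in (0, 3) and step not in prev:
                prev[step] = cell
                queue.append(step)
    return None  # goal unreachable

path = bfs_path(correct_map, (1, 8), (4, 4))
```

Unlike Bug2, BFS explores the whole map rather than reacting to obstacle boundaries, but on this small grid both yield a collision-free, axis-aligned route comparable to Figure 7.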