# Retrieve magazine from Grid Process Version 2 #
- Motivation:
- After the markers are detected, the camera calibration matrix lets us calculate the grid poses: the pose at each grid for grasping a magazine.
- However, the gripper is not exactly at the center of the end effector, so offset values must be set to obtain correct grasping poses.
- Those offset values also need to be transformed with a homogeneous matrix, covering both translation and rotation.
- Input: vec_pose (grid pose: x, y, z, rx, ry, rz). This is the pose of the end effector at the grid location.
- These poses cannot be used directly to pick magazines; offset values must be set and computed first.
## Procedure of the Retrieve Process ##
1. Generate Route
1.1 Find robot_T_grid
- Convert rTgrid into a homogeneous matrix
1.2 Re-compute rTgripper_into_frame & rTgripper_rot_cam
- Because the center of the gripper is not at the center of the end effector, it is necessary to find Tgripper_into_frame, which is used to transform robot poses to the other positions: shift, grasp, etc.
- rTgripper_rot_cam is needed because, when capturing Stoppers and Markers, the gripper must be rotated 180° so the camera faces up.
1.3 Capture Stopper (if it is set)
- Change the gripper pose to a position from which an image can be captured
- Move to the front of a grid
- Capture the Stopper
- Change the gripper pose back to the position from which a magazine can be grasped
1.4 Go in front of a grid
RETRIEVE START:
1.5 Open Gripper
1.6 Go into Grid
1.7 Up and Grasp
1.8 Up
1.9 Go out
## Euler Vector to Homogeneous Matrix ##
`CvtEulerVector2HomoginusMatrix`
- Converts a pose vector (6x1) into a 4x4 transformation matrix rTg (robot-to-grid transform)
- Goal: get the grid position of the target.
- input: a pose vector of the robot arm???
- output: a transformation matrix rTg from the robot to the grid position
- The operation flow:
- separate the 6 values into translation and rotation parts
- declare four 3x3 rotation matrices:
- r1_mat, r2_mat, r3_mat
- and r_mat
- Check which Euler order the robot uses: XYZ, ZYX, or ZYZ? (see the robot manual)
- Then assign the rotation values from the pose vector (input parameter) into the r matrices following the robot arm's Euler order.
- Ex: with ZYZ Euler:
Declare a 3x1 vector r (reset to zero before each step)
Rx -> r[2] (first angle, applied about the Z axis)
Rodrigues(r, r1_mat): convert to a 3x3 matrix
Ry -> r[1] (second angle, about the Y axis)
Rodrigues(r, r2_mat)
Rz -> r[2] (third angle, about the Z axis)
Rodrigues(r, r3_mat)
r_mat = r1_mat * r2_mat * r3_mat
r_mat is the 3x3 rotation part of the homogeneous matrix.
- Then copy the translation coordinates into the 3x1 translation part of the homogeneous matrix.
- This homogeneous matrix is multiplied with the offset poses before moving to grasp magazines.
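The conversion above can be sketched in a few lines. This is a minimal illustration, not the actual source: it assumes the ZYZ order from the example, and `rotvec_to_matrix` is a NumPy stand-in for OpenCV's `cv2.Rodrigues`.

```python
import numpy as np

def rotvec_to_matrix(rvec):
    """Rodrigues formula: rotation vector (3,) -> rotation matrix (3, 3).
    Stand-in for OpenCV's cv2.Rodrigues."""
    rvec = np.asarray(rvec, dtype=float)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    kx, ky, kz = rvec / theta
    K = np.array([[0.0, -kz,  ky],
                  [ kz, 0.0, -kx],
                  [-ky,  kx, 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def euler_vector_to_homogeneous(vec_pose, order="ZYZ"):
    """Convert a 6x1 pose (x, y, z, a1, a2, a3) into a 4x4 matrix rTg.
    `order` must match the robot manual (the example in these notes is ZYZ)."""
    axis_index = {"X": 0, "Y": 1, "Z": 2}
    r_mat = np.eye(3)
    for angle, axis in zip(vec_pose[3:6], order):
        r = np.zeros(3)                      # reset the 3x1 vector each step
        r[axis_index[axis]] = angle          # e.g. first ZYZ angle -> z component
        r_mat = r_mat @ rotvec_to_matrix(r)  # r1_mat * r2_mat * r3_mat
    T = np.eye(4)
    T[:3, :3] = r_mat                        # 3x3 rotation part
    T[:3, 3] = vec_pose[:3]                  # 3x1 translation part
    return T
```

With zero rotation the result is a pure translation, and a first ZYZ angle of 90° gives a rotation about the base z-axis.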
## Compute Gripper Camera Re-compute Mat ##
`ComputeGripperCameraRecomputeMat`
1. What is `base_T_gripper_into_grid_4x4_recomputed`?
- The transformation matrix with which the gripper can go inside the grid?
- Its rotation matrix is computed with the function `ComputeGripperRotationVector`
- input:
- save_position: 4 marker positions from the robot arm.
- frame_R_gripper: rotation matrix from frame to gripper, i.e. the angle between the gripper and the frame.
- output:
- base_R_gripper_recomputed: contains the 3x3 rotation matrix.
- robot_rotation_vector:
- About frame_R_gripper:
- bTg = bTf * fTg => fTg = bTf.inv() * bTg
- But here we only need to find the rotation matrix, `fRg`
- rTf:
- From the 4 poses used to capture the markers (save_position), calculate the angles in the 3 axis views (z: Yaw, y: Pitch, x: Roll). The marker rectangle looks skewed, and we find the skew angles.
- This is considered the skew between the robot base and the frame, so it is called rTf (or here bTf).
==>> So, using the 4 marker poses from the robot, find the angles between the markers TL, TR, BL, BR in the different axis views; this gives the rotation of the frame, which is also the transformation matrix from base to frame.
- Meanwhile, bTg is based on a reference gripper angle (==**this angle is identified by trial, taking a gripper angle that can go into the grid, e.g. 89.897, 89.671, -90.365**==). ~~Because the 4 marker-capture poses have the gripper rotated 180° so the camera faces up~~.
- The gripper pose with which it can go inside the grids:
- First of all, using the 4 marker positions, get x, y, z of top-left, top-right, bottom-left, bottom-right; then calculate the angles about x, y, z (see function `CvtRect3dPointToRotation`)
- Then convert into a homogeneous matrix (rotation only). This is the rotation matrix from robot to frame (rRf).
- Compute the robot_R_gripper matrix, the rotation matrix from robot to gripper:
rRg = rRf * fRg
fRg is re-computed as described above (about frame_R_gripper).
===>> This matrix is used to calculate the robot poses for moving (shift, grasp), after multiplying with the homogeneous matrices of those positions.
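The two rotation relations above (fRg = bRf.inv() * bRg, then rRg = rRf * fRg) can be checked numerically. The matrices below are hypothetical examples (a 5° base-to-frame skew, a 90° base-to-gripper angle), not values from the real setup.

```python
import numpy as np

def rot_z(a):
    """Rotation about the z-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical example rotations (both about z for readability).
b_R_f = rot_z(np.deg2rad(5.0))    # base-to-frame skew found from the markers
b_R_g = rot_z(np.deg2rad(90.0))   # base-to-gripper reference angle

# From bRg = bRf * fRg  =>  fRg = bRf.inv() * bRg.
# The inverse of a rotation matrix is its transpose.
f_R_g = b_R_f.T @ b_R_g

# Recompose rRg = rRf * fRg: it must reproduce the base-to-gripper rotation.
b_R_g_recomputed = b_R_f @ f_R_g
```

Here fRg comes out as the 85° difference between the two input angles, and recomposing reproduces the original base-to-gripper rotation exactly.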
2. What is `base_T_grip_rot_cam_4x4_recomputed`?
- Its rotation matrix is computed with the function `ComputeCameraRotationVector`
- input:
- base_R_gripper_recomputed (or robot_R_gripper, rRg): computed above,
- output:
- base_R_grip_rot_cam_recomputed:

- The function takes the reference angle from the file RobotPoses_grip_angle.txt, which saves the angle values of the Gripper tab: 0, 0, 180
robot_cam_rot_vec gets grip_cam_angle: 0, 0, 0
robot_to_gripper_vec gets grip_angle: 0, 0, 180 ??? Why 180? Because the camera is rotated 180° when capturing Markers.
- The Euler-to-homogeneous conversion function is called twice, to find robot_T_gripper and robot_T_cam:
- rTg based on Angle_Gripper_Ref (0, 0, 180) ?? Is the above correct???
- and robot_T_cam: rTc based on Angle_Cam_Ref in the Camera tab.
=> gripper_T_cam (gTc) = rTg.inv() * rTc
Is this matrix not obtained from the calibration process???
- Answer: calibration finds the transformation matrix between the camera and the end effector.
- Answer: gripper_T_cam is for the real gripper (not the end effector).
inv(): because rTc = rTg * gTc
- base_R_target: transformation matrix from robot to target (maybe cam, not target):
base_R_target = base_R_gripper * gripper_T_cam
base_R_cam
==> obtain: the rotation matrix from robot to target.
Next, we need to find the translation from robot to target.
- Summary: we need to find the 3x3 rotation matrix from the robot (base) to the target.
- This matrix is computed from rRg and gRc:
- rRg is based on the Angle_Gripper reference in the UI Gripper tab,
- gRc is based on the Angle_Cam reference in the UI Camera tab.
- Compute the rotation matrix from gripper to cam:
- We have: bTo = bTe * eTc * cTo
- Here we only want to find the rotation matrix
~~- bRo or bRt (in the source code: base_R_target)~~
- In the source code: bRt = input_base_R_gripper * gripper_T_cam
~~- gripper_T_cam: is it eTc???~~
~~- input_base_R_gripper: is it bTe * cTo???~~ (maybe not)
===>> The main purpose here is to find the transformation matrix between base and cam, so that the camera can move to a shifted, derived position to capture the stopper position.
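A small numeric sketch of gTc = rTg.inv() * rTc with 4x4 homogeneous matrices. The reference poses below are made-up examples (a 180° gripper rotation and an arbitrary camera offset), not the real Angle_Gripper_Ref / Angle_Cam_Ref values.

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(a):
    """Rotation about the z-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical reference poses: gripper angle (0, 0, 180 deg), and a camera
# mounted 0.03 m / 0.05 m away from the gripper origin.
r_T_g = homogeneous(rot_z(np.pi), [0.0, 0.0, 0.0])    # robot_T_gripper
r_T_c = homogeneous(rot_z(np.pi), [0.03, 0.0, 0.05])  # robot_T_cam

# From rTc = rTg * gTc  =>  gTc = rTg.inv() * rTc
g_T_c = np.linalg.inv(r_T_g) @ r_T_c
```

Multiplying rTg by the resulting gTc reproduces rTc, which is exactly why the inverse appears in the formula.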
************************************
- After 1. and 2.
- In the source code we have: base_T_gripper_into_grid_4x4 :: robot_T_target
- and:
- base_T_grip_rot_cam_4x4
- base === robot
- All the steps above are for getting the grid position of the target.
************************************
## Transform grid to gripper ##
- First of all, robot_T_gripper_into_frame is converted from a homogeneous matrix back to a 6x1 robot vector:
- The translation comes from the last column.
- The rotation Rx, Ry, Rz comes from the rotation matrix, converted into Euler angles.
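A sketch of this matrix-to-vector conversion. It assumes the rotation part is returned as a rotation vector (inverse Rodrigues); if the robot instead expects true Euler angles Rx, Ry, Rz, the last step must follow the axis order from the robot manual. The near-180° singularity of the inverse Rodrigues formula is deliberately not handled here.

```python
import numpy as np

def matrix_to_pose_vector(T):
    """Convert a 4x4 homogeneous transform back to a 6x1 robot vector.
    Translation is the last column; rotation is returned as a rotation
    vector via inverse Rodrigues (angles near 180 deg are not handled)."""
    t = T[:3, 3]
    R = T[:3, :3]
    # Rotation angle from the trace, clipped against rounding error.
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-12:
        rvec = np.zeros(3)
    else:
        # Rotation axis from the skew-symmetric part of R, scaled by theta.
        rvec = (theta / (2.0 * np.sin(theta))) * np.array(
            [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return np.concatenate([t, rvec])
```

For a 90° rotation about z with translation (1, 2, 3), this returns the translation unchanged and the rotation vector (0, 0, π/2).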
## Get Stopper Position ##
- Rotate the gripper to the image-capture position.
- The gripper has to be shifted.
- Find the gripper pose after shifting:
- The shift pose is changed into a 4x4 matrix whose rotation parameters are zeros, with the shift values as the translation parameters.
- gripper_move = base_T_grip_rot_cam_4x4 * gripper_shift_4x4
- The last column of gripper_move is the translation X, Y, Z of the robot move pose.
- Return the gripper to the grasping position (from the image-capture position).
- Move to the Gripper Offset (0, -400, 40).
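The shift computation above can be sketched as follows; the pose and shift values are hypothetical placeholders, not taken from the real configuration.

```python
import numpy as np

# Hypothetical base_T_grip_rot_cam pose: gripper rotated 180 deg about z
# (camera up), at a made-up position in front of the grid (mm).
c, s = np.cos(np.pi), np.sin(np.pi)
base_T_grip_rot_cam_4x4 = np.array([
    [c, -s, 0.0, 100.0],
    [s,  c, 0.0, 400.0],
    [0.0, 0.0, 1.0, 250.0],
    [0.0, 0.0, 0.0, 1.0]])

# Shift pose as a 4x4 matrix: identity rotation (rotation parameters are
# zeros), with the shift values as the translation column.
shift = np.array([10.0, 0.0, 20.0])   # hypothetical shift, in mm
gripper_shift_4x4 = np.eye(4)
gripper_shift_4x4[:3, 3] = shift

# Right-multiplying applies the shift in the gripper/camera frame and maps
# it into base coordinates.
gripper_move = base_T_grip_rot_cam_4x4 @ gripper_shift_4x4

# The last column of gripper_move is the X, Y, Z of the robot move pose.
move_xyz = gripper_move[:3, 3]
```

Because the gripper is rotated 180° about z, a +10 mm shift along the gripper x-axis moves the target 10 mm in the negative base-x direction.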
## RETRIEVE ##
- Move by the offset values in the Gripper tab.
- Move the robot with the offset poses.
- Here, when finding the shifted gripper pose from the shift poses, robot_T_gripper_into_frame is used.
===*** The matrix computation only helps to get the translation values (Tx, Ty, Tz). Currently, should Rx, Ry, Rz be set to fixed values that give a correct, collision-free motion????
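A sketch of the idea in the note above: only the translation of the matrix product is used for the move pose, while the rotation comes from fixed, collision-free values. The function name and every value here are hypothetical, not from the source.

```python
import numpy as np

def retrieve_move_pose(robot_T_gripper_into_frame, offset_xyz, fixed_rxyz):
    """Hypothetical sketch: Tx, Ty, Tz come from the matrix computation,
    while Rx, Ry, Rz stay at fixed values that are known to avoid collision."""
    offset_4x4 = np.eye(4)
    offset_4x4[:3, 3] = offset_xyz                # offset from the Gripper tab
    moved = robot_T_gripper_into_frame @ offset_4x4
    tx, ty, tz = moved[:3, 3]                     # translation from the matrix
    rx, ry, rz = fixed_rxyz                       # rotation stays fixed
    return np.array([tx, ty, tz, rx, ry, rz])
```

With an identity transform, the Gripper Offset (0, -400, 40), and the example fixed angles (89.897, 89.671, -90.365), the move pose is simply the offset plus those angles.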