# Marker Computation & Marker Detection #
## Marker Computation ##
- There are 4 markers per frame.
- After the marker coordinates are set, the robot arm is moved to each coordinate to capture the markers.
~~- OpenCV is used to detect marker positions~~
~~- Based on the camera calibration matrix, we can calculate the 3D poses of markers.~~
~~- From that, we can calculate the pose for each grid ???~~
- Marker computation simply re-computes the marker poses according to the frame thickness and the skew of the frame (see the sketch below).
- Only the X and Z axes need to be considered, because the Y axis is not moved while collecting marker poses.
- The robot must also avoid touching the frame.
- Since there is currently only one frame, this function is not critical.
- When there is more than one frame, the markers of the additional frames are computed by this function.
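A minimal sketch of this re-computation, assuming a marker pose is stored as a 6-element vector (X, Y, Z, Rx, Ry, Rz) and that the frame thickness and skew translate into offsets along Z and X respectively; the parameter names and sign conventions here are assumptions, not taken from the project code:

```cpp
// Hypothetical sketch: adjust a marker pose for frame thickness and skew.
// Only X and Z are modified, matching the note that Y is never moved.
#include <array>

using Pose6 = std::array<double, 6>; // X, Y, Z, Rx, Ry, Rz

Pose6 recomputeMarkerPose(const Pose6& original,
                          double frameThickness, // assumed offset along Z
                          double frameSkewX)     // assumed offset along X
{
    Pose6 p = original;
    p[0] += frameSkewX;      // shift along X for the skew of the frame
    p[2] += frameThickness;  // shift along Z for the frame thickness
    // Y (p[1]) and the rotation components are left unchanged.
    return p;
}
```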
## Marker Detection ##
0. Move the robot arm to the center position and rotate the camera to capture the markers.
1. Move to each marker pose (which is the original pose when there is only one frame).
2. Capture a marker image.
3. Record this pose.
4. Detect the marker.
4.1 Compute the marker pose by ID
4.1.1 Get the 4x4 camera-to-hand calibration matrix
- Load `HandEyeData.txt` (a loading sketch follows below)
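A minimal sketch of loading the calibration matrix, assuming `HandEyeData.txt` stores the 16 entries of the 4x4 matrix as whitespace-separated numbers in row-major order (the actual file layout may differ):

```cpp
// Sketch: load a 4x4 hand-eye calibration matrix from HandEyeData.txt.
// Assumes 16 whitespace-separated values in row-major order (assumption).
#include <fstream>
#include <string>
#include <opencv2/core.hpp>

cv::Mat loadHandEyeMatrix(const std::string& path)
{
    cv::Mat cam_T_hand(4, 4, CV_64F);
    std::ifstream in(path);
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            in >> cam_T_hand.at<double>(r, c);
    return cam_T_hand;
}
```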
4.1.2 Get the ArUco marker dictionary using the OpenCV function `getPredefinedDictionary()`
4.1.3 Convert the robot-to-hand vector (the current robot pose at the marker-detection position) into a homogeneous matrix. This is `base_T_hand`.
4.1.4 Find the `base_T_cam` transformation matrix as follows (see the sketch below):
`base_T_cam = base_T_hand * cam_T_hand_4x4.inv()`
because: `base_T_hand = base_T_cam * cam_T_hand`.
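A sketch of steps 4.1.3-4.1.4, assuming the robot pose is given as (X, Y, Z, Rx, Ry, Rz) with the rotation part encoded as a Rodrigues vector (if the robot reports Euler angles instead, the conversion would differ):

```cpp
// Sketch: build base_T_hand from a 6-DoF robot pose, then derive base_T_cam.
// Assumes (Rx, Ry, Rz) is a Rodrigues rotation vector (assumption).
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

cv::Mat poseToHomogeneous(const cv::Vec6d& pose)
{
    cv::Mat R;
    cv::Rodrigues(cv::Vec3d(pose[3], pose[4], pose[5]), R); // 3x3 rotation
    cv::Mat T = cv::Mat::eye(4, 4, CV_64F);
    R.copyTo(T(cv::Rect(0, 0, 3, 3)));                      // top-left 3x3
    T.at<double>(0, 3) = pose[0];                           // translation X
    T.at<double>(1, 3) = pose[1];                           // translation Y
    T.at<double>(2, 3) = pose[2];                           // translation Z
    return T;
}

// base_T_hand = base_T_cam * cam_T_hand  =>  base_T_cam = base_T_hand * cam_T_hand.inv()
cv::Mat computeBaseTCam(const cv::Mat& base_T_hand, const cv::Mat& cam_T_hand)
{
    return base_T_hand * cam_T_hand.inv();
}
```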
4.1.5 Detect the marker by calling the OpenCV function `detectMarkers`, which returns corners and IDs.
4.1.6 From the corners, marker size, intrinsic matrix, and distortion matrix, find the rotation vector and translation vector using the OpenCV function `estimatePoseSingleMarkers`.
- This means the result is the marker coordinate expressed in the camera coordinate system, not in pixel (image) coordinates.
- So it is also the transformation matrix between camera and marker.
4.1.7 Convert the marker pose from its Rodrigues representation into a homogeneous matrix to obtain `cam_T_marker` (steps 4.1.2 and 4.1.5-4.1.7 are sketched below).
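A sketch of steps 4.1.2 and 4.1.5-4.1.7 using the classic `cv::aruco` API; the dictionary choice (`DICT_6X6_250`) and the marker length are placeholder assumptions, not confirmed by the project:

```cpp
// Sketch: detect a marker, estimate its pose, convert to cam_T_marker (4x4).
// DICT_6X6_250 and markerLength are placeholders; intrinsics/distCoeffs come
// from the camera calibration.
#include <vector>
#include <opencv2/aruco.hpp>
#include <opencv2/calib3d.hpp>

cv::Mat detectCamTMarker(const cv::Mat& image, const cv::Mat& intrinsics,
                         const cv::Mat& distCoeffs, float markerLength)
{
    cv::Ptr<cv::aruco::Dictionary> dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);

    std::vector<int> ids;
    std::vector<std::vector<cv::Point2f>> corners;
    cv::aruco::detectMarkers(image, dict, corners, ids);        // 4.1.5
    if (ids.empty())
        return cv::Mat();                                       // no marker found

    std::vector<cv::Vec3d> rvecs, tvecs;
    cv::aruco::estimatePoseSingleMarkers(corners, markerLength,
                                         intrinsics, distCoeffs,
                                         rvecs, tvecs);         // 4.1.6

    // 4.1.7: Rodrigues vector + translation -> homogeneous cam_T_marker
    cv::Mat R;
    cv::Rodrigues(rvecs[0], R);
    cv::Mat cam_T_marker = cv::Mat::eye(4, 4, CV_64F);
    R.copyTo(cam_T_marker(cv::Rect(0, 0, 3, 3)));
    for (int i = 0; i < 3; ++i)
        cam_T_marker.at<double>(i, 3) = tvecs[0][i];
    return cam_T_marker;
}
```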
4.1.8 Find the `cam_T_marker` offset:
- Get `mk1_T_mk2 = marker.offset(x, y, z)`
- offset values: 5, -5, ....
- `cam_T_mk_offset = cam_T_mk * mk1_T_mk2`
- because: `mk1_T_mk2 = mk1_T_c * c_T_mk2` => `c_T_mk2 = mk1_T_c.inv() * mk1_T_mk2`
- `cam_T_mk = cam_T_mk_offset`
4.1.9 Find `robot_T_marker`:
- `robot_T_marker = base_T_cam * cam_T_marker` (steps 4.1.8-4.1.9 are sketched below)
=> Return: `robot_T_marker`, the pose of the marker.
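A sketch of steps 4.1.8-4.1.9; the offset values are whatever `marker.offset(x, y, z)` provides in the project, passed in here as plain parameters:

```cpp
// Sketch: apply a translation-only marker offset, then chain to robot_T_marker.
// The offset values (e.g. 5, -5, ...) come from the project configuration.
#include <opencv2/core.hpp>

cv::Mat robotTMarker(const cv::Mat& base_T_cam, const cv::Mat& cam_T_mk,
                     double ox, double oy, double oz)
{
    // mk1_T_mk2: pure translation between the detected marker pose and the
    // offset point of interest.
    cv::Mat mk1_T_mk2 = cv::Mat::eye(4, 4, CV_64F);
    mk1_T_mk2.at<double>(0, 3) = ox;
    mk1_T_mk2.at<double>(1, 3) = oy;
    mk1_T_mk2.at<double>(2, 3) = oz;

    cv::Mat cam_T_mk_offset = cam_T_mk * mk1_T_mk2;   // 4.1.8
    return base_T_cam * cam_T_mk_offset;              // 4.1.9
}
```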
4.2 Detect and compute all 4 markers.
5. Marker computation
5.1 Convert the marker poses to a Mat (see the sketch below)
- The 4 marker poses are stored in a 4x6 matrix, in which each row contains the 6 pose values (X, Y, Z, Rx, Ry, Rz) of one marker.
- In addition, a 1x6 matrix contains one frame coordinate: X, Y, Z are taken from the top-left marker pose, while Rx, Ry, Rz are averaged over the 4 marker poses.
- They are saved into `save_frame_coord`.
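A sketch of step 5.1; the row order TL, TR, BR, BL is an assumption, and the component-wise rotation average mirrors the notes above:

```cpp
// Sketch: pack 4 marker poses into a 4x6 Mat and build the 1x6 frame
// coordinate (translation from TL, rotation averaged over all 4 markers).
// Row order TL, TR, BR, BL is an assumption.
#include <array>
#include <opencv2/core.hpp>

void posesToMat(const std::array<cv::Vec6d, 4>& poses,
                cv::Mat& markers_4x6, cv::Mat& frame_1x6)
{
    markers_4x6.create(4, 6, CV_64F);
    for (int m = 0; m < 4; ++m)
        for (int k = 0; k < 6; ++k)
            markers_4x6.at<double>(m, k) = poses[m][k];

    frame_1x6.create(1, 6, CV_64F);
    for (int k = 0; k < 3; ++k)                      // X, Y, Z from TL
        frame_1x6.at<double>(0, k) = poses[0][k];
    for (int k = 3; k < 6; ++k) {                    // Rx, Ry, Rz averaged
        double sum = 0.0;
        for (int m = 0; m < 4; ++m) sum += poses[m][k];
        frame_1x6.at<double>(0, k) = sum / 4.0;
    }
}
```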
5.2 Get the markers in the frame coordinate system (saved in the `Marker_4pos_Data` file)
- Goal: re-compute the transformation matrix from base to frame (bTf).
5.2.1 Re-compute the frame rotation matrix (function `RecomputeFrameRotation` in `computation.cpp`)
- based on the 4x6 marker position matrix.
- We need to find the rotation matrix (actually a transformation matrix, but only the rotation is needed) that describes the rotation of the frame:
- Ax = Z
- with:
- A: marker coordinates when the frame is un-rotated, i.e. the gripper is parallel to the frame
- x: the rotation matrix
- Z: marker coordinates when the frame is skewed or rotated

- From that, we can find the rotation of the frame.
- Procedure (see the sketch after this list):
- Find the un-skewed coordinates of the markers (matrix A), with the top-left (TL) marker as the origin.
- The coordinates of the other markers (TR, BR, BL) are calculated as the coordinate distance from each marker to the TL marker, with Z = 0.
- The coordinates here are the marker positions calculated above.
- Then find the 3D coordinates of the markers relative to the TL marker; i.e., this step finds the 3D locations of markers TR, BR, BL with TL as the origin (matrix Z).
- Matrix Z is vectorized to 9x1, while matrix A is expanded to 9x9.
- From that, we can calculate the rotation matrix x.
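A minimal sketch of this solve. The notes describe a 9x9 / 9x1 vectorized system; the equivalent 3x3 least-squares solve is shown here instead, and since a raw least-squares solution need not be a pure rotation, the result is optionally projected onto the nearest orthonormal matrix via SVD (that projection step is an addition, not confirmed by the project):

```cpp
// Sketch: solve A * x = Z for the frame rotation x. Each row of A holds the
// ideal un-rotated (x, y, z) offset of one marker (TR, BR, BL) relative to
// TL with Z = 0; each row of Z holds the measured offset.
#include <opencv2/core.hpp>

cv::Mat recomputeFrameRotation(const cv::Mat& idealOffsets_3x3,    // A
                               const cv::Mat& measuredOffsets_3x3) // Z
{
    cv::Mat x;
    // Least-squares solve; equivalent to the 9x9 / 9x1 vectorized system
    // described in the notes above.
    cv::solve(idealOffsets_3x3, measuredOffsets_3x3, x, cv::DECOMP_SVD);

    // Optional: project x onto the nearest orthonormal rotation via SVD.
    cv::Mat w, u, vt;
    cv::SVD::compute(x, w, u, vt);
    return u * vt;
}
```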
5.2.2 Re-compute the transformation matrix from frame to marker (sketched below).
- Get the 4 sets of base-to-marker coordinates, keeping only the translation part (t_4x1).
- Recomputed `frame_T_marker` = recomputed `base_T_frame` (calculated above) .inv() * t_4x1
- Ultimately, we want the coordinates of the markers relative to the frame:
- bTm = bTf * fTm => fTm = bTf.inv() * bTm
- bTf comes from `save_frame_coord`, whose translation is the X, Y, Z of TL and whose rotation is the Rx, Ry, Rz averaged over TR, BR, BL.
- It may be better to use the re-computed rotation values.
- These values are converted via Rodrigues into a 4x4 transformation matrix.
- bTm comes from the 4 sets of marker coordinates from the robot, which were derived from the marker coordinates (read with OpenCV) and the hand-to-cam calibration.
=> fTm: the translation is taken from the recomputed `frame_T_marker`, while the rotation matrix comes from the fTm calculated above.
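A sketch of the core relation in step 5.2.2, applied to all 4 markers:

```cpp
// Sketch: fTm = bTf.inv() * bTm for each of the 4 markers. Per the notes,
// only the translation of each result is kept; the rotation comes from the
// recomputed frame rotation.
#include <vector>
#include <opencv2/core.hpp>

std::vector<cv::Mat> frameTMarkers(const cv::Mat& b_T_f,
                                   const std::vector<cv::Mat>& b_T_m)
{
    std::vector<cv::Mat> f_T_m;
    cv::Mat f_T_b = b_T_f.inv();
    for (const cv::Mat& m : b_T_m)
        f_T_m.push_back(f_T_b * m); // bTm = bTf * fTm => fTm = bTf.inv() * bTm
    return f_T_m;
}
```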
5.3 Compute and save the grid positions (sketched below):
- Based on the 4 sets of marker coordinates from the robot arm and the size parameters of the frame and grids, calculate the 3D grid coordinates relative to the robot arm.
- ***Note: only X, Y, Z are taken from the computation; Rx, Ry, Rz are set to fixed values.***
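A sketch of step 5.3 under the assumption that the grid centers can be interpolated bilinearly between the 4 marker positions; the `rows`/`cols` parameters and the interpolation itself are assumptions standing in for the project's frame/grid size parameters, and Rx, Ry, Rz are set to fixed values as the note above states:

```cpp
// Sketch: compute 3D grid coordinates from the 4 marker positions by
// bilinear interpolation across the frame (assumed layout). Only X, Y, Z
// are computed; Rx, Ry, Rz are fixed, per the notes.
#include <vector>
#include <opencv2/core.hpp>

std::vector<cv::Vec6d> computeGridPositions(
    const cv::Vec3d& tl, const cv::Vec3d& tr,
    const cv::Vec3d& br, const cv::Vec3d& bl,
    int rows, int cols, const cv::Vec3d& fixedRot)
{
    std::vector<cv::Vec6d> grid;
    for (int r = 0; r < rows; ++r) {
        double v = (r + 0.5) / rows;              // vertical fraction
        for (int c = 0; c < cols; ++c) {
            double u = (c + 0.5) / cols;          // horizontal fraction
            cv::Vec3d top = tl * (1 - u) + tr * u;
            cv::Vec3d bot = bl * (1 - u) + br * u;
            cv::Vec3d p   = top * (1 - v) + bot * v;
            grid.emplace_back(p[0], p[1], p[2],
                              fixedRot[0], fixedRot[1], fixedRot[2]);
        }
    }
    return grid;
}
```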
5.4 Save marker positions.