# 24/5/2022 Notes and Progress
1. Studies of the camera and OpenCV
MJPG is 24-bit.
M-JPEG files are compressed with an intraframe compression technique, which is less efficient than interframe compression. As a result, .mjpg files require more bandwidth and storage space than their MPEG counterparts.
On the other hand, M-JPEG decompression takes less time and every frame can be decoded independently, so if one frame gets corrupted the rest of the video can still be played back.
YUV format output delivers uncompressed video frames (the raw data converted to YUV). It uses fewer system resources because no decoding is needed and no decoder is required. The disadvantage is a somewhat lower frame rate, limited by USB bandwidth (USB 2.0 is 480 Mbps, USB 3.0 is 5 Gbps). YUV images usually require a dedicated viewer such as PYUV to open.
MJPG format output compresses each video frame as a JPEG before transmission. The advantage is a higher frame rate (faster video, faster exposure); the disadvantages are JPEG compression artifacts (mosaic/blockiness) and the need for a decoder, which consumes PC system resources. MJPG video frames can be saved directly as .jpg files and opened with ordinary picture viewers.
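As a sketch (not verified against our camera), this is how OpenCV can be asked to deliver MJPG instead of raw YUV; the device index, resolution and FPS below are placeholders, and whether the request is honoured depends on the camera and driver:

```python
import cv2

# Hypothetical device index; adjust for the actual camera.
cap = cv2.VideoCapture(0)

# Request MJPG so the camera compresses frames before sending them over USB,
# which usually allows a higher resolution/frame rate than raw YUV.
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 30)

ok, frame = cap.read()  # frame arrives already decoded as a BGR image
if ok:
    print(frame.shape, cap.get(cv2.CAP_PROP_FPS))
cap.release()
```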
# MAVROS and cv_bridge
cv_bridge
http://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython
mavros
http://wiki.ros.org/mavros#mavros.2BAC8-Plugins.sys_status
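A minimal cv_bridge usage sketch based on the tutorial above; the topic name "/camera/image_raw" and the node name are assumptions:

```python
import rospy
import cv2
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs.msg import Image

bridge = CvBridge()

def image_cb(msg):
    try:
        # Convert the ROS Image message into a BGR OpenCV array.
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    except CvBridgeError as e:
        rospy.logwarn(e)
        return
    cv2.imshow("camera", frame)
    cv2.waitKey(1)

rospy.init_node("camera_listener")
# "/camera/image_raw" is an assumed topic name; replace with the real one.
rospy.Subscriber("/camera/image_raw", Image, image_cb)
rospy.spin()
```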
# Color space selection
https://www.researchgate.net/publication/265308214_Color-spaces_and_color_segmentation_for_real-time_object_recognition_in_robotic_applications
The HSV color space seems to be the best color space to use in robotic applications since it clearly separates light and chromatic information and it is the one which least distorts the latter.
https://arxiv.org/pdf/1506.01472.pdf
Hence, segmentation in the HSV color space shows better performance than segmentation in the L*a*b* color space.
# Approximate color ranges and use of Morphological Transformations
https://www.opencv-srf.com/2010/09/object-detection-using-color-seperation.html
After thresholding the image, you'll see small, isolated white patches here and there. They may come from noise in the image or from small real objects that share the color of our main object. These unwanted white patches can be eliminated by applying **morphological opening**: an erosion followed by a dilation with the same structuring element.
The thresholded image may also have small white holes inside the main object, again usually caused by noise. These unwanted holes can be eliminated by applying **morphological closing**: a dilation followed by an erosion with the same structuring element.
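A minimal sketch of both operations with cv2.morphologyEx; the synthetic mask and the 7x7 kernel are placeholders for a real thresholded frame and a tuned kernel size:

```python
import cv2
import numpy as np

# Synthetic binary mask standing in for a thresholded camera frame:
# one large blob (the object) with a small hole, plus isolated noise pixels.
mask = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(mask, (100, 100), 60, 255, -1)   # main object
cv2.circle(mask, (100, 100), 2, 0, -1)      # small hole inside the object
mask[20, 20] = mask[180, 30] = 255          # isolated white noise pixels

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))  # assumed size

# Opening = erosion then dilation: removes the small white specks.
opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
# Closing = dilation then erosion: fills the small hole in the object.
cleaned = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
```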
Hue values of basic colors (OpenCV H scale, 0-179):
Green 38-75
Blue 75-130
Red 160-179
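A minimal HSV-thresholding sketch using the green range listed above; the synthetic frame and the S/V lower bounds of 100 are assumed starting points:

```python
import cv2
import numpy as np

# Synthetic BGR frame standing in for a camera image (pure green patch).
frame = np.zeros((120, 160, 3), dtype=np.uint8)
frame[30:90, 40:120] = (0, 255, 0)

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Green range from the table above; the S and V lower bounds are assumed
# values to reject washed-out and very dark pixels.
lower = np.array([38, 100, 100])
upper = np.array([75, 255, 255])
mask = cv2.inRange(hsv, lower, upper)  # 255 inside the green patch, 0 elsewhere
```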
# The OpenCV trackbar API
A callback function has to be provided.
Used to tune the optimal color-range selection interactively.
Source: http://www.1zlab.com/wiki/python-opencv-tutorial/opencv-trackbar/
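A sketch of the trackbar-based tuner described in the link; the window and trackbar names, device index, and fixed S/V bounds are placeholders:

```python
import cv2
import numpy as np

def nothing(_):
    # The callback is required by createTrackbar; values are read in the
    # loop with getTrackbarPos instead, so nothing is done here.
    pass

cv2.namedWindow("tuner")
cv2.createTrackbar("H low", "tuner", 0, 179, nothing)
cv2.createTrackbar("H high", "tuner", 179, 179, nothing)

cap = cv2.VideoCapture(0)               # assumed device index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    h_lo = cv2.getTrackbarPos("H low", "tuner")
    h_hi = cv2.getTrackbarPos("H high", "tuner")
    mask = cv2.inRange(hsv, np.array([h_lo, 100, 100]),
                       np.array([h_hi, 255, 255]))
    cv2.imshow("tuner", mask)
    if cv2.waitKey(1) & 0xFF == 27:     # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```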
# Key points for the light source
1. High contrast
2. Generalizable
3. Stable
Details: https://pyimagesearch.com/2021/04/28/opencv-color-spaces-cv2-cvtcolor/
# Problem: we don't have real images of the red flare
# For the gate
Would use feature detection.
Implementation video: https://www.youtube.com/watch?v=nnH55-zD38I
Some information:
https://blog.francium.tech/feature-detection-and-matching-with-opencv-5fd2394a590
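A sketch of one way to do this, using ORB descriptors with a ratio test (the exact detector used in the linked video/blog may differ; the image file names here are hypothetical):

```python
import cv2

# Hypothetical file names: a reference photo of the gate and a camera frame.
gate_ref = cv2.imread("gate_reference.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(gate_ref, None)
kp2, des2 = orb.detectAndCompute(frame, None)

# Brute-force Hamming matcher with Lowe's ratio test to keep distinctive matches.
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = bf.knnMatch(des1, des2, k=2)
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

out = cv2.drawMatches(gate_ref, kp1, frame, kp2, good, None)
cv2.imwrite("matches.jpg", out)
```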
# Problem for the drums
Searched for image datasets but didn't find any drum or bucket dataset.
We might have to collect the data ourselves.
1st possible solution: take the photos ourselves (real life); very time-consuming and tedious.
2nd possible solution: build a model (simulation) and use a script to export the desired output.
# Problems of the de-blue-green image
The traditional (non-learning) way is the fastest and, so far, the best option.
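The notes don't specify which traditional method this is; as one common example of the family, a gray-world white-balance sketch that rebalances the blue/green cast of an underwater frame (the input file name is hypothetical):

```python
import cv2
import numpy as np

frame = cv2.imread("underwater.jpg").astype(np.float32)  # hypothetical file

# Gray-world assumption: the average scene color should be neutral gray,
# so scale each channel so its mean matches the overall mean. This pulls
# down the dominant blue/green cast and boosts the attenuated red channel.
b_mean, g_mean, r_mean = cv2.mean(frame)[:3]
gray_mean = (b_mean + g_mean + r_mean) / 3.0
gains = np.array([gray_mean / b_mean, gray_mean / g_mean, gray_mean / r_mean])
balanced = np.clip(frame * gains, 0, 255).astype(np.uint8)
cv2.imwrite("balanced.jpg", balanced)
```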

Next implementation:
Might try some machine-learning approaches from GitHub.