---
title: Computer Vision
tags: software
---
# Computer Vision
Currently have code for edge detection, so we can detect the robots against their background. Will need to calibrate by testing on the actual board so that we only pick up robot edges and no extraneous information.
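A minimal sketch of that edge-detection step, assuming OpenCV's Canny detector; the image path and the thresholds are placeholders that would need tuning on the real board.

```python
import cv2

# Rough sketch: pull out robot edges against the board background.
frame = cv2.imread("board_frame.png")          # hypothetical test image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)    # smooth out board texture before edge detection
edges = cv2.Canny(blurred, 50, 150)            # thresholds are placeholders to calibrate on the board
cv2.imshow("edges", edges)
cv2.waitKey(0)
```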
Currently trying to get AprilTags working in case we need them, but it is possible we won't: the robots shouldn't drift too far out of position if we use computer vision to continuously correct them.
Update:
Have code for tracking multiple objects. For initially detecting the objects, we have a couple of options: use edge detection only and see how effective that is, or use OpenCV's built-in object detection (Haar cascade classifier), which would be slower but likely more accurate. Needs testing.
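A hedged sketch of the cascade-plus-tracker option, assuming opencv-contrib-python (for the `cv2.legacy` trackers) and a custom-trained cascade file `robot_cascade.xml`; neither of those assets exists yet, so this is only an illustration of the flow.

```python
import cv2

cap = cv2.VideoCapture(0)                      # assumed camera index
ok, frame = cap.read()

# Detect robots on the first frame with a (hypothetical) trained Haar cascade.
cascade = cv2.CascadeClassifier("robot_cascade.xml")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Hand each detection to its own CSRT tracker; the trackers then run per frame.
trackers = cv2.legacy.MultiTracker_create()
for (x, y, w, h) in detections:
    trackers.add(cv2.legacy.TrackerCSRT_create(), frame, (int(x), int(y), int(w), int(h)))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok, boxes = trackers.update(frame)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```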
Each robot would be bounded by a rectangular box (nothing should be allowed to collide within that box). Orientation would be determined by the location of the AprilTag (mounted on the front of the robot?).
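A minimal sketch of the keep-out test on those boxes, assuming axis-aligned (x, y, w, h) rectangles straight from the tracker; the `margin` parameter is a hypothetical safety buffer, not something decided yet.

```python
def boxes_collide(a, b, margin=0):
    """Axis-aligned overlap test for (x, y, w, h) bounding boxes.
    `margin` is an optional pixel buffer around the first box."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return (ax - margin < bx + bw and bx < ax + aw + margin and
            ay - margin < by + bh and by < ay + ah + margin)
```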
AprilTag identification is working.
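A sketch of reading the tags and deriving a heading from the tag's location, assuming the `pupil_apriltags` package and the tag36h11 family (both assumptions), with the tag mounted on the front of the robot as proposed above; the bounding-box center here is a placeholder for whatever the tracker reports.

```python
import math
import cv2
from pupil_apriltags import Detector   # assumption: using the pupil-apriltags package

detector = Detector(families="tag36h11")   # tag family is an assumption

frame = cv2.imread("board_frame.png")       # hypothetical test image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for tag in detector.detect(gray):
    tag_cx, tag_cy = tag.center
    # With the tag on the front of the robot, heading can be taken as the
    # direction from the robot's bounding-box center toward the tag center.
    box_cx, box_cy = 320.0, 240.0           # placeholder: center of the tracked bounding box
    heading = math.degrees(math.atan2(tag_cy - box_cy, tag_cx - box_cx))
    print(f"tag {tag.tag_id}: center=({tag_cx:.0f}, {tag_cy:.0f}), heading={heading:.1f} deg")
```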
11/2 Update:
Initially, edge detection for object detection was not working well. We considered using LEDs to outline each robot for a more distinct edge, but have since switched to finding edges after color masking, which seems much more promising.
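A sketch of the color-mask-then-edges step, assuming OpenCV 4; the HSV bounds and the minimum contour area are placeholders to be tuned to the robots' actual color on the real board.

```python
import cv2
import numpy as np

frame = cv2.imread("board_frame.png")        # hypothetical test image
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Assumed "robot color" range -- tune these bounds on the real board.
lower = np.array([100, 120, 70])
upper = np.array([130, 255, 255])
mask = cv2.inRange(hsv, lower, upper)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # drop speckle noise

# Contours of the mask give much cleaner robot outlines than raw edge detection.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
```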
The question now is which approach to take:
1. Create a tracker for each object detected with the method above.
2. Use the method above only to find bounding boxes on each frame, without relying on a tracker.
3. Switch to motion-based tracking so it only tracks the moving robot (probably not, since it would not detect the stationary robots we need to avoid colliding with).

Leaning toward option 1 or 2; a rough sketch of both follows.
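A minimal side-by-side sketch of options 1 and 2. It assumes a hypothetical `detect_robots()` helper wrapping the color-mask + contour step above, and the `cv2.legacy` trackers from opencv-contrib-python; none of these names are fixed yet.

```python
import cv2

def option_one(cap, detect_robots):
    """Option 1: seed one CSRT tracker per detected robot, then track frame to frame."""
    ok, frame = cap.read()
    trackers = cv2.legacy.MultiTracker_create()
    for (x, y, w, h) in detect_robots(frame):
        trackers.add(cv2.legacy.TrackerCSRT_create(), frame, (int(x), int(y), int(w), int(h)))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, boxes = trackers.update(frame)
        yield frame, boxes

def option_two(cap, detect_robots):
    """Option 2: no tracker state -- re-run detection on every frame."""
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield frame, detect_robots(frame)
```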