---
title: FRC vision
tags: [FRC視覺辨識]
---
# FRC vision

# References
* [LimeLight](https://limelightvision.io/), a hardware solution
* [GRIP](https://docs.wpilib.org/en/stable/docs/software/vision-processing/grip/introduction-to-grip.html), a PC application
* [FRCVision](https://docs.wpilib.org/en/stable/docs/software/vision-processing/raspberry-pi/installing-the-image-to-your-microsd-card.html) from WPILib, a Raspberry Pi image
* [OpenSight](https://opensight-cv.github.io/quickstart/installation/), a Raspberry Pi image
* [Chameleon Vision](https://chameleon-vision.readthedocs.io/en/latest/installation/coprocessor-setup.html), a server that runs on Raspbian

## GRIP
An image-processing pipeline GUI editor that runs on a PC.
![](https://docs.wpilib.org/en/stable/_images/the-grip-user-interface.png)

## FRCVision from WPILib
A very simple camera server that streams to the Driver Station.
Basic camera parameters can be adjusted, and the settings can be saved and imported.
![](https://i.imgur.com/CejrDSb.png)

## OpenSight
Similar to GRIP, but runs as a server on the Pi and provides the GUI editor through a web interface.
It seems to crash easily, though...?
![](https://opensight-cv.github.io/assets/images/simple_nodetree.png)

## Chameleon Vision
I could not get it to run...
no web server was started.


# Build from scratch! DIY is the way!

## tools
[GRIP](https://github.com/WPIRoboticsProjects/GRIP), this tool again
![](https://docs.wpilib.org/en/stable/_images/the-grip-user-interface.png)

## Basic pipeline
ref: https://docs.wpilib.org/en/stable/docs/software/vision-processing/introduction/identifying-and-processing-the-targets.html

1. Set up the hardware: a ring of green LEDs around the lens, as close to the lens as possible
2. Disable all of the camera's automatic adjustments and turn the brightness down very low, so that only the green reflection from the retroreflective tape remains visible (overexposed areas appear white)
3. (optional) blur + downsample to shrink the image: reduces computation and removes noise
4. Filter out the green target in the HSV color space
5. (optional) dilate + erode to remove small jagged edges
6. Find the contours
7. Reject regions that are too large or too small (noise)
8. (optional) identify the shape: approxPolyDP or various ad hoc methods
9. Compute the target's position (up/down/left/right within the field of view) and distance from the bounding box or various ad hoc methods
10. Send the values back to the RoboRIO via NetworkTables

## 3D pose estimation
ref: https://docs.opencv.org/master/d7/d53/tutorial_py_pose.html
ref: https://github.com/frc7589/Vision-2020/blob/master/pose%20estimation/pose.py

0. Camera calibration to obtain the intrinsic matrix

1~8. Same as the basic pipeline above

9. Refine the corner points: cornerSubPix
10. Define the corners' positions in world coordinates
11. Feed the corners' 2D (image) and 3D (world) coordinates into solvePnP to obtain the extrinsic matrix
12. Compute the target's relative bearing from its world coordinates and the camera's extrinsic matrix