---
tags: FRC Vision Recognition
---
# FRC Vision Recognition
---
## Prerequisite Downloads
https://docs.google.com/presentation/d/1Sa7acry4eblmslcSq-KLbn7Lrc23UFy_cmQk8rc373s/edit#slide=id.g1033c76bf6c_0_61
---
## Connecting to the Raspberry Pi
- Write the WPILibPi image to the SD card with Raspberry Pi Imager
- Connect the Pi and the computer over wired Ethernet
- On the computer, share the Wi-Fi connection (`Internet Connection Sharing`) with the __Ethernet__ adapter
- http://wpilibpi.local/
- Client mode:
    - Connected directly to a computer: client off
    - Connected on the robot (with a roboRIO): client on (set the team number)
---
## Shuffleboard
- `Shuffleboard` is included in the [`WPILib` installer](https://github.com/wpilibsuite/allwpilib/releases/tag/v2021.3.1)
- Set `File` -> `Preferences` -> `NetworkTables` -> `Server` to __wpilibpi.local__ \
(switch back to the team number when connected on the robot)
- Known bug in the 2021 Shuffleboard: the camera stream does not start automatically; [press the record button](https://docs.wpilib.org/en/stable/docs/yearly-overview/known-issues.html#shuffleboard-camera-not-shown)
---
## Getting the Camera Image
- Start from the `Python Example`
Before the `while True` loop, open the camera input stream and the output video stream:
::: success
:warning: The string passed to `getServer` on line 2 must match the camera name configured on the web dashboard
:::
```python=
CS = CameraServer.getInstance()
visionCam = CS.getServer('rPi Camera 0')  # name must match the web dashboard
h = visionCam.getVideoMode().height
w = visionCam.getVideoMode().width
input_stream = CS.getVideo(camera=visionCam)
output_stream = CS.putVideo("processed", w, h)  # putVideo(name, width, height)
```
Replace the `while True` loop: grab the current frame, pass it to the custom function `processImg`, and output the resulting image to `output_stream`:
```python=
input_img = None
# loop forever
while True:
    grab_time, input_img = input_stream.grabFrame(input_img)
    if grab_time == 0:
        output_stream.notifyError(input_stream.getError())
        continue
    output_img, output_val = processImg(input_img)
    output_stream.putFrame(output_img)
    time.sleep(0.01)
```
Anywhere outside of any other block (for example, before `if __name__ == "__main__":`), define the custom image-processing function.
For now it simply returns a copy of the original image:
```python=
def processImg(input_img):
    output_img = np.copy(input_img)
    return output_img, ()
```
Since `np` is used, import numpy in the import section at the top:
```python=
import numpy as np
```
---
## Color Filtering
- Convert from RGB to the [HSV color space](https://zh.wikipedia.org/wiki/HSL%E5%92%8CHSV%E8%89%B2%E5%BD%A9%E7%A9%BA%E9%97%B4)
- Use GRIP to experiment with the min/max threshold values (or use the sketch after this list)
    - Source -> Webcam: uses the computer's camera
    - Source -> IP Camera: connects to the Pi's video stream
        - URL: from the web dashboard's camera preview, __copy the image link__, e.g. `http://wpilibpi.local:1181/___.mjpeg`
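If GRIP is not handy, the bounds can also be tried out offline on a frame saved from the camera preview. A minimal sketch, assuming a saved image named `sample.jpg` (a placeholder name) and the same example bounds used below:
```python=
import cv2

# Load a frame saved from the camera preview (file name is a placeholder)
img = cv2.imread("sample.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Candidate lower/upper HSV bounds; adjust and re-run to see what passes
mask = cv2.inRange(hsv, (45, 75, 55), (70, 255, 255))
print("pixels in range:", cv2.countNonZero(mask))

cv2.imshow("mask", mask)  # needs a local desktop session, not SSH
cv2.waitKey(0)
```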
Show the filtered image:
```python=
def processImg(input_img):
    output_img = np.copy(input_img)
    hsv_img = cv2.cvtColor(input_img, cv2.COLOR_BGR2HSV)
    binary_img = cv2.inRange(hsv_img, (45, 75, 55), (70, 255, 255))
    return binary_img, ()
```
Since `cv2` is used, add it to the import section at the top:
```python=
import cv2
```
---
In the custom function `processImg`, pick the largest contour and compute its area and center position:
```python=
im2, contour_list, hierarchy = cv2.findContours(binary_img, mode=cv2.RETR_EXTERNAL, method=cv2.CHAIN_APPROX_SIMPLE)
max_area = 0
for contour in contour_list:
    area = cv2.contourArea(contour)
    if area < 50:
        # ignore small noise contours
        continue
    cv2.drawContours(output_img, contour, -1, color=(255, 255, 255), thickness=-1)
    if area > max_area:
        max_area = area
        max_contour = contour
output_val = (0, 0, 0)
if max_area > 0:
    area = cv2.contourArea(max_contour)
    rect = cv2.minAreaRect(max_contour)
    center, size, angle = rect
    center = [int(dim) for dim in center]  # Convert to int so we can draw
    cv2.drawContours(output_img, [np.int0(cv2.boxPoints(rect))], -1, color=(0, 0, 255), thickness=2)
    cv2.circle(output_img, center=tuple(center), radius=3, color=(0, 0, 255), thickness=-1)
    output_val = (center[0], center[1], area)
return output_img, output_val
```
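To sanity-check `processImg` off-robot, it can be called directly on a saved frame from the same file where the function and the imports live. This assumes the contour code above has already been placed inside `processImg`; `sample.jpg` is again a placeholder:
```python=
test_img = cv2.imread("sample.jpg")
out_img, (cx, cy, area) = processImg(test_img)
print("center:", cx, cy, "area:", area)
cv2.imwrite("processed.jpg", out_img)  # inspect the drawn contours and box
```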
In `__main__`, before entering the `while True` loop, get the NetworkTable:
```python=
# Table for vision output information
vision_nt = NetworkTables.getTable('Vision')
```
Inside the `while True` loop, after getting `output_val`, publish the values to the NetworkTable:
```python=
vision_nt.putNumber('center_x', output_val[0])
vision_nt.putNumber('center_y', output_val[1])
vision_nt.putNumber('area', output_val[2])
```
Since `NetworkTables` is used, in the import section replace
`from networktables import NetworkTablesInstance`
with
`from networktables import NetworkTablesInstance, NetworkTables`
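To verify the published values without a robot or Shuffleboard, a laptop on the same network can read them back with pynetworktables. A minimal sketch, assuming the Pi is in server mode (client off) and reachable as wpilibpi.local:
```python=
import time
from networktables import NetworkTables

# Connect as a client to the NetworkTables server running on the Pi
NetworkTables.initialize(server="wpilibpi.local")
vision_nt = NetworkTables.getTable("Vision")

while True:
    print(vision_nt.getNumber("center_x", -1),
          vision_nt.getNumber("center_y", -1),
          vision_nt.getNumber("area", -1))
    time.sleep(0.5)
```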
---
## Complete Program
https://hackmd.io/@wolfdigit/ry7gRk5KY
---
---
# LED Ring
---
## Wiring
| LED ring pin | 5V | GND | DI     |
|--------------|----|-----|--------|
| Pi pin       | 5V | GND | GPIO18 |

Pinout diagram:
https://docs.microsoft.com/zh-tw/windows/iot-core/learn-about-hardware/pinmappings/pinmappingsrpi
---
## Connecting via SSH
SSH session:
- host: wpilibpi.local
- username: pi
- password: raspberry
---
## Installing the `adafruit_blinka` Library
If the Pi's clock is too far off, the HTTPS certificate cannot be verified, so set the time first
(an error of up to about a day is acceptable):
```shell=
sudo date -s '2021/12/05 16:00:00'
```
```shell=
sudo pip3 install adafruit_blinka
```
(If there are still network connection problems):
1. Check that `Internet Connection Sharing` is configured correctly
2. Change the nameserver:
```shell=
sudo nano /etc/resolv.conf
```
Change `nameserver 192.168.137.1` to `nameserver 8.8.8.8`
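Once the install finishes (and any network issues are resolved), a quick sanity check over the same SSH session confirms the library imports; nothing is driven yet, so sudo is not required for this step:
```python=
# check_blinka.py (name is arbitrary); run with: python3 check_blinka.py
import board
import digitalio
import neopixel_write
print("adafruit_blinka imports OK")
```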
---
## Light-Up Program
Open an editor:
```shell=
sudo nano light.py
```
Program:
```python=
#!/usr/bin/env python3
import board
import neopixel_write
import digitalio
import sys

luminance = 1.0
pin = digitalio.DigitalInOut(board.D18)
pin.direction = digitalio.Direction.OUTPUT
colors = [[255, 255, 0]]
buff = []
# 3 bytes per pixel, 16 pixels on the ring
for i in range(16):
    c = colors[0]
    buff.append(int(c[0]*luminance))
    buff.append(int(c[1]*luminance))
    buff.append(int(c[2]*luminance))
neopixel_write.neopixel_write(pin, buff)
```
Run:
```shell=
sudo python3 light.py
```
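To turn the ring off again, write zero for every byte; a minimal sketch assuming the same wiring and a 16-LED ring (run it with sudo like the program above):
```python=
#!/usr/bin/env python3
import board
import digitalio
import neopixel_write

pin = digitalio.DigitalInOut(board.D18)
pin.direction = digitalio.Direction.OUTPUT
# 16 pixels x 3 bytes each, all zero -> every LED off
neopixel_write.neopixel_write(pin, bytearray(16 * 3))
```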
---
---
# roboRIO Program (Java example)
## Reading Values from the NetworkTable
Declarations:
```java=
NetworkTableInstance inst;
NetworkTable RPiTable;
private NetworkTableEntry entry_centerX;
private NetworkTableEntry entry_centerY;
private NetworkTableEntry entry_area;
private NetworkTableEntry entry_x;
private NetworkTableEntry entry_y;
```
init:
```java=
inst = NetworkTableInstance.getDefault();
RPiTable = inst.getTable("Vision");
entry_centerX = RPiTable.getEntry("center_x");
entry_centerY = RPiTable.getEntry("center_y");
entry_area = RPiTable.getEntry("area");
```
periodic (the constants below assume an 800x600 image, normalizing the pixel center to roughly -1..1):
```java=
double centerX;
double centerY;
double area;
centerX = (entry_centerX.getDouble(0)-400)/400;
centerY = (entry_centerY.getDouble(0)-300)/300;
area = entry_area.getDouble(0);
```