# 11/25 Meeting
###### tags: `AICA`
> [Meeting link (21:30~)](https://teams.microsoft.com/l/meetup-join/19:meeting_NzlhMTMwZDUtNzU4OC00Mjg3LTk0YzQtMTVmNWE1NGIzZjk1@thread.v2/0?context=%7B%22Tid%22:%22c2e7753f-aa05-4abc-8c02-293ad122ca19%22,%22Oid%22:%22fdf9c304-4ad1-4191-955a-b0e10b7fcf43%22%7D)
---
## Mediapipe Hands
- [Installation guide](https://aijishu.com/a/1060000000182238)
- [MediaPipe Visualizer](https://viz.mediapipe.dev/)
    - A visual way to understand how MediaPipe works
- [Hand-landmark model](https://github.com/google/mediapipe)
- [Image-based recognition + sample code](https://blog.csdn.net/stq054188/article/details/114646071)
    - Can be implemented by grabbing still frames from live video
- Implementation approach
    - Use the 21 hand landmarks
    - Difficulty: determining joint angles
        - (We still don't fully understand how angles are classified; one possible approach is sketched below.)
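A minimal sketch of how a joint angle could be computed from three of the 21 landmarks (our own assumption, not taken from the linked articles; `joint_angle` and the landmark indices in the comment are illustrative):
```python=
import numpy as np

def joint_angle(a, b, c):
    """Angle at landmark b, in degrees, given three (x, y) points."""
    a, b, c = np.asarray(a), np.asarray(b), np.asarray(c)
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# e.g. bend of the index finger's PIP joint, using landmarks 5, 6 and 7:
# angle = joint_angle(pts[5], pts[6], pts[7])
```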
## Pygame
- [Pygame basic introduction](https://hackmd.io/@NetJagaimo/B1scMoNxS)
    - Setting up a window
    - Drawing geometric shapes
- Basic structure (a minimal sketch follows below)
    - Input: keyboard, mouse
    - Output: graphics, audio
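A minimal sketch of that basic structure using the standard pygame event loop (our own example, not taken from the linked tutorial):
```python=
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))   # set up the window
clock = pygame.time.Clock()

running = True
while running:
    for event in pygame.event.get():           # input: keyboard / mouse
        if event.type == pygame.QUIT:
            running = False
    screen.fill((0, 0, 0))
    pygame.draw.circle(screen, (255, 0, 0), (320, 240), 50)  # draw a shape
    pygame.display.flip()                      # output: graphics
    clock.tick(30)                             # cap the frame rate at 30 FPS
pygame.quit()
```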
### GUI
- Method 1
    :bulb: [PyQt5 intro: writing your first GUI in Python](https://zhung.com.tw/article/pyqt%E5%85%A5%E9%96%80%E7%94%A8python%E5%AF%AB%E7%AC%AC%E4%B8%80%E6%94%AFgui/)
    :bulb: [PyQt5 crash course: introduction to Qt Designer](https://www.it145.com/9/49744.html)
    :bulb: [Python GUI programming with PyQt5, from basics to practice](https://www.796t.com/article.php?id=179975)
    - Works with Python, but opinions differ on whether it can be combined with Pygame; it looks like we might not need Pygame at all(?)
    - Layout design is not hard; the hard part is wiring each button to the function it should trigger (a minimal sketch follows this item)
    - Generally described online as harder to pick up
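For reference, a minimal PyQt5 sketch of the "wiring buttons" part (our own example; the button label and handler are placeholders):
```python=
import sys
from PyQt5.QtWidgets import QApplication, QPushButton, QWidget

app = QApplication(sys.argv)
window = QWidget()
button = QPushButton('Start', window)
# The "hard part": connect the button's clicked signal to the function to run
button.clicked.connect(lambda: print('start pressed'))
window.show()
sys.exit(app.exec_())
```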
- Method 2
    :bulb: [Tkinter](https://blog.techbridge.cc/2019/09/21/how-to-use-python-tkinter-to-make-gui-app-tutorial/)
    - Python's built-in GUI toolkit
    - But the layout apparently has to be coded by hand
    - [Combining Pygame and Tkinter: drawing a circle](https://www.796t.com/post/N2d3cjg=.html)
```python=
# Create the embed frame that will host the pygame window
embed = tk.Frame(root, width=500, height=500)
```
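Judging from the linked article, the embedding presumably works by pointing SDL at the Tk frame's window ID before pygame initializes. A sketch under that assumption (the `SDL_WINDOWID` trick is honored by SDL1-based pygame builds on X11; newer SDL2 builds may ignore it):
```python=
import os
import tkinter as tk
import pygame

root = tk.Tk()
embed = tk.Frame(root, width=500, height=500)   # frame that hosts pygame
embed.pack()
root.update()          # realize the frame so it has a valid window id

# Tell SDL to render into the Tk frame instead of opening its own window
os.environ['SDL_WINDOWID'] = str(embed.winfo_id())

pygame.init()
screen = pygame.display.set_mode((500, 500))
pygame.draw.circle(screen, (255, 0, 0), (250, 250), 100)
pygame.display.update()
root.mainloop()
```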
- Method 3
    - Build the interface purely in Pygame by layering images, with coordinates set by hand
    - Trigger actions through Pygame's mouse-click events (see the sketch below)
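A minimal sketch of Method 3 (our own example; `button.png` is a placeholder asset):
```python=
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
button_img = pygame.image.load('button.png')            # placeholder asset
button_rect = button_img.get_rect(topleft=(100, 100))   # coordinates set by hand

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.MOUSEBUTTONDOWN:
            if button_rect.collidepoint(event.pos):     # did the click hit the image?
                print('button clicked')
    screen.fill((30, 30, 30))
    screen.blit(button_img, button_rect)                # layer the image on top
    pygame.display.flip()
pygame.quit()
```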
## Integration
==1. How to install Mediapipe Hands & Pygame on the Raspberry Pi==
### **Mediapipe**

#### [Mediapipe Hands on RPi](https://circuitdigest.com/microcontroller-projects/gesture-controlled-media-player-using-raspberry-pi-and-mediapipe)
- Ready-made installation commands + an example that controls a media player with gestures
- Approaches we can borrow (a finger-counting sketch follows below)
    - Number of raised fingers
    - Position coordinates of the 21 landmarks
- ==If we use this, do we no longer need to train our own model? Then there would be no issue of unifying model formats?==
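A rough sketch of how a finger count could be derived from the 21 landmark coordinates (our own assumption about how the linked project works; `count_fingers` and the thumb check are illustrative):
```python=
FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky tip ids
FINGER_PIPS = [6, 10, 14, 18]   # the corresponding PIP joint ids

def count_fingers(pts):
    """pts: list of 21 (x, y) tuples in image coordinates (y grows downward)."""
    count = 0
    for tip, pip in zip(FINGER_TIPS, FINGER_PIPS):
        if pts[tip][1] < pts[pip][1]:   # tip above its PIP joint => extended
            count += 1
    # Thumb: compare x of tip (4) and IP joint (3); assumes a mirrored right hand
    if pts[4][0] > pts[3][0]:
        count += 1
    return count
```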
#### [Using the MediaPipe framework on a Raspberry Pi](https://medium.com/@RouYunPan/%E5%9C%A8%E6%A8%B9%E8%8E%93%E6%B4%BE%E4%B8%8A%E4%BD%BF%E7%94%A8mediapipe%E6%A1%86%E6%9E%B6-fa766a243a08)
- MediaPipe installation requirements
    - PC: x86/arm64 architecture, OpenCV 3.x, Python 3.7
    - Raspberry Pi: Ubuntu, [Docker](https://phoenixnap.com/kb/docker-on-raspberry-pi)
#### API introduction and example
1. STATIC_IMAGE_MODE
    Either true or false; false is meant for continuous video detection. In that mode the hand found in the first frame acts as the reference and is then tracked across subsequent frames.
2. MAX_NUM_HANDS
    The maximum number of hands to detect; normally set to 2.
3. MIN_DETECTION_CONFIDENCE
    The minimum confidence for a detection to be treated as a hand.
```python=
import cv2
import mediapipe as mp
mp_drawing = mp.solutions.drawing_utils
mp_drawing_styles = mp.solutions.drawing_styles
mp_hands = mp.solutions.hands

# For static images:
IMAGE_FILES = []
with mp_hands.Hands(
    static_image_mode=True,
    max_num_hands=2,
    min_detection_confidence=0.5) as hands:
  for idx, file in enumerate(IMAGE_FILES):
    # Read an image, flip it around y-axis for correct handedness output (see
    # above).
    image = cv2.flip(cv2.imread(file), 1)
    # Convert the BGR image to RGB before processing.
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    # Print handedness and draw hand landmarks on the image.
    print('Handedness:', results.multi_handedness)
    if not results.multi_hand_landmarks:
      continue
    image_height, image_width, _ = image.shape
    annotated_image = image.copy()
    for hand_landmarks in results.multi_hand_landmarks:
      print('hand_landmarks:', hand_landmarks)
      print(
          f'Index finger tip coordinates: (',
          f'{hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].x * image_width}, '
          f'{hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].y * image_height})'
      )
      mp_drawing.draw_landmarks(
          annotated_image,
          hand_landmarks,
          mp_hands.HAND_CONNECTIONS,
          mp_drawing_styles.get_default_hand_landmarks_style(),
          mp_drawing_styles.get_default_hand_connections_style())
    cv2.imwrite(
        '/tmp/annotated_image' + str(idx) + '.png', cv2.flip(annotated_image, 1))
    # Draw hand world landmarks.
    if not results.multi_hand_world_landmarks:
      continue
    for hand_world_landmarks in results.multi_hand_world_landmarks:
      mp_drawing.plot_landmarks(
          hand_world_landmarks, mp_hands.HAND_CONNECTIONS, azimuth=5)

# For webcam input:
cap = cv2.VideoCapture(0)
with mp_hands.Hands(
    model_complexity=0,
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5) as hands:
  while cap.isOpened():
    success, image = cap.read()
    if not success:
      print("Ignoring empty camera frame.")
      # If loading a video, use 'break' instead of 'continue'.
      continue

    # To improve performance, optionally mark the image as not writeable to
    # pass by reference.
    image.flags.writeable = False
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    results = hands.process(image)

    # Draw the hand annotations on the image.
    image.flags.writeable = True
    image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
    if results.multi_hand_landmarks:
      for hand_landmarks in results.multi_hand_landmarks:
        mp_drawing.draw_landmarks(
            image,
            hand_landmarks,
            mp_hands.HAND_CONNECTIONS,
            mp_drawing_styles.get_default_hand_landmarks_style(),
            mp_drawing_styles.get_default_hand_connections_style())
    # Flip the image horizontally for a selfie-view display.
    cv2.imshow('MediaPipe Hands', cv2.flip(image, 1))
    if cv2.waitKey(5) & 0xFF == 27:
      break
cap.release()
```
### **Pygame**
#### Pygame on RPi
- Installation (the first package is for Python 2, the second for Python 3)
```bash
sudo apt-get install python-pygame
```
```bash
sudo apt-get install python3-pygame
```
- If we understand correctly, once pygame is installed we can run our finished .py file on the Raspberry Pi directly:
```bash
python xxxxxx.py
```
==2. How to get Mediapipe Hands results into Pygame==
### **Method 1 - Keyboard Keypresses**
:bulb: [reference code](https://www.analyticsvidhya.com/blog/2021/06/gesture-controlled-video-game/)
- The simpler approach, though we are not sure it will work on the Raspberry Pi (on Linux, pynput generally needs a running X session to inject key events)
- Two .py files
    mediapipe_hands.py
```python=
from pynput.keyboard import Key, Controller   # Key.right matches pynput's API

keyboard = Controller()
if gesture_recognized_correctly:              # placeholder for the gesture check
    keyboard.press(Key.right)
```
pygame.py (note: a file actually named `pygame.py` would shadow the `pygame` package on import, so the real script needs a different name)
```python=
class Game:
    def xxxxxx(self): ...
    def oooooo(self): ...
    def correct(self):
        for event in pygame.event.get():
            # check event.type first: only key events carry a .key attribute
            if event.type == pygame.KEYDOWN and event.key == pygame.K_RIGHT:
                pass  # light up the correctly answered gesture
```
### **Method 2 - Implement directly with Mediapipe Hands' built-in features (still researching them)**
:bulb: [Hand-controlled Tetris - demo video](https://youtu.be/sFDt9upueRE)
:bulb: [Hand-controlled Tetris - source code](https://gist.github.com/kpkpkps/79357982a6044553baf3610ad39d0c90)
- The source code seems to cover almost everything we need
    - Game interface
    - Gesture window
    - Music
    - Game over
    - Scoring
    - (Probably only a timer is missing)
- Harder; still working through it and haven't fully understood it yet
- A single .py file
    The part of the source code that integrates Mediapipe Hands with Pygame:
```python=
# Set up the hand tracker
success, img = cam.read()
imgg = cv2.flip(img, 1)
imgRGB = cv2.cvtColor(imgg, cv2.COLOR_BGR2RGB)
results = hands.process(imgRGB)
if results.multi_hand_landmarks:
    for handLms in results.multi_hand_landmarks:
        for id, lm in enumerate(handLms.landmark):
            h, w, c = imgg.shape
            if id == 0:                   # wrist landmark: start fresh lists
                x = []
                y = []
            x.append(int(lm.x * w))
            y.append(int((1 - lm.y) * h))  # flip y so "up" means a larger value
            # This will track the hand gestures
            # (landmarks: 0 = wrist, 3/4 = thumb IP/tip, 17/20 = pinky MCP/tip)
            if len(y) > 20:               # all 21 landmarks collected
                if (x[0] > x[3] > x[4]) and not (y[20] > y[17]):
                    left_wait += 1        # thumb folded, pinky down -> move left
                if not (x[0] > x[3] > x[4]) and (y[20] > y[17]):
                    right_wait += 1       # thumb out, pinky up -> move right
                if (x[0] > x[3] > x[4]) and (y[20] > y[17]):
                    rotate_wait += 1      # thumb folded, pinky up -> rotate
        mpDraw.draw_landmarks(imgg, handLms, mpHands.HAND_CONNECTIONS)
else:
    down_wait += 1                        # no hand in frame -> drop the piece
```
## Do we split the project work along these pipeline stages?