# 【vMaker Edge AI Column #22】Easily Master Pose Detection with Intel OpenVINO and YOLOv11
Author: Jack OmniXRI, 2024/10/15
![vMaker_EdgeAI_22_00](https://hackmd.io/_uploads/HyGGU_C1Jx.jpg)
Anyone who has played with AI image "object detection" is surely familiar with YOLO (You Only Look Once). Since the first version (v1) appeared in 2015, thanks to the efforts of many experts it has now evolved to its eleventh version (v11); v4, v7 and v9 in particular were contributed by Director Hung-Yuan Liao (廖弘源) of the Institute of Information Science, Academia Sinica, and his student Dr. Chien-Yao Wang (王建堯).
The latest release, YOLOv11 [1][2], comes from Ultralytics, the company behind the earlier v8 on which it is based, and was announced at the end of September 2024 at the YOLO Vision 2024 event [3]. Like v8, this version provides models for image classification, object detection (with both axis-aligned and oriented bounding boxes), instance segmentation, object tracking and pose estimation (human keypoint/skeleton detection), and it supports multiple inference frameworks, including Google TensorFlow, PyTorch, ONNX, Apple CoreML and Intel OpenVINO.
To help everyone get started quickly, Intel promptly added three YOLOv11 examples, covering object detection, pose estimation and instance segmentation, to its open-source OpenVINO Notebooks repository [4]. They also run on Google Colab, so you can try them without installing OpenVINO on a desktop or laptop.
Next, let's follow the source-code documentation [5] to see how to run the **pose estimation** example "**Convert and Optimize YOLOv11 keypoint detection model with OpenVINO™**" and how it works. The complete source code is available at [6]; clicking the link takes you straight into the Google Colab environment.
Before running, it is recommended to click "File" - "Save a copy in Drive" to copy the notebook to your own Google Drive, which makes later modifications and experiments easier. Then click "Edit" - "Clear all outputs" so you can observe the output produced during the run. Finally, click "Runtime" - "Run all" to see all of the results.
This example program can essentially be broken into five major parts, as listed below.
1. Inference with the original YOLOv11 model
2. Inference after conversion to OpenVINO IR
3. Inference after NNCF compression and optimization
4. Comparison with the benchmark tool
5. Continuous video inference demo
To make things easier to learn, the complete source code [6] has been simplified into the five steps above and the comments have been rewritten as easy-to-follow Chinese notes; the new, complete example and explanations are available at the link below.
**[Complete Colab example program (click to run directly)](https://colab.research.google.com/github/OmniXRI/openvino_yolov11/blob/main/yolov11_keypoint_detection(sample).ipynb)**
## 1. Inference with the Original YOLOv11 Model
### 1.1 Install Intel OpenVINO, Ultralytics (YOLOv11) and the Required Packages
Download and install OpenVINO, NNCF, PyTorch, Ultralytics (YOLOv11), OpenCV and the other related packages. Because OpenVINO on Colab can only run on the Intel Xeon CPU, the CPU build of PyTorch is installed here.
```python=
%pip install -q "openvino>=2024.0.0" "nncf>=2.9.0"
%pip install -q "protobuf==3.20.*" "torch>=2.1" "torchvision>=0.16" "ultralytics==8.3.0" tqdm opencv-python --extra-index-url https://download.pytorch.org/whl/cpu
```
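Once the installation finishes, you can optionally confirm which package versions ended up in the environment. This quick check is not part of the original notebook and only uses standard Python tooling.
```python=
# Optional check (not in the original notebook): show the installed package versions.
from importlib.metadata import version

import ultralytics

print("OpenVINO:", version("openvino"))
print("NNCF:", version("nncf"))
print("Ultralytics:", ultralytics.__version__)
```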
### 1.2 Download and Import the Required Utility Module
Download notebook_utils.py into the working directory and import the download_file, VideoPlayer and device_widget helpers.
```python=
from pathlib import Path
# Fetch `notebook_utils` module
import requests
r = requests.get(
    url="https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/latest/utils/notebook_utils.py",
)
open("notebook_utils.py", "w").write(r.text)
from notebook_utils import download_file, VideoPlayer, device_widget
```
### 1.3 Download a Test Image
Download the image file intel_rnb.jpg from the web; you can point to a different image by replacing the url below.
```python=
# Download a test sample
IMAGE_PATH = Path("./data/intel_rnb.jpg")
download_file(
    url="https://storage.openvinotoolkit.org/repositories/openvino_notebooks/data/data/image/intel_rnb.jpg",
    filename=IMAGE_PATH.name,
    directory=IMAGE_PATH.parent,
)
```
### 1.4 Select and Download the Inference Model
Build a list of model names that can be picked from a drop-down box; the default is item [0], yolo11n-pose.
Both yolov8 and yolo11 model names are supported here. The suffixes n, s, m, l and x indicate the model size from smallest to largest: smaller models infer faster with slightly lower accuracy, while larger ones are slower but a little more accurate, so choose according to your actual needs.
```python=
import ipywidgets as widgets
model_id = [
    "yolo11n-pose",
    "yolo11s-pose",
    "yolo11m-pose",
    "yolo11l-pose",
    "yolo11x-pose",
    "yolov8n-pose",
    "yolov8s-pose",
    "yolov8m-pose",
    "yolov8l-pose",
    "yolov8x-pose",
]
model_name = widgets.Dropdown(options=model_id, value=model_id[0], description="Model")
model_name
```
### 1.5 Instantiate YOLO and Run a Test Inference
Download the model selected in the previous step and instantiate it as pose_model.
Then run inference directly on the image specified earlier and obtain the pose (keypoint) result res (including object bounding boxes, classes and confidence scores), which is drawn onto the image.
The total time and the time spent on each stage are also reported, covering preprocessing (image conversion, etc.), inference (pose estimation) and postprocessing (extracting values and drawing the result).
Note: in principle, if all you want is the pose estimation result you could stop here. Still, make a note of the timings and confidence scores so you can compare them with the OpenVINO-processed versions later (a short sketch showing how to read the keypoint values programmatically follows the output below).
```python=
from PIL import Image
from ultralytics import YOLO
POSE_MODEL_NAME = model_name.value
pose_model = YOLO(f"{POSE_MODEL_NAME}.pt")
label_map = pose_model.model.names
res = pose_model(IMAGE_PATH)
Image.fromarray(res[0].plot()[:, :, ::-1])
```
Output:
image 1/1 /content/data/intel_rnb.jpg: 480x640 1 person, 529.1ms
Speed: 23.2ms preprocess, 529.1ms inference, 33.4ms postprocess per image at shape (1, 3, 480, 640)
![yolov11_1](https://hackmd.io/_uploads/ByjIHrCJJe.png =320x)
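If you want to use the detected keypoints in your own code rather than only drawing them, they can be read from the returned Results object. The sketch below is not part of the original notebook and assumes the standard Ultralytics 8.x Results API (res[0].keypoints, res[0].boxes).
```python=
# Minimal sketch (assumes the Ultralytics 8.x Results API, not part of the original notebook):
# read the per-person boxes and the 17 COCO keypoints from the result obtained above.
kpts = res[0].keypoints   # Keypoints object: xy coordinates of each body joint
boxes = res[0].boxes      # Boxes object: one bounding box per detected person

for person_idx, xy in enumerate(kpts.xy):
    print(f"Person {person_idx}: box confidence = {float(boxes.conf[person_idx]):.2f}")
    for joint_idx, (x, y) in enumerate(xy):
        # xy has shape (17, 2) holding pixel coordinates of each body joint
        print(f"  joint {joint_idx}: ({float(x):.1f}, {float(y):.1f})")
```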
## 2. Inference After Conversion to OpenVINO IR
### 2.1 Convert the Model to OpenVINO IR Format
Ultralytics natively supports exporting models to the Intel OpenVINO IR (xml + bin) format; simply run the code below.
The converted model is stored under the /yolo11n-pose_openvino_model/ directory. The weights stay in floating-point form (the export passes half=True, so they are saved as FP16).
Note: keep the dynamic shape option (dynamic=True) enabled to make the following steps easier (a quick way to verify this is sketched after the code below).
```python=
# pose estimation model
pose_model_path = Path(f"{POSE_MODEL_NAME}_openvino_model/{POSE_MODEL_NAME}.xml")
if not pose_model_path.exists():
    pose_model.export(format="openvino", dynamic=True, half=True)
```
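As a quick check that the export really preserved dynamic shapes, you can load the generated IR and print its input shape; dynamic dimensions are reported as `?`. This uses only the standard OpenVINO runtime API and is not part of the original notebook.
```python=
# Optional check (not in the original notebook): confirm the exported IR kept dynamic shapes.
import openvino as ov

ir_model = ov.Core().read_model(pose_model_path)
for model_input in ir_model.inputs:
    # With dynamic=True the batch/spatial dimensions appear as dynamic ("?")
    print(model_input.any_name, model_input.partial_shape)
```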
### 2.2 Select the Inference Device
On Google Colab only the CPU can be selected; when running locally you can also pick the Intel GPU (integrated graphics) to speed up inference. For convenience, you can simply set it to AUTO and let the system choose.
```python=
device = device_widget()
device
```
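The device_widget helper simply builds a drop-down from the devices OpenVINO detects. If you want to see what is available on your machine yourself, the one-liner below (standard OpenVINO API, not part of the original notebook) lists them.
```python=
# Optional: list the inference devices OpenVINO can see,
# e.g. ['CPU'] on Colab or ['CPU', 'GPU'] on a local Intel machine.
import openvino as ov

print(ov.Core().available_devices)
```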
### 2.3 Test a Single Image
The test image from step 1.3 is reused here to check whether the model converted to Intel OpenVINO IR format still infers correctly.
The result is correct, the inference time drops by more than half, and the accuracy stays close to the original level.
```python=
import openvino as ov  # import the OpenVINO library

core = ov.Core()  # create the OpenVINO Core
pose_ov_model = core.read_model(pose_model_path)  # read the inference model

ov_config = {}
if device.value != "CPU":
    pose_ov_model.reshape({0: [1, 3, 640, 640]})
if "GPU" in device.value or ("AUTO" in device.value and "GPU" in core.available_devices):
    ov_config = {"GPU_DISABLE_WINOGRAD_CONVOLUTION": "YES"}
pose_compiled_model = core.compile_model(pose_ov_model, device.value, ov_config)  # compile the model

pose_model = YOLO(pose_model_path.parent, task="pose")

if pose_model.predictor is None:
    custom = {"conf": 0.25, "batch": 1, "save": False, "mode": "predict"}  # method defaults
    args = {**pose_model.overrides, **custom}
    pose_model.predictor = pose_model._smart_load("predictor")(overrides=args, _callbacks=pose_model.callbacks)
    pose_model.predictor.setup_model(model=pose_model.model)

pose_model.predictor.model.ov_compiled_model = pose_compiled_model  # attach the compiled OpenVINO model

res = pose_model(IMAGE_PATH)  # run inference and get the results
Image.fromarray(res[0].plot()[:, :, ::-1])  # draw the results
```
Output:
image 1/1 /content/data/intel_rnb.jpg: 640x640 1 person, 187.4ms
Speed: 3.4ms preprocess, 187.4ms inference, 1.4ms postprocess per image at shape (1, 3, 640, 640)
![yolov11_2](https://hackmd.io/_uploads/rkk-wSAJJl.png =320x)
## 3. Inference After NNCF Compression and Optimization
[NNCF](https://github.com/openvinotoolkit/nncf) is OpenVINO's main tool for model optimization. It offers several compression and optimization techniques; here only parameter quantization is used, i.e. converting the floating-point weights to INT8.
### 3.1 Choose Whether to Quantize
The default value of to_quantize is True, i.e. quantization is enabled.
```python=
import ipywidgets as widgets

int8_model_pose_path = Path(f"{POSE_MODEL_NAME}_openvino_int8_model/{POSE_MODEL_NAME}.xml")
quantized_pose_model = None

to_quantize = widgets.Checkbox(
    value=True,
    description="Quantization",
    disabled=False,
)

to_quantize
```
### 3.2 Handling the Case Where Quantization Is Disabled
If quantization is not enabled, the skip_kernel_extension module is used to skip the related cells.
```python=
# Fetch skip_kernel_extension module
r = requests.get(
    url="https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/latest/utils/skip_kernel_extension.py",
)
open("skip_kernel_extension.py", "w").write(r.text)
%load_ext skip_kernel_extension
```
### 3.3 Prepare the Dataset Required by NNCF
Download the calibration data and define a transform function to build the dataset NNCF needs (a small sanity check is sketched after the code below).
This step takes quite a while, so please be patient.
```python=
%%skip not $to_quantize.value

import nncf
from typing import Dict
from zipfile import ZipFile

from ultralytics.data.utils import DATASETS_DIR
from ultralytics.utils import DEFAULT_CFG
from ultralytics.cfg import get_cfg
from ultralytics.data.utils import check_det_dataset
from ultralytics.models.yolo.pose import PoseValidator
from ultralytics.utils.metrics import OKS_SIGMA

if not int8_model_pose_path.exists():
    DATA_URL = "https://ultralytics.com/assets/coco8-pose.zip"
    CFG_URL = "https://raw.githubusercontent.com/ultralytics/ultralytics/v8.1.0/ultralytics/cfg/datasets/coco8-pose.yaml"

    OUT_DIR = DATASETS_DIR

    DATA_PATH = OUT_DIR / "val2017.zip"
    CFG_PATH = OUT_DIR / "coco8-pose.yaml"

    download_file(DATA_URL, DATA_PATH.name, DATA_PATH.parent)
    download_file(CFG_URL, CFG_PATH.name, CFG_PATH.parent)

    if not (OUT_DIR / "coco8-pose/labels").exists():
        with ZipFile(DATA_PATH, "r") as zip_ref:
            zip_ref.extractall(OUT_DIR)

    args = get_cfg(cfg=DEFAULT_CFG)
    args.data = "coco8-pose.yaml"

    pose_validator = PoseValidator(args=args)
    pose_validator.data = check_det_dataset(args.data)
    pose_validator.stride = 32
    pose_data_loader = pose_validator.get_dataloader(OUT_DIR / "coco8-pose", 1)

    pose_validator.is_coco = True
    pose_validator.names = label_map
    pose_validator.metrics.names = pose_validator.names
    pose_validator.nc = 1
    pose_validator.sigma = OKS_SIGMA

    def transform_fn(data_item: Dict):
        """
        Quantization transform function. Extracts and preprocess input data from dataloader item for quantization.
        Parameters:
            data_item: Dict with data item produced by DataLoader during iteration
        Returns:
            input_tensor: Input data for quantization
        """
        input_tensor = pose_validator.preprocess(data_item)["img"].numpy()
        return input_tensor

    quantization_dataset = nncf.Dataset(pose_data_loader, transform_fn)
```
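To see exactly what NNCF will receive during calibration, you can feed one batch from the dataloader through the transform function. This optional check is not part of the original notebook and only works if the cell above actually built the dataset (i.e. the INT8 model did not exist yet).
```python=
%%skip not $to_quantize.value
# Optional sanity check (not in the original notebook). Requires that the calibration
# dataset was just built above, i.e. the INT8 model did not already exist.
sample = next(iter(pose_data_loader))   # one batch dict from the Ultralytics dataloader
print(transform_fn(sample).shape)       # the tensor shape NNCF will see, e.g. (1, 3, 640, 640)
```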
### 3.4 Quantize the Model with NNCF
Run the quantization to produce a new INT8 model (a file-size comparison is sketched after the code below).
This step also takes quite a while, so please be patient.
```python=
%%skip not $to_quantize.value

if not int8_model_pose_path.exists():
    ignored_scope = nncf.IgnoredScope(  # post-processing
        subgraphs=[
            nncf.Subgraph(
                inputs=[
                    f"__module.model.{22 if 'v8' in POSE_MODEL_NAME else 23}/aten::cat/Concat",
                    f"__module.model.{22 if 'v8' in POSE_MODEL_NAME else 23}/aten::cat/Concat_1",
                    f"__module.model.{22 if 'v8' in POSE_MODEL_NAME else 23}/aten::cat/Concat_2",
                    f"__module.model.{22 if 'v8' in POSE_MODEL_NAME else 23}/aten::cat/Concat_7",
                ],
                outputs=[f"__module.model.{22 if 'v8' in POSE_MODEL_NAME else 23}/aten::cat/Concat_9"],
            )
        ]
    )

    # Pose estimation model
    quantized_pose_model = nncf.quantize(
        pose_ov_model,
        quantization_dataset,
        preset=nncf.QuantizationPreset.MIXED,
        ignored_scope=ignored_scope,
    )

    print(f"Quantized keypoint detection model will be saved to {int8_model_pose_path}")
    ov.save_model(quantized_pose_model, str(int8_model_pose_path))
```
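A simple way to see what quantization buys on disk is to compare the size of the .bin weight files before and after. This optional sketch is not part of the original notebook and assumes both exports completed.
```python=
# Optional (not in the original notebook): compare the weight-file sizes of the
# floating-point IR and the INT8 IR generated above.
fp_bin = pose_model_path.with_suffix(".bin")
int8_bin = int8_model_pose_path.with_suffix(".bin")

if fp_bin.exists() and int8_bin.exists():
    print(f"Floating-point model: {fp_bin.stat().st_size / 2**20:.2f} MB")
    print(f"INT8 model:           {int8_bin.stat().st_size / 2**20:.2f} MB")
```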
### 3.5 Test a Single Image
The test image from step 1.3 is reused to check whether the NNCF-quantized model still infers correctly.
The result is correct and the accuracy stays close to the original level.
```python=
%%skip not $to_quantize.value
device
```

```python=
%%skip not $to_quantize.value
if quantized_pose_model is None:
    quantized_pose_model = core.read_model(int8_model_pose_path)

ov_config = {}
if device.value != "CPU":
    quantized_pose_model.reshape({0: [1, 3, 640, 640]})
if "GPU" in device.value or ("AUTO" in device.value and "GPU" in core.available_devices):
    ov_config = {"GPU_DISABLE_WINOGRAD_CONVOLUTION": "YES"}
quantized_pose_compiled_model = core.compile_model(quantized_pose_model, device.value, ov_config)

pose_model = YOLO(pose_model_path.parent, task="pose")

if pose_model.predictor is None:
    custom = {"conf": 0.25, "batch": 1, "save": False, "mode": "predict"}  # method defaults
    args = {**pose_model.overrides, **custom}
    pose_model.predictor = pose_model._smart_load("predictor")(overrides=args, _callbacks=pose_model.callbacks)
    pose_model.predictor.setup_model(model=pose_model.model)

pose_model.predictor.model.ov_compiled_model = quantized_pose_compiled_model

res = pose_model(IMAGE_PATH)
display(Image.fromarray(res[0].plot()[:, :, ::-1]))
```
Output:
image 1/1 /content/data/intel_rnb.jpg: 640x640 1 person, 285.1ms
Speed: 4.3ms preprocess, 285.1ms inference, 2.2ms postprocess per image at shape (1, 3, 640, 640)
![yolov11_3](https://hackmd.io/_uploads/Hk6LYrCkye.png =320x)
## 4. Comparison with the Benchmark Tool
### 4.1 Running the Benchmark Tool
[Benchmark Tool](https://docs.openvino.ai/2024/learn-openvino/openvino-samples/benchmark-tool.html) is a utility provided by OpenVINO for sustained performance testing. You can choose the inference device and the run duration (60 seconds by default; add -t to set the number of seconds), and it reports how many inferences per second were achieved along with the minimum, maximum and average latency per inference, which makes performance comparisons straightforward. This differs slightly from single-image testing in that shared overheads such as model loading and compilation are excluded.
When running locally, you can also set the device to GPU for comparison. (An optional latency-oriented run is sketched after the device cell below.)
```python=
%%skip not $to_quantize.value
device
```
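benchmark_app has many more options than those used in the next two cells; for example, it can run in a latency-oriented mode instead of the default throughput-oriented one. The command below is only an optional illustration using standard benchmark_app flags and is not part of the original notebook.
```python=
# Optional illustration (not in the original notebook): a latency-oriented run of the
# floating-point model, optimizing for per-inference time rather than throughput.
!benchmark_app -m $pose_model_path -d $device.value -hint latency -shape "[1,3,640,640]" -t 15
```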
### 4.2 Inference with the FP32 Model
```python=
if int8_model_pose_path.exists():
    # Inference FP32 model (OpenVINO IR)
    !benchmark_app -m $pose_model_path -d $device.value -api async -shape "[1,3,640,640]" -t 15
```
Partial output:
…
[ INFO ] First inference took 179.61 ms
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices:['CPU']
[ INFO ] Count: 106 iterations
[ INFO ] Duration: 15499.93 ms
[ INFO ] Latency:
[ INFO ] Median: 257.80 ms
[ INFO ] Average: 292.03 ms
[ INFO ] Min: 231.82 ms
[ INFO ] Max: 484.45 ms
[ INFO ] Throughput: 6.84 FPS
### 4.3 Inference with the INT8 Model
```python=
if int8_model_pose_path.exists():
    # Inference INT8 model (OpenVINO IR)
    !benchmark_app -m $int8_model_pose_path -d $device.value -api async -shape "[1,3,640,640]" -t 15
```
Partial output:
…
[ INFO ] First inference took 125.02 ms
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices:['CPU']
[ INFO ] Count: 150 iterations
[ INFO ] Duration: 15198.45 ms
[ INFO ] Latency:
[ INFO ] Median: 170.15 ms
[ INFO ] Average: 202.29 ms
[ INFO ] Min: 145.57 ms
[ INFO ] Max: 588.30 ms
[ INFO ] Throughput: 9.87 FPS
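Summarizing the two runs above (numbers copied from the benchmark outputs; they will vary from run to run on Colab):

| Model | Median latency | Average latency | Throughput |
| --- | --- | --- | --- |
| Floating-point IR (4.2) | 257.80 ms | 292.03 ms | 6.84 FPS |
| INT8 IR (4.3) | 170.15 ms | 202.29 ms | 9.87 FPS |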
Comparing the results before and after quantization, the quantized model clearly infers considerably faster. However, probably because of resource sharing in the Colab environment, the gap between the maximum and minimum latency is quite large; running locally, the numbers should fluctuate far less and inference should be more stable.
## 5. Continuous Video Inference Demo
### 5.1 Define the Inference Function
The run_keypoint_detection function covers model loading, frame capture, inference and drawing the results.
```python=
import collections
import time
from IPython import display
import cv2
import numpy as np


def run_keypoint_detection(
    source=0,
    flip=False,
    use_popup=False,
    skip_first_frames=0,
    model=pose_ov_model,
    device=device.value,
):
    player = None

    ov_config = {}
    if device != "CPU":
        model.reshape({0: [1, 3, 640, 640]})
    if "GPU" in device or ("AUTO" in device and "GPU" in core.available_devices):
        ov_config = {"GPU_DISABLE_WINOGRAD_CONVOLUTION": "YES"}
    compiled_model = core.compile_model(model, device, ov_config)

    if pose_model.predictor is None:
        custom = {"conf": 0.25, "batch": 1, "save": False, "mode": "predict"}  # method defaults
        args = {**pose_model.overrides, **custom}
        pose_model.predictor = pose_model._smart_load("predictor")(overrides=args, _callbacks=pose_model.callbacks)
        pose_model.predictor.setup_model(model=pose_model.model)

    pose_model.predictor.model.ov_compiled_model = compiled_model

    try:
        # Create a video player to play with target fps.
        player = VideoPlayer(source=source, flip=flip, fps=30, skip_first_frames=skip_first_frames)
        # Start capturing.
        player.start()
        if use_popup:
            title = "Press ESC to Exit"
            cv2.namedWindow(winname=title, flags=cv2.WINDOW_GUI_NORMAL | cv2.WINDOW_AUTOSIZE)

        processing_times = collections.deque()
        while True:
            # Grab the frame.
            frame = player.next()
            if frame is None:
                print("Source ended")
                break
            # If the frame is larger than full HD, reduce size to improve the performance.
            scale = 1280 / max(frame.shape)
            if scale < 1:
                frame = cv2.resize(
                    src=frame,
                    dsize=None,
                    fx=scale,
                    fy=scale,
                    interpolation=cv2.INTER_AREA,
                )
            # Get the results.
            input_image = np.array(frame)

            start_time = time.time()
            detections = pose_model(input_image)
            stop_time = time.time()
            frame = detections[0].plot()

            processing_times.append(stop_time - start_time)
            # Use processing times from last 200 frames.
            if len(processing_times) > 200:
                processing_times.popleft()

            _, f_width = frame.shape[:2]
            # Mean processing time [ms].
            processing_time = np.mean(processing_times) * 1000
            fps = 1000 / processing_time
            cv2.putText(
                img=frame,
                text=f"Inference time: {processing_time:.1f}ms ({fps:.1f} FPS)",
                org=(20, 40),
                fontFace=cv2.FONT_HERSHEY_COMPLEX,
                fontScale=f_width / 1000,
                color=(0, 0, 255),
                thickness=1,
                lineType=cv2.LINE_AA,
            )
            # Use this workaround if there is flickering.
            if use_popup:
                cv2.imshow(winname=title, mat=frame)
                key = cv2.waitKey(1)
                # escape = 27
                if key == 27:
                    break
            else:
                # Encode numpy array to jpg.
                _, encoded_img = cv2.imencode(ext=".jpg", img=frame, params=[cv2.IMWRITE_JPEG_QUALITY, 100])
                # Create an IPython image.
                i = display.Image(data=encoded_img)
                # Display the image in this notebook.
                display.clear_output(wait=True)
                display.display(i)
    # ctrl-c
    except KeyboardInterrupt:
        print("Interrupted")
    # any different error
    except RuntimeError as e:
        print(e)
    finally:
        if player is not None:
            # Stop capturing.
            player.stop()
        if use_popup:
            cv2.destroyAllWindows()
```
### 5.2 Run Pose Estimation on a Video
Call run_keypoint_detection to perform inference; the video source, inference device and model can be changed as needed.
* When running locally, set source to 0 to use the first webcam on your machine (desktop or laptop); if you have several cameras, use 1 to N.
* If the displayed output flickers, you can set use_popup to True to show the result in a pop-up window instead (this only works when running locally, not in Colab).
* To test other videos, devices (CPU / GPU / AUTO) or models (pose_ov_model / quantized_pose_model), simply edit the parameters and re-run this cell alone; there is no need to re-run the whole notebook. A variant call is sketched after the output below.
```python=
VIDEO_SOURCE = "https://storage.openvinotoolkit.org/repositories/openvino_notebooks/data/data/video/people.mp4"

device

run_keypoint_detection(
    source=VIDEO_SOURCE,
    flip=True,
    use_popup=False,
    model=pose_ov_model,  # pose_ov_model, quantized_pose_model
    device=device.value,
)
```
Output:
![yolov11_4](https://hackmd.io/_uploads/BkbqOLA11x.jpg)
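As noted in the bullet list above, the same cell can be re-run with different parameters. A minimal variant, assuming quantization was enabled in section 3 (so quantized_pose_model exists) and that you are running locally with a webcam, might look like this:
```python=
# Variant sketch (not in the original notebook): INT8 model + first local webcam.
# Only works when running locally; quantized_pose_model exists only if section 3 ran.
run_keypoint_detection(
    source=0,               # 0 = first local webcam; keep the video URL when on Colab
    flip=True,
    use_popup=True,         # the pop-up window only works locally, not in Colab
    model=quantized_pose_model,
    device=device.value,
)
```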
## Summary
Seeing such a long program you might feel like giving up, but after reading through this article you will find that all you really do is choose a model (original or quantized), pick an inference device, and supply an image or video to test; the code hardly needs to be changed, so it is quite easy to get started. After being processed by OpenVINO and NNCF, the original YOLOv11 pose estimation model infers much faster, which helps with edge-device applications. Judging from the outputs (bounding boxes, skeleton keypoints and so on), the smaller model has not become unreliable, and even very small figures are detected correctly, so it is well worth using. Another benefit of OpenVINO is that future hardware upgrades do not require rewriting the program; you only need to reselect the inference device, meaning that when better CPUs, GPUs or NPUs arrive, performance improves automatically without redevelopment. If you are interested, why not give it a try right away?
## References
[1] Ultralytics, YOLOv11
https://docs.ultralytics.com/models/yolo11/
[2] GitHub, ultralytics/ultralytics (YOLOv11)
https://github.com/ultralytics/ultralytics
[3] Ultralytics, YOLO Vision 2024
https://www.ultralytics.com/zh/events/yolovision
[4] Intel, OpenVINO™ Notebooks at GitHub Pages
https://openvinotoolkit.github.io/openvino_notebooks/
[5] Intel, OpenVINO Document - Learn OpenVINO - Interactive Tutorials(Python) - Convert and Optimize YOLOv11 keypoint detection model with OpenVINO
https://docs.openvino.ai/2024/notebooks/yolov11-keypoint-detection-with-output.html
[6] Intel, GitHub OpenVINO Notebooks - yolov11-keypoint-detection
https://colab.research.google.com/github/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/yolov11-optimization/yolov11-keypoint-detection.ipynb
## Further Reading
[A] Intel OpenVINO Document - Convert and Optimize YOLOv11 real-time object detection with OpenVINO™
https://docs.openvino.ai/2024/notebooks/yolov11-object-detection-with-output.html
[B] Intel OpenVINO Document - Convert and Optimize YOLOv11 instance segmentation model with OpenVINO™
https://docs.openvino.ai/2024/notebooks/yolov11-instance-segmentation-with-output.html
**This article is also published on [Taiwan vMaker (台灣自造者)](https://vmaker.tw/archives/category/%e5%b0%88%e6%ac%84/jack-omnixri)**
---
Compiled and produced by OmniXRI. Likes, bookmarks, subscriptions, comments and shares are all welcome.
###### tags: `vMaker` `Edge AI`