# YOLOv11 Environment Setup
**1. Create a new Anaconda environment for YOLOv11 and activate it:**
First, activate the Miniconda environment:
```
source /storage_1/KF07453/miniconda3/bin/activate
```
Then create the yolo environment:
```
conda create -n yolov11 python==3.8
conda activate yolov11
```
#
**2. Clone the YOLOv11 GitHub repository**
```
git clone https://github.com/ultralytics/ultralytics.git
cd ultralytics
```
#
**3. Install the ultralytics package**
```
pip install ultralytics
```
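To confirm the install succeeded, ultralytics ships a built-in environment check:
```
import ultralytics

# Prints the ultralytics version plus Python, torch, and CUDA/GPU status
ultralytics.checks()
```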
#
**4. Download the YOLOv11 weights file**
`wget https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x.pt`
#
**5. Test the environment**
`yolo predict model=yolo11n.pt source='https://ultralytics.com/images/bus.jpg'`
Prediction results are saved in the ultralytics/runs/detect/predict folder (yolo11n.pt is downloaded automatically if not already present).
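The same test can also be run through the Ultralytics Python API; a minimal sketch:
```
from ultralytics import YOLO

# Load the small pretrained model (downloaded automatically if missing)
model = YOLO("yolo11n.pt")

# Run inference and save the annotated image under runs/detect/predict
results = model.predict(source="https://ultralytics.com/images/bus.jpg", save=True)
print(results[0].boxes)  # detected boxes: class, confidence, xyxy coordinates
```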
#
# YOLOv11 Training Data
**1. Create a dataset YAML file specifying the training data paths and object classes**
`touch test.yaml`
**2. Training data structure**
In the YAML file, **only the image paths need to be provided**; label paths are not required, since Ultralytics locates the labels by replacing `images` with `labels` in each image path.
```
duck_yolo
├── images
│   ├── train (train.png)
│   └── val (val.png)
└── labels
    ├── train (train.txt)
    └── val (val.txt)
```
**train:** training set
**val:** validation set
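If the data is not split yet, a small script along these lines can build this layout. This is only a sketch: `all_images` and `all_labels` are hypothetical flat folders holding every image/label pair.
```
import os
import random
import shutil

root = "duck_yolo"
src_images = os.path.join(root, "all_images")  # hypothetical source folder
src_labels = os.path.join(root, "all_labels")  # hypothetical source folder
random.seed(0)

images = sorted(os.listdir(src_images))
random.shuffle(images)
n_val = int(len(images) * 0.2)  # 80/20 train/val split

for split, names in [("val", images[:n_val]), ("train", images[n_val:])]:
    os.makedirs(os.path.join(root, "images", split), exist_ok=True)
    os.makedirs(os.path.join(root, "labels", split), exist_ok=True)
    for name in names:
        base = os.path.splitext(name)[0]
        shutil.copy(os.path.join(src_images, name),
                    os.path.join(root, "images", split, name))
        label = os.path.join(src_labels, base + ".txt")
        if os.path.exists(label):  # background images may have no label file
            shutil.copy(label, os.path.join(root, "labels", split))
```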
#
**3. Edit test.yaml**
Paste in the following content:
```
train: /storage_1/KF07453/duck_test/images/train
val: /storage_1/KF07453/duck_test/images/val
nc: 1
names: ['ducks']
```
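A quick sanity check that the YAML entries resolve to folders containing images (a sketch; PyYAML is installed as an ultralytics dependency):
```
import os
import yaml

with open("test.yaml") as f:
    cfg = yaml.safe_load(f)

for split in ("train", "val"):
    path = cfg[split]
    n = sum(p.lower().endswith((".png", ".jpg", ".jpeg")) for p in os.listdir(path))
    print(f"{split}: {path} -> {n} images")

# nc must match the number of class names
assert cfg["nc"] == len(cfg["names"])
```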
#
# YOLOv11 Training and Prediction
**1. Use the yolo train command**
```
yolo train model=yolo11x.pt data=test.yaml epochs=1000 \
batch=32 imgsz=1024 device=0,1,2,3
```
[Training settings](https://docs.ultralytics.com/modes/train/?h=setting#train-settings) can be adjusted as needed
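For reference, the same run can be launched from the Ultralytics Python API with the same arguments:
```
from ultralytics import YOLO

model = YOLO("yolo11x.pt")  # start from the pretrained detection weights
model.train(
    data="test.yaml",      # dataset definition from step 1
    epochs=1000,
    batch=32,
    imgsz=1024,
    device=[0, 1, 2, 3],   # train across four GPUs
)
```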
#
**2. Hyperparameter configuration**
Edit default.yaml directly; its path is printed (in grey) below the `yolo cfg` output:
```
Printing '/storage_1/KF07453/miniconda3/envs/yolov11/lib/python3.8/site-packages/ultralytics/cfg/default.yaml'
```
If left unmodified, training runs with the default parameters.
```
yolo cfg
vim path/to/yaml
```
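Alternatively, individual hyperparameters can be overridden per run without editing default.yaml; `lr0` and `momentum` below are standard entries from that file:
```
from ultralytics import YOLO

model = YOLO("yolo11x.pt")
# Per-run keyword overrides take precedence over default.yaml
model.train(data="test.yaml", epochs=100, lr0=0.005, momentum=0.9)
```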
#
**3. Run the training**
Training results are written to runs/detect/train
The model is saved as best.pt under runs/detect/train/weights/
The hyperparameter settings used can be reviewed in runs/detect/train/args.yaml
#
**4. Model prediction**
[Prediction settings](https://docs.ultralytics.com/modes/predict/#inference-sources) can be adjusted as needed
```
yolo predict model='path/to/best.pt' source='path/to/.mp4' \
imgsz=1024 show_conf=False show_labels=False
```
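The same prediction through the Python API (a sketch; the video path is a placeholder):
```
from ultralytics import YOLO

# Load the trained weights from step 3
model = YOLO("runs/detect/train/weights/best.pt")

# Annotated output is saved under runs/detect/predict
model.predict(
    source="path/to/video.mp4",  # placeholder path
    imgsz=1024,
    show_conf=False,
    show_labels=False,
    save=True,
)
```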
#
**5. Monitor training with TensorBoard**
(1) On your **local machine**, open cmd and forward local port 16006 to the server's port 6006:
```
ssh -L 16006:127.0.0.1:6006 <user>@<server ip>
```
For example:
```
ssh -L 16006:127.0.0.1:6006 KF07453@140.120.98.182
```
(2) In the **server-side** yolo environment (yolov11), install TensorBoard and enable it by default during model training:
```
pip install tensorboard
yolo settings tensorboard=True
```
(3) Run the YOLO training on the **server**:
```
yolo train data=... epochs=...
```
(4) Open **another server terminal** and point TensorBoard at the training output directory:
```
tensorboard --logdir /storage_1/KF07453/ultralytics/runs/detect/train
```
(5) On your **local machine**, open Google Chrome and browse to the following URL to reach TensorBoard:
```
http://localhost:16006/
```

#
# YOLOv11 Image Segmentation
**1. Activate the environment**
```
conda activate yolov11
cd ultralytics
```
#
**2. Download the weights file that fits your hardware and needs**
```
wget \
https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11x-seg.pt
```

#
**3. Annotate the training data**
Use [labelme.exe](https://github.com/santanu23/labelme-windows-10-exe/releases/tag/v4.2.5) for annotation:

* Top-left toolbar --> Open Dir: select the folder of images to annotate (images)

* Top toolbar --> File --> Change Output Dir: set the output folder for the annotation files (json_labels)

* Click Create Polygons in the top toolbar to start annotating

* Outline each target object with a polygon

Each output annotation file shares its image's file name, with a .json extension.
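For reference, the converter in the next step reads only a few fields from each labelme JSON file; a minimal illustrative example (values are made up):
```
# Minimal labelme JSON content, shown as a Python dict (illustrative values)
example = {
    "imageHeight": 1080,
    "imageWidth": 1920,
    "shapes": [
        {
            "shape_type": "polygon",
            "points": [[412.0, 220.5], [455.3, 198.0], [480.1, 260.7]],  # pixel x, y
        }
    ],
}
```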
#
**4. Convert the labelme JSON files into the txt format YOLO training expects**
Create format_conversion.py
`vim format_conversion.py`
Edit format_conversion.py and paste in the following code:
```
import json
import os
from glob import glob

def labelme_to_yolo_segmentation(json_dir, output_dir, class_id=0):
    os.makedirs(output_dir, exist_ok=True)
    json_files = glob(os.path.join(json_dir, "*.json"))
    for json_path in json_files:
        with open(json_path, 'r') as f:
            data = json.load(f)
        image_height = data['imageHeight']
        image_width = data['imageWidth']
        yolo_annotations = []
        for shape in data['shapes']:
            if shape['shape_type'] == 'polygon':  # Ensure it's a polygon
                polygon_points = shape['points']
                # Normalize polygon points to [0, 1] relative to image size
                normalized_points = []
                for x, y in polygon_points:
                    normalized_x = x / image_width
                    normalized_y = y / image_height
                    normalized_points.extend([normalized_x, normalized_y])
                # YOLO segmentation format: class_id x1 y1 x2 y2 ...
                annotation = f"{class_id} " + " ".join(map(str, normalized_points))
                yolo_annotations.append(annotation)
        # Save annotations to a corresponding txt file
        base_name = os.path.splitext(os.path.basename(json_path))[0]
        output_txt_path = os.path.join(output_dir, f"{base_name}.txt")
        with open(output_txt_path, 'w') as out_file:
            for annotation in yolo_annotations:
                out_file.write(annotation + "\n")

labelme_to_yolo_segmentation(
    json_dir='json_labels',
    output_dir='labels'
)
```
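Run the script from the dataset root (the folder containing json_labels/): `python format_conversion.py`. One .txt file per annotated image is written to labels/.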
#
**5. Training data structure**
Create pig-seg.yaml (the file referenced by the training command below)
`vim pig-seg.yaml`
As before, the YAML file **only needs the image paths**; the label paths are located automatically.
```
pig_yolo_segment
├── images
│   ├── train (train.png)
│   └── val (val.png)
└── labels
    ├── train (train.txt)
    └── val (val.txt)
```
**train:** training set
**val:** validation set
#
**6. Model training**
[Training settings](https://docs.ultralytics.com/modes/train/?h=setting#train-settings) can be adjusted as needed
```
yolo train data=/storage_1/KF07453/ultralytics/data/pig-seg.yaml \
model=yolo11x-seg.pt batch=32 device=0,1,2,3 imgsz=1024 epochs=1000
```
Training results are saved in the runs/segment/train folder
The model is saved as best.pt under runs/segment/train/weights
#
**7. Model prediction**
[Prediction settings](https://docs.ultralytics.com/modes/predict/#inference-sources) can be adjusted as needed
```
yolo predict imgsz=1024 source=20231125_07.avi \
model=/storage_1/KF07453/ultralytics/runs/segment/train4/weights/best.pt \
show_labels=False show_conf=False line_width=1
```
Prediction results are saved in the runs/segment/predict folder

#
**8. Export the segmented objects as images**
See this [guide](https://docs.ultralytics.com/zh/guides/isolating-segmentation-objects/); a condensed sketch follows.
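A condensed sketch of the approach from that guide, assuming trained segmentation weights and a placeholder image path:
```
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("runs/segment/train/weights/best.pt")  # trained weights (assumed path)
results = model.predict(source="path/to/image.png")  # placeholder image path

for r in results:
    img = np.copy(r.orig_img)
    # Iterate over the detected objects in this image
    for ci, c in enumerate(r):
        # Draw the object's mask polygon as a filled binary mask
        b_mask = np.zeros(img.shape[:2], np.uint8)
        contour = c.masks.xy[0].astype(np.int32).reshape(-1, 1, 2)
        cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED)
        # Keep only the masked pixels; the background becomes black
        isolated = cv2.bitwise_and(cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR), img)
        cv2.imwrite(f"isolated_{ci}.png", isolated)
```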