# Contents
# Anaconda env
## Step.1
### Setting up the environment
#### Open the Anaconda interface and launch the Powershell Prompt

#### Copy the command below (for reference)
#### - `--name` sets the name of the environment to create
#### - `python=` sets the Python version to install
```bash
conda create --name yolov7 python=3.9
```

## Step.2
### Check the GPU and driver version
```bash
nvidia-smi
```

### Look up which CUDA versions your driver supports
CUDA release notes: https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html

## Step.3
### Find a matching PyTorch build via the link below
PyTorch previous versions: https://pytorch.org/get-started/previous-versions/

# LabelImg 標註介紹
## Step.1
**Open Anaconda, select the environment you created, and launch its Powershell Prompt.**
**Change into the labelImg-master folder**

and run:
```bash
python .\labelImg.py
```

## Step.2
**First, in the upper-left of labelImg, click:**
#### ⓵ Open Dir and select your image folder
#### ⓶ Change Save Dir and point it at the same folder
#### ⓷ Switch the save format to YOLO (labels are saved as .txt files)

## Step.3
Press the W hotkey to draw a bounding box around each feature in the photo.
**Then enable Auto Save mode under the View menu.**

**When all photos are annotated, close labelImg.
The folder will now contain classes.txt and one .txt label file per image.
That is all the material needed for training.** ✔️
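Each line in a YOLO-format label file stores `class x_center y_center width height`, with all four coordinates normalized to the image size. As a sanity check, here is a minimal sketch (the label line and image size are made-up examples) that converts one line back to pixel coordinates:

```python
# Convert one YOLO-format label line back to a pixel bounding box.
# The label line and image size below are made-up examples.

def yolo_to_pixels(line, img_w, img_h):
    """Parse 'class xc yc w h' (normalized) into (cls, x1, y1, x2, y2) in pixels."""
    cls, xc, yc, w, h = line.split()
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    return int(cls), xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2

box = yolo_to_pixels("0 0.5 0.5 0.25 0.25", 640, 480)
print(box)  # (0, 240.0, 180.0, 400.0, 300.0)
```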

# Yolov7 Install
## Step.1
**CUDA version: 12.1**
**PyTorch version: 2.3.1**
<font color="#f00">This version combination is relatively stable</font>
```
conda install pytorch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 pytorch-cuda=12.1 -c pytorch -c nvidia
```
After the install finishes, open the Powershell Prompt and run:
```
python
>>> import torch
>>> torch.cuda.is_available()
>>> torch.cuda.get_device_name(0)
```

**If `is_available()` returns `True` and the device name prints, the installation succeeded!**
## Step.2
**Run the following command:**
```
git clone https://github.com/WongKinYiu/yolov7.git
```

**Open the yolov7 folder and locate requirements.txt.**
<font color="#f00">Comment out lines 11 and 12 (the torch and torchvision entries), as shown below</font>
```text
# Usage: pip install -r requirements.txt
# Base ----------------------------------------
matplotlib>=3.2.2
numpy>=1.18.5,<1.24.0
opencv-python>=4.1.1
Pillow>=7.1.2
PyYAML>=5.3.1
requests>=2.23.0
scipy>=1.4.1
#torch>=1.7.0,!=1.12.0
#torchvision>=0.8.1,!=0.13.0
tqdm>=4.41.0
protobuf<4.21.3
# Logging -------------------------------------
tensorboard>=2.4.1
# wandb
# Plotting ------------------------------------
pandas>=1.1.4
seaborn>=0.11.0
# Export --------------------------------------
# coremltools>=4.1 # CoreML export
# onnx>=1.9.0 # ONNX export
# onnx-simplifier>=0.3.6 # ONNX simplifier
# scikit-learn==0.19.2 # CoreML quantization
# tensorflow>=2.4.1 # TFLite export
# tensorflowjs>=3.9.0 # TF.js export
# openvino-dev # OpenVINO export
# Extras --------------------------------------
ipython # interactive notebook
psutil # system utilization
thop # FLOPs computation
# albumentations>=1.0.3
# pycocotools>=2.0 # COCO mAP
# roboflow
```
## Step.3
#### Install the requirements
#### In the Powershell Prompt, run:
```
pip install -r requirements.txt
```

#### Once done, run a first yolov7 inference to verify the setup
```
python detect.py
```
#### If the detection run completes as below, the setup works

## Step.4
#### Training a custom model
#### Create a mydataset folder inside the yolov7 folder
#### and an all folder inside it (see the layout below)
#### Internal structure of the mydataset folder:
```
mydataset/
├─ all/
└─ splitFile.py
```
#### The code for splitFile.py:
```python
import os
import sys
import shutil
from random import sample
from collections import OrderedDict

# Number of validation images, passed on the command line
if len(sys.argv) < 2:
    print('input val num')
    exit()
valNum = int(sys.argv[1])

# Collect file names from all/ and detect the image extension
lst = os.listdir('all')
lst.remove('classes.txt')
for f in lst:
    if 'txt' not in f:
        extension = f.split('.')[-1]
        break

# Unique base names; randomly sample valNum of them for validation
names = [i.split('.')[0] for i in lst]
names = list(OrderedDict.fromkeys(names))
valNames = sorted(sample(names, valNum))
names = sorted(list(set(names).difference(set(valNames))))

# Create the images/ and labels/ train/val folders
paths = [os.path.join('images', 'train'),
         os.path.join('images', 'val'),
         os.path.join('labels', 'train'),
         os.path.join('labels', 'val')]
for p in paths:
    os.makedirs(p)

# Copy training images and labels, recording absolute image paths
trainPath = []
for fname in names:
    orgImgPath = os.path.join('all', f'{fname}.{extension}')
    newImgPath = os.path.join(paths[0], f'{fname}.{extension}')
    trainPath.append(os.path.abspath(newImgPath) + '\n')
    orgTxtPath = os.path.join('all', f'{fname}.txt')
    newTxtPath = os.path.join(paths[2], f'{fname}.txt')
    shutil.copy(orgImgPath, newImgPath)
    shutil.copy(orgTxtPath, newTxtPath)

# Copy validation images and labels
valPath = []
for fname in valNames:
    orgImgPath = os.path.join('all', f'{fname}.{extension}')
    newImgPath = os.path.join(paths[1], f'{fname}.{extension}')
    valPath.append(os.path.abspath(newImgPath) + '\n')
    orgTxtPath = os.path.join('all', f'{fname}.txt')
    newTxtPath = os.path.join(paths[3], f'{fname}.txt')
    shutil.copy(orgImgPath, newImgPath)
    shutil.copy(orgTxtPath, newTxtPath)

# Write the image lists and copy the class names
with open('train.txt', 'w') as f:
    f.writelines(trainPath)
with open('val.txt', 'w') as f:
    f.writelines(valPath)
shutil.copy(os.path.join('all', 'classes.txt'), 'classes.names')
```
Change into the mydataset folder and run the command below.
<font color="#f00">Replace ___ with the number of validation images</font>
```bash
python splitFile.py ___
```
#### Then create yolov7_custom.yaml in yolov7/cfg/training:
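The core of splitFile.py is `random.sample`: it draws the validation names at random, and whatever remains becomes the training set. Here is a standalone sketch of that split logic (the file names and the `split_names` helper are illustrative, not part of the repo):

```python
import random

def split_names(names, val_num, seed=None):
    """Randomly pick val_num names for validation; the rest become training names."""
    rng = random.Random(seed)
    val = sorted(rng.sample(names, val_num))
    train = sorted(set(names) - set(val))
    return train, val

names = [f"img{i:03d}" for i in range(10)]
train, val = split_names(names, 2, seed=0)
print(len(train), len(val))  # 8 2
```

Every name lands in exactly one of the two lists, which is what keeps the train and validation sets disjoint.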
```yaml
# parameters
nc: 1  # number of classes
depth_multiple: 1.0  # model depth multiple
width_multiple: 1.0  # layer channel multiple

# anchors
anchors:
  - [12,16, 19,36, 40,28]  # P3/8
  - [36,75, 76,55, 72,146]  # P4/16
  - [142,110, 192,243, 459,401]  # P5/32

# yolov7 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [32, 3, 1]],  # 0
   [-1, 1, Conv, [64, 3, 2]],  # 1-P1/2
   [-1, 1, Conv, [64, 3, 1]],
   [-1, 1, Conv, [128, 3, 2]],  # 3-P2/4
   [-1, 1, Conv, [64, 1, 1]],
   [-2, 1, Conv, [64, 1, 1]],
   [-1, 1, Conv, [64, 3, 1]],
   [-1, 1, Conv, [64, 3, 1]],
   [-1, 1, Conv, [64, 3, 1]],
   [-1, 1, Conv, [64, 3, 1]],
   [[-1, -3, -5, -6], 1, Concat, [1]],
   [-1, 1, Conv, [256, 1, 1]],  # 11
   [-1, 1, MP, []],
   [-1, 1, Conv, [128, 1, 1]],
   [-3, 1, Conv, [128, 1, 1]],
   [-1, 1, Conv, [128, 3, 2]],
   [[-1, -3], 1, Concat, [1]],  # 16-P3/8
   [-1, 1, Conv, [128, 1, 1]],
   [-2, 1, Conv, [128, 1, 1]],
   [-1, 1, Conv, [128, 3, 1]],
   [-1, 1, Conv, [128, 3, 1]],
   [-1, 1, Conv, [128, 3, 1]],
   [-1, 1, Conv, [128, 3, 1]],
   [[-1, -3, -5, -6], 1, Concat, [1]],
   [-1, 1, Conv, [512, 1, 1]],  # 24
   [-1, 1, MP, []],
   [-1, 1, Conv, [256, 1, 1]],
   [-3, 1, Conv, [256, 1, 1]],
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, -3], 1, Concat, [1]],  # 29-P4/16
   [-1, 1, Conv, [256, 1, 1]],
   [-2, 1, Conv, [256, 1, 1]],
   [-1, 1, Conv, [256, 3, 1]],
   [-1, 1, Conv, [256, 3, 1]],
   [-1, 1, Conv, [256, 3, 1]],
   [-1, 1, Conv, [256, 3, 1]],
   [[-1, -3, -5, -6], 1, Concat, [1]],
   [-1, 1, Conv, [1024, 1, 1]],  # 37
   [-1, 1, MP, []],
   [-1, 1, Conv, [512, 1, 1]],
   [-3, 1, Conv, [512, 1, 1]],
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, -3], 1, Concat, [1]],  # 42-P5/32
   [-1, 1, Conv, [256, 1, 1]],
   [-2, 1, Conv, [256, 1, 1]],
   [-1, 1, Conv, [256, 3, 1]],
   [-1, 1, Conv, [256, 3, 1]],
   [-1, 1, Conv, [256, 3, 1]],
   [-1, 1, Conv, [256, 3, 1]],
   [[-1, -3, -5, -6], 1, Concat, [1]],
   [-1, 1, Conv, [1024, 1, 1]],  # 50
  ]

# yolov7 head
head:
  [[-1, 1, SPPCSPC, [512]],  # 51
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [37, 1, Conv, [256, 1, 1]],  # route backbone P4
   [[-1, -2], 1, Concat, [1]],
   [-1, 1, Conv, [256, 1, 1]],
   [-2, 1, Conv, [256, 1, 1]],
   [-1, 1, Conv, [128, 3, 1]],
   [-1, 1, Conv, [128, 3, 1]],
   [-1, 1, Conv, [128, 3, 1]],
   [-1, 1, Conv, [128, 3, 1]],
   [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
   [-1, 1, Conv, [256, 1, 1]],  # 63
   [-1, 1, Conv, [128, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [24, 1, Conv, [128, 1, 1]],  # route backbone P3
   [[-1, -2], 1, Concat, [1]],
   [-1, 1, Conv, [128, 1, 1]],
   [-2, 1, Conv, [128, 1, 1]],
   [-1, 1, Conv, [64, 3, 1]],
   [-1, 1, Conv, [64, 3, 1]],
   [-1, 1, Conv, [64, 3, 1]],
   [-1, 1, Conv, [64, 3, 1]],
   [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
   [-1, 1, Conv, [128, 1, 1]],  # 75
   [-1, 1, MP, []],
   [-1, 1, Conv, [128, 1, 1]],
   [-3, 1, Conv, [128, 1, 1]],
   [-1, 1, Conv, [128, 3, 2]],
   [[-1, -3, 63], 1, Concat, [1]],
   [-1, 1, Conv, [256, 1, 1]],
   [-2, 1, Conv, [256, 1, 1]],
   [-1, 1, Conv, [128, 3, 1]],
   [-1, 1, Conv, [128, 3, 1]],
   [-1, 1, Conv, [128, 3, 1]],
   [-1, 1, Conv, [128, 3, 1]],
   [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
   [-1, 1, Conv, [256, 1, 1]],  # 88
   [-1, 1, MP, []],
   [-1, 1, Conv, [256, 1, 1]],
   [-3, 1, Conv, [256, 1, 1]],
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, -3, 51], 1, Concat, [1]],
   [-1, 1, Conv, [512, 1, 1]],
   [-2, 1, Conv, [512, 1, 1]],
   [-1, 1, Conv, [256, 3, 1]],
   [-1, 1, Conv, [256, 3, 1]],
   [-1, 1, Conv, [256, 3, 1]],
   [-1, 1, Conv, [256, 3, 1]],
   [[-1, -2, -3, -4, -5, -6], 1, Concat, [1]],
   [-1, 1, Conv, [512, 1, 1]],  # 101
   [75, 1, RepConv, [256, 3, 1]],
   [88, 1, RepConv, [512, 3, 1]],
   [101, 1, RepConv, [1024, 3, 1]],
   [[102,103,104], 1, IDetect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
```
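The `anchors` entries are (width, height) pairs in pixels, three per detection scale. During training, each ground-truth box is assigned to anchors of similar shape. A rough sketch of that matching idea (the P3 values are copied from the config above, the example box size is made up, and YOLOv7's actual assignment uses a width/height-ratio rule rather than this plain shape IoU):

```python
# Pick the anchor whose shape best matches a ground-truth box (width/height IoU).
# P3/8 anchor values come from the yolov7_custom.yaml above; the box is made up.
P3_ANCHORS = [(12, 16), (19, 36), (40, 28)]

def wh_iou(wh1, wh2):
    """IoU of two boxes compared by shape only (both anchored at the origin)."""
    inter = min(wh1[0], wh2[0]) * min(wh1[1], wh2[1])
    union = wh1[0] * wh1[1] + wh2[0] * wh2[1] - inter
    return inter / union

box = (20, 30)  # ground-truth width and height in pixels
best = max(P3_ANCHORS, key=lambda a: wh_iou(box, a))
print(best)  # (19, 36)
```

A tall-ish 20x30 box matches the tall 19x36 anchor best, which is why anchors that resemble your objects' shapes help training converge.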
#### Create mydata.yaml in yolov7/data (example):
```yaml
# train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/]
train: ./mydataset/train.txt
val: ./mydataset/val.txt
test: ./mydataset/test.txt

# number of classes
nc: 1

# class names
names: ['bud eye']
```
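Before launching training, it is worth checking that every image listed in train.txt has a matching label file. A small helper sketch (the function name and the images/ to labels/ path convention are assumptions based on the splitFile.py layout above):

```python
import os

def find_missing_labels(list_file):
    """Return image paths from a train/val list whose YOLO .txt label is missing.

    Assumes the splitFile.py layout: labels/<split>/ mirrors images/<split>/.
    """
    missing = []
    with open(list_file) as f:
        for img_path in (line.strip() for line in f if line.strip()):
            label_path = os.path.splitext(
                img_path.replace(os.sep + 'images' + os.sep,
                                 os.sep + 'labels' + os.sep))[0] + '.txt'
            if not os.path.isfile(label_path):
                missing.append(img_path)
    return missing
```

Running `find_missing_labels('train.txt')` from inside mydataset should return an empty list if the split ran cleanly.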
Then, from the yolov7 folder, run one of the following:

**yolov7 and yolov7x**
```bash
python train.py --workers 8 --device 0 --batch-size 8 --data .\data\mydata.yaml --img 640 640 --cfg .\cfg\training\yolov7_custom.yaml --hyp .\data\hyp.scratch.p5.yaml --weights .\yolov7.pt --epochs 100
```

**yolov7-w6 (and other variants)**
```bash
python train_aux.py --workers 8 --device 0 --batch-size 8 --data .\data\mydata.yaml --img 640 640 --cfg .\cfg\training\yolov7-w6.yaml --hyp .\data\hyp.scratch.p5.yaml --weights .\yolov7-w6.pt --epochs 100
```
<font color="#f00">When using a yolov7 variant, the loss function must be patched:</font>
https://drive.google.com/file/d/1tM-6DtT8qtdfKCnM4c11jXqHyeeWwRGq/view?usp=sharing
# Yolov8 Install
## Step 1.
#### First, create a YOLOv8 environment
#### then open the Powershell Prompt for that environment
```bash
conda create --name YOLOv8 python=3.8.20
```

## Step 2.
#### Install ultralytics
```bash
pip install ultralytics
```

## Step 3.
#### 1. PyTorch Install
#### Installing PyTorch via conda is recommended here; it succeeds more reliably than pip.
##### *conda command
```bash
# CUDA 12.1
conda install pytorch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 pytorch-cuda=12.1 -c pytorch -c nvidia
```
##### *pip command
```bash
# CUDA 12.1
pip install torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 --index-url https://download.pytorch.org/whl/cu121
```

#### 2. Installation process

#### 3. Verify that torch is installed

```
python
>>> import torch
>>> torch.cuda.is_available()
>>> torch.cuda.get_device_name(0)
```
## Step 4.
#### Test the model
(`nc`, the number of classes, only matters when training your own model in Step 5.)
```bash
yolo predict model=yolo11n.pt source='https://ultralytics.com/images/bus.jpg'
```

#### Congratulations, the installation is complete!
## Step 5.
#### Training your own model
First, open the ultralytics\cfg\models\v8 folder

```yaml
# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license

# Ultralytics YOLOv8 object detection model with P3/8 - P5/32 outputs
# Model docs: https://docs.ultralytics.com/models/yolov8
# Task docs: https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024] # YOLOv8n summary: 129 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPS
  s: [0.33, 0.50, 1024] # YOLOv8s summary: 129 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPS
  m: [0.67, 0.75, 768] # YOLOv8m summary: 169 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPS
  l: [1.00, 1.00, 512] # YOLOv8l summary: 209 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPS
  x: [1.00, 1.25, 512] # YOLOv8x summary: 209 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPS

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 12
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 15 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]] # 18 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]] # cat head P5
  - [-1, 3, C2f, [1024]] # 21 (P5/32-large)
  - [[15, 18, 21], 1, Detect, [nc]] # Detect(P3, P4, P5)
```
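The `scales` table reuses one architecture at several sizes: each layer's channel count is capped at `max_channels` and multiplied by `width`, while each block's repeat count is multiplied by `depth`. A simplified sketch of the width scaling (Ultralytics' real `make_divisible` logic differs in details; this only illustrates the arithmetic):

```python
import math

def scale_channels(c, width, max_channels, divisor=8):
    """Cap at max_channels, scale by width, round up to a multiple of divisor."""
    return int(math.ceil(min(c, max_channels) * width / divisor) * divisor)

# YOLOv8n (width 0.25, max_channels 1024): a 1024-channel layer becomes 256.
print(scale_channels(1024, 0.25, 1024))  # 256
# YOLOv8m (width 0.75, max_channels 768): the same layer becomes 576.
print(scale_channels(1024, 0.75, 768))  # 576
```

This is why picking `model=yolov8n.yaml` vs `yolov8m.yaml` changes parameter counts so dramatically without touching the layer list itself.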
### How to train the model
```bash
# Train the model
yolo task=detect mode=train model="D:\ultralytics\cfg\models\v8\yolov8.yaml" data="D:\ultralytics\mydata\mydata.yaml" epochs=100 batch=16
# Validate the model
yolo task=detect mode=val model="D:\ultralytics\runs\detect\train10\weights\best.pt" data="D:\ultralytics\mydata\mydata.yaml" batch=16
```
# Yolov9 Install
**Coming soon~**