# env_deploy (no-GPU environment)
### Download YOLOv7
``` cmd
git clone https://github.com/WongKinYiu/yolov7
```
### Download the weight file and place it in the yolov7 folder
https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
### Open CMD, change into the yolov7 directory, then run the following commands to install the required packages
```
cd yolov7
pip install -r requirements.txt
```
### Run a test
Place the source images in inference/images.
Run the command below; the results will appear in runs\detect\exp.
``` cmd
python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg --view-img
```
# Train a custom model
### 1. Create yolov7_custom.yaml in the following directory
C:\Users\michelin.jhong\Desktop\Yolo\keras_yolo\yolov7\cfg\training\yolov7_custom.yaml
`nc` is the number of classes the model detects.
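A minimal sketch of what yolov7_custom.yaml can look like: copy cfg/training/yolov7.yaml from the repo and change only `nc`. The two-class count below is an assumption.

```yaml
# yolov7_custom.yaml (sketch): identical to cfg/training/yolov7.yaml except nc
nc: 2  # number of classes (assumption: 2 custom classes)

# the depth/width multiples, anchors, backbone and head sections
# stay exactly as in the original yolov7.yaml
```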

### 2. Configure the dataset splits and class labels
C:\Users\michelin.jhong\Desktop\Yolo\keras_yolo\yolov7\data\mydata.yaml
This file defines the dataset splits and the class labels.
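A sketch of what mydata.yaml can contain; the paths and class names below are assumptions to adapt to your own dataset.

```yaml
# mydata.yaml (sketch): dataset splits and class labels
train: ./mydataset/train.txt   # list of training image paths
val: ./mydataset/val.txt       # list of validation image paths

nc: 2                          # must match nc in yolov7_custom.yaml
names: ['cat', 'dog']          # hypothetical class labels
```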

### 3. Prepare the data
The data is stored under the following path:
```
C:\Users\michelin.jhong\Desktop\Yolo\keras_yolo\yolov7\mydataset
```
Annotate the images with a labeling tool, e.g. makesense.ai.
YOLO format: one image + one .txt label file per image.
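Each line of a YOLO label .txt file stores `class x_center y_center width height`, all normalized to [0, 1]. A small sketch (the helper name and image size are illustrative) that converts one such line back to pixel coordinates:

```python
def yolo_to_pixels(line, img_w, img_h):
    """Convert one YOLO label line to (class_id, x1, y1, x2, y2) in pixels."""
    cls, xc, yc, w, h = line.split()
    # scale normalized center/size up to pixel units
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    # convert center + size to the two box corners
    return int(cls), int(xc - w / 2), int(yc - h / 2), int(xc + w / 2), int(yc + h / 2)

# e.g. a box centered in a 640x480 image, half its width and height:
print(yolo_to_pixels("0 0.5 0.5 0.5 0.5", 640, 480))  # (0, 160, 120, 480, 360)
```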

### 4. Split the dataset into training and validation sets
```
C:\Users\michelin.jhong\Desktop\Yolo\keras_yolo\yolov7\mydataset\程式.py n
```
`n` is the number of samples to hold out as validation data.
```python
# 程式.py — split the dataset in all/ into train/val sets
import os
import sys
import shutil
from random import sample
from collections import OrderedDict

if len(sys.argv) < 2:
    print('input val num')
    exit()
valNum = int(sys.argv[1])

lst = os.listdir('all')
lst.remove('classes.txt')

# determine the image file extension from the first non-.txt file
for f in lst:
    if 'txt' not in f:
        extension = f.split('.')[-1]
        break

names = [i.split('.')[0] for i in lst]
names = list(OrderedDict.fromkeys(names))   # de-duplicate, keep order
valNames = sorted(sample(names, valNum))    # random validation subset
names = sorted(list(set(names).difference(set(valNames))))

paths = [os.path.join('images', 'train'),
         os.path.join('images', 'val'),
         os.path.join('labels', 'train'),
         os.path.join('labels', 'val')]
for p in paths:
    os.makedirs(p)

trainPath = []
for fname in names:
    orgImgPath = os.path.join('all', f'{fname}.{extension}')
    newImgPath = os.path.join(paths[0], f'{fname}.{extension}')
    trainPath.append(os.path.abspath(newImgPath) + '\n')
    orgTxtPath = os.path.join('all', f'{fname}.txt')
    newTxtPath = os.path.join(paths[2], f'{fname}.txt')
    shutil.copy(orgImgPath, newImgPath)
    shutil.copy(orgTxtPath, newTxtPath)

valPath = []
for fname in valNames:
    orgImgPath = os.path.join('all', f'{fname}.{extension}')
    newImgPath = os.path.join(paths[1], f'{fname}.{extension}')
    valPath.append(os.path.abspath(newImgPath) + '\n')
    orgTxtPath = os.path.join('all', f'{fname}.txt')
    newTxtPath = os.path.join(paths[3], f'{fname}.txt')
    shutil.copy(orgImgPath, newImgPath)
    shutil.copy(orgTxtPath, newTxtPath)

with open('train.txt', 'w') as f:
    f.writelines(trainPath)
with open('val.txt', 'w') as f:
    f.writelines(valPath)
shutil.copy(os.path.join('all', 'classes.txt'), 'classes.names')
```
Running the script above produces the following layout under mydataset:
```
mydataset/
├── all/              # original images + label .txt files
├── images/train/     # training images
├── images/val/       # validation images
├── labels/train/     # training labels
├── labels/val/       # validation labels
├── train.txt         # absolute paths of the training images
├── val.txt           # absolute paths of the validation images
└── classes.names     # copied from all/classes.txt
```

# Training on Colab
## Mount Google Drive
```py
from google.colab import drive
drive.mount('/content/drive')
```
## Change directory
```python
%cd /content/drive/MyDrive/yolov7/yolov7
```
## Environment setup
```python
!pip install --upgrade setuptools pip --user
!pip install onnx
!pip install "coremltools>=4.1"
```
### Check versions
```py
import sys
import torch
print(f"Python version: {sys.version}, {sys.version_info} ")
print(f"Pytorch version: {torch.__version__} ")
```
### Clone the repository
```python
!nvidia-smi
# Download YOLOv7 code
!git clone https://github.com/WongKinYiu/yolov7
%cd yolov7
!ls
```
### Download pretrained weights
```py
# Download trained weights:
!wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
```
### Run detection
```py
!python detect.py --weights ./yolov7.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg
```
### Train the model
```python
!python train.py --workers 8 --device 0 --batch-size 8 --data ./data/mydata.yaml --img 640 640 --cfg ./cfg/training/yolov7_custom.yaml --hyp ./data/hyp.scratch.p5.yaml --weights ./yolov7.pt
```
```
--cfg         model configuration file
--data        dataset configuration file
--device 0    which device to train on (0 = first GPU; I have one GPU, so 0)
--batch-size  8 here; limited by GPU memory
--epochs      number of training epochs; 300 is recommended
--weights     pretrained weights to start from
```
### Use the model
``` cmd
python detect.py --weights yolov7x.pt --conf 0.25 --img-size 640 --source inference/images/
```
```
--weights    path to the weights file (relative to detect.py)
--source     path to the files to run inference on (relative to detect.py)
--img-size   network input image size, default 640 x 640
--project    where results are saved, default runs/detect
--name       run name; results go to e.g. runs/detect/exp
--save-txt   also save the detection results as text files
--save-conf  include the confidence score in the saved text files
```
# Other features
## Print the coordinates of detected objects
In detect.py, around line 113, find the call to plot_one_box.
Ctrl+click it to jump into general.py, which locates the plot_one_box function. Modify the function to:
```python
def plot_one_box(x, img, color=None, label=None, line_thickness=None):
    # Plots one bounding box on image img and prints its corner coordinates
    tl = line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1  # line/font thickness
    color = color or [random.randint(0, 255) for _ in range(3)]
    c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3]))
    cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA)
    print(f"top-left: ({c1[0]}, {c1[1]}), bottom-right: ({c2[0]}, {c2[1]})")
```
### Note
```
When training on Colab, the paths must be updated, and the image paths inside train.txt & val.txt must be set correctly.
```
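Since train.txt and val.txt were generated on Windows with absolute paths, one way to fix them for Colab is a small rewrite script. `rewrite_paths` and the Drive prefix shown are illustrative, not part of the repo; adjust the prefix to your own mount point.

```python
import re

def rewrite_paths(src, dst, new_prefix):
    """Replace the Windows directory prefix of every line with a Colab path."""
    with open(src) as f:
        lines = f.read().splitlines()
    fixed = []
    for line in lines:
        name = re.split(r'[\\/]', line)[-1]      # keep only the file name
        fixed.append(f"{new_prefix}/{name}")
    with open(dst, 'w') as f:
        f.write('\n'.join(fixed) + '\n')

# hypothetical usage on Colab:
# rewrite_paths('train.txt', 'train.txt',
#               '/content/drive/MyDrive/yolov7/yolov7/mydataset/images/train')
```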
# Reference:
https://ithelp.ithome.com.tw/articles/10306298
https://blog.eddie.tw/category/yolo-v7/
https://github.com/WongKinYiu/yolov7