# Image Recognition
[TOC]
<style>
.flex-container{
display:flex;
justify-content:center;
}
</style>
## I. Environment Setup
### (1) Download and Install the Software
#### 1. Download the YOLOv5 source code from [here](https://github.com/ultralytics/yolov5).
:::warning
Python 3.6 or later is required.
:::
<div class="flex-container">
<img src="https://i.imgur.com/V69D7Wo.jpg"
width="100%"></div>
#### 2. Download Anaconda from the [official site](https://www.anaconda.com/products/individual).
<div class="flex-container">
<img src="https://i.imgur.com/nTtguQ0.jpg"
width="90%"></div>
:::warning
Make sure both checkboxes are selected!
:::
<div class="flex-container">
<img src="https://i.imgur.com/DRBo9sk.jpg"
width="70%"></div>
#### 3. Download the Community edition from the [PyCharm download page](https://www.jetbrains.com/pycharm/download/#section=windows).
<div class="flex-container">
<img src="https://i.imgur.com/rke0jeY.jpg"
width="100%"></div>
---
### (2) Create a Virtual Environment
#### 1. After Anaconda is installed, open the Environments tab and click Create.
<div class="flex-container">
<img src="https://i.imgur.com/in2bFQy.jpg"
width="100%"></div>
#### 2. Create and name the new environment (named "yolov5" here). Select Python 3.8 (the version installed earlier), then click Create.
<div class="flex-container">
<img src="https://i.imgur.com/pDLRID5.jpg"
width="90%"></div>
#### 3. Once step 2 succeeds, the new "yolov5" environment appears in the Environments tab.
<div class="flex-container">
<img src="https://i.imgur.com/N0LO40N.jpg"
width="100%"></div>
#### 4. Return to the Home tab and launch PyCharm Community.
:::warning
Make sure the active environment is "yolov5"!
:::
<div class="flex-container">
<img src="https://i.imgur.com/IuEiEDL.jpg"
width="100%"></div>
#### 5. In PyCharm, click Open.
<div class="flex-container">
<img src="https://i.imgur.com/1HGQRPN.jpg"
width="100%"></div>
#### 6. Select the yolov5-master folder.
<div class="flex-container">
<img src="https://i.imgur.com/FKYSvMw.jpg"
width="70%"></div>
#### 7. After PyCharm opens the project, open the File menu.
<div class="flex-container">
<img src="https://i.imgur.com/JAIJO0Z.jpg"
width="100%"></div>
#### 8. Click "Settings..." in the File menu.
<div class="flex-container">
<img src="https://i.imgur.com/DDk77Kf.jpg"
width="50%"></div>
#### 9. Find "Project", click "Python Interpreter", then click the gear icon.
<div class="flex-container">
<img src="https://i.imgur.com/wYUwAOt.jpg"
width="100%"></div>
#### 10. Choose "Add...".
<div class="flex-container">
<img src="https://i.imgur.com/cWyR8ou.jpg"
width="100%"></div>
#### 11. Under "Conda Environment", select "Existing environment", confirm the path points to the "yolov5" environment created earlier, then click "OK".
<div class="flex-container">
<img src="https://i.imgur.com/w09hh74.jpg"
width="100%"></div>
---
### (3) Install Common Tools
#### 1. On the Anaconda Home tab, install "CMD.exe Prompt".
<div class="flex-container">
<img src="https://i.imgur.com/W6q8xNt.jpg"
width="100%"></div>
#### 2. Launch "CMD.exe Prompt".
<div class="flex-container">
<img src="https://i.imgur.com/UPn7Xbb.jpg"
width="100%"></div>
#### 3. In "CMD.exe Prompt", run the following command.
```
conda install git
```
<div class="flex-container">
<img src="https://i.imgur.com/RRT0DMQ.jpg"
width="90%"></div>
#### 4. After the previous command finishes, run the next one.
```
conda install pip
```
<div class="flex-container">
<img src="https://i.imgur.com/03Xsdvz.jpg"
width="90%"></div>
:::info
Steps 3 and 4 both take some time to complete.
:::
#### 5. Back in Environments, select "yolov5" and update the installed packages.
<div class="flex-container">
<img src="https://i.imgur.com/at7DirS.jpg"
width="100%"></div>
#### 6. Install OpenCV.
<div class="flex-container">
<img src="https://i.imgur.com/aXHXz3P.jpg"
width="100%"></div>
---
### (4) Install CUDA & cuDNN
#### 1. Identify your computer's GPU.
<div class="flex-container">
<img src="https://i.imgur.com/3x3hY6M.jpg"
width="80%"></div>
#### 2. Update the [NVIDIA DRIVER](https://www.nvidia.com.tw/Download/index.aspx?lang=tw).
:::warning
Make sure the "Product Series" matches your own GPU.
:::
<div class="flex-container">
<img src="https://i.imgur.com/St7nIZf.jpg"
width="90%"></div>
#### 3. Check CUDA GPU support [here](https://en.wikipedia.org/wiki/CUDA).
:::warning
A higher compute capability number means faster computation; in practice, 3 or above is fine.
:::
<div class="flex-container">
<img src="https://i.imgur.com/wjFH9xo.jpg"
width="100%"></div>
#### 4. Download [CUDA 10.2](https://developer.nvidia.com/cuda-10.2-download-archive?target\_os=Windows&target\_arch=x86\_64&target\_version=10&target_type=exelocal).
<div class="flex-container">
<img src="https://i.imgur.com/BVaRRI8.jpg"
width="100%"></div>
#### 5. Install it under the C: drive.
<div class="flex-container">
<img src="https://i.imgur.com/d7tl9kM.jpg"
width="80%"></div>
#### 6. Download [cuDNN for CUDA 10.2](https://developer.nvidia.com/rdp/cudnn-download).
<div class="flex-container">
<img src="https://i.imgur.com/wu9DJ24.jpg"
width="100%"></div>
#### 7. Extract it to the C: drive.
<div class="flex-container">
<img src="https://i.imgur.com/TbqqIpz.jpg"
width="90%"></div>
#### 8. Open the "cuda" folder; its contents should look like the image below.
<div class="flex-container">
<img src="https://i.imgur.com/SeEEzaz.jpg"
width="90%"></div>
#### 9. Locate the path shown in the image below.
<div class="flex-container">
<img src="https://i.imgur.com/LjvZ9uU.jpg"
width="100%"></div>
#### 10. Copy the contents of the bin folder from step 8 into the bin folder from step 9.
<div class="flex-container">
<img src="https://i.imgur.com/CCh7Y11.jpg"
width="100%"></div>
#### 11. Copy the contents of the include folder from step 8 into the include folder from step 9.
<div class="flex-container">
<img src="https://i.imgur.com/rSayQl7.jpg"
width="100%"></div>
#### 12. Copy the contents of \lib\x64 from step 8 into \lib\x64 from step 9.
<div class="flex-container">
<img src="https://i.imgur.com/E7A3Mjk.jpg"
width="100%"></div>
#### 13. Add the environment variables.
<div class="flex-container">
<img src="https://i.imgur.com/5l3vLMt.jpg"
width="90%"></div>
#### 14. Click "New" and enter the following paths.
```
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\lib\x64
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\include
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\extras\CUPTI\lib64
C:\ProgramData\NVIDIA Corporation\CUDA Samples\v10.2\bin\win64
C:\ProgramData\NVIDIA Corporation\CUDA Samples\v10.2\common\lib\x64
```
<div class="flex-container">
<img src="https://i.imgur.com/B6nGNbC.jpg"
width="100%"></div>
#### 15. Confirm that the two system variables shown below exist.
<div class="flex-container">
<img src="https://i.imgur.com/jYOdv8N.jpg"
width="100%"></div>
#### 16. Open cmd and run the following command to verify the installation.
```
nvcc -V
```
<div class="flex-container">
<img src="https://i.imgur.com/s4dHkYr.jpg"
width="100%"></div>
---
### (5) Install PyTorch
:::warning
The PyTorch version must be 1.7.0 or later.
:::
#### 1. Using the commands from the [PyTorch previous-versions page](https://pytorch.org/get-started/previous-versions/), follow the steps below.
```
activate yolov5
```
<div class="flex-container">
<img src="https://i.imgur.com/YA9MW7w.jpg"
width="90%"></div>
#### 2. From the v1.7.1 section, copy the "CUDA 10.2" install command.
```
# CUDA 10.2
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.2 -c pytorch
```
<div class="flex-container">
<img src="https://i.imgur.com/OxZIbjA.jpg"
width="100%"></div>
:::info
This step takes some time to complete.
:::
#### 3. Open cmd and run the following commands to verify the installation. (They can be copied as-is.)
```
activate yolov5
python
import torch as t
t.__version__
```
<div class="flex-container">
<img src="https://i.imgur.com/AXBx4to.jpg"
width="100%"></div>
---
### (6) Install the requirements File
#### 1. In the "yolov5-master" folder, find the "requirements" file and copy the contents shown in the red box.
<div class="flex-container">
<img src="https://i.imgur.com/NxlZOMr.jpg"
width="100%"></div>
#### 2. Paste the copied contents into PyCharm's Terminal.
<div class="flex-container">
<img src="https://i.imgur.com/2BQ6ddR.jpg"
width="100%"></div>
---
### (7) Download the Weight Files
#### 1. Download the weight files from [here](https://github.com/ultralytics/yolov5/releases/tag/v5.0)
<div class="flex-container">
<img src="https://i.imgur.com/UIpC1Yi.jpg"
width="50%"></div>
#### 2. Place them in the "yolov5-master" folder
<div class="flex-container">
<img src="https://i.imgur.com/MPz4y6A.jpg"
width="100%"></div>
---
### (8) Testing
#### 1. Run `detect.py`
<div class="flex-container">
<img src="https://i.imgur.com/u4FmWIf.jpg"
width="100%"></div>
#### 2. Path where the test photos and videos are stored.
<div class="flex-container">
<img src="https://i.imgur.com/QUFkitn.jpg"
width="100%"></div>
#### 3. Path where the test results are stored.
<div class="flex-container">
<img src="https://i.imgur.com/F57bjTn.jpg"
width="100%"></div>
#### 4. Test results
a.
<div class="flex-container">
<img src="https://i.imgur.com/rpgmQLj.jpg"
width="80%"></div>
b.
<div class="flex-container">
<img src="https://i.imgur.com/hbkKJto.jpg"
width="70%"></div>
c.
<div class="flex-container">
<img src="https://i.imgur.com/h8fjlnB.jpg"
width="70%"></div>
d.
<div class="flex-container">
<img src="https://i.imgur.com/u4Fg37M.jpg"
width="70%"></div>
e.
<div class="flex-container">
<img src="https://i.imgur.com/5JqzAkY.jpg"
width="80%"></div>
#### 5. Test with a connected webcam
<div class="flex-container">
<img src="https://i.imgur.com/06YC8AI.jpg"
width="100%"></div>
---
## II. Training
### (1) Install pycocotools
#### 1. Download [Microsoft Visual C++ Build Tools 2015](https://www.microsoft.com/zh-TW/download/details.aspx?id=48159).
<div class="flex-container">
<img src="https://i.imgur.com/Pu7gBC3.jpg"
width="90%"></div>
#### 2. [Download the pycocotools .whl file](https://pypi.tuna.tsinghua.edu.cn/simple/pycocotools-windows/).
(Original source: https://github.com/cocodataset/cocoapi)
<div class="flex-container">
<img src="https://i.imgur.com/Y5pjswm.jpg"
width="70%"></div>
#### 3. Path where the file is stored.
<div class="flex-container">
<img src="https://i.imgur.com/3xRFeN1.jpg"
width="100%"></div>
#### 4. Open cmd and run the following commands.
```
activate yolov5
pip install C:\Users\Ching\anaconda3\lib\pycocotools_windows-2.0.0.2-cp38-cp38-win_amd64.whl
```
:::warning
The path after pip install is wherever pycocotools_windows-2.0.0.2-cp38-cp38-win_amd64.whl is stored.
:::
#### 5. Install scikit-image.
```
pip install scikit-image
```
<div class="flex-container">
<img src="https://i.imgur.com/UHNNQJb.jpg"
width="100%"></div>
#### 6. Install pycocotools.
```
pip install pycocotools
```
<div class="flex-container">
<img src="https://i.imgur.com/jFc5jbz.jpg"
width="100%"></div>
#### 7. Verify the installation.
Run `pip list` and confirm that pycocotools and scikit-image are listed.
<div class="flex-container">
<img src="https://i.imgur.com/AkzoYbU.jpg"
width="70%"></div>
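The same check can be scripted with the standard library (a minimal sketch; it only reports whether each package can be found by the import system, not its version):

```python
import importlib.util

# scikit-image is imported under the name "skimage"
for pkg in ["pycocotools", "skimage"]:
    found = importlib.util.find_spec(pkg) is not None
    print(pkg, "installed" if found else "missing")
```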
### (2) Install apex
#### 1. [Download the apex source](https://github.com/NVIDIA/apex).
<div class="flex-container">
<img src="https://i.imgur.com/ZxxB3uR.jpg"
width="100%"></div>
#### 2. Where the files are stored.
<div class="flex-container">
<img src="https://i.imgur.com/dyDonq7.jpg"
width="80%"></div>
#### 3. Open cmd and run the following commands.
```
activate yolov5
cd apex-master
python setup.py install
```
<div class="flex-container">
<img src="https://i.imgur.com/JwMvcVl.jpg"
width="100%"></div>
#### 4. Verify the installation.
Run `pip list`; if apex is listed, the installation succeeded.
<div class="flex-container">
<img src="https://i.imgur.com/rgbPCW8.jpg"
width="100%"></div>
### (3) Label the Dataset
#### 1. Create the folders.
<div class="flex-container">
<img src="https://i.imgur.com/nNGGrXe.jpg"
width="100%"></div>
:::info
Notes:
[data]
Annotations: the xml files produced by labeling; these are the detection labels, one xml file per image, with file names matching the image names.
images: the .jpg image files.
ImageSets: the train/test split of the dataset; contains train.txt, val.txt, trainval.txt, and test.txt.
labels: the txt files holding the annotation contents, one per image.
-----
[ImageSets] (an 8:1:1 split of train, val, and test is recommended)
train.txt: the names of the images used for training.
val.txt: the names of the images used for validation.
trainval.txt: the union of train and val.
test.txt: the names of the images used for testing.
:::
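The layout above can be created in one go (a minimal sketch; the relative `data` root is an assumption — the guide itself uses D:/yolov5-master/data):

```python
import os

# Create the data folder layout described above
for sub in ["Annotations", "images", "ImageSets", "labels"]:
    os.makedirs(os.path.join("data", sub), exist_ok=True)

print(sorted(os.listdir("data")))
```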
#### 2. Download the [Colabeler annotation tool](http://www.jinglingbiaozhu.com/).
<div class="flex-container">
<img src="https://i.imgur.com/vkjI2Qn.jpg"
width="100%"></div>
#### 3. Path where the photos are stored (D:/yolov5-master/data/images).
<div class="flex-container">
<img src="https://i.imgur.com/RPJt1sA.jpg"
width="100%"></div>
#### 4. New project -> position labeling -> class values (the face names to recognize) -> photo path; confirm everything is correct and click "Create".
<div class="flex-container">
<img src="https://i.imgur.com/hCElt69.jpg"
width="100%"></div>
#### 5. Draw a box around each face, pick the matching name in the annotation panel on the right, and click the check mark below to save. The check mark must be clicked after every image before moving on to the next one, or the boxes just drawn will be lost.
<div class="flex-container">
<img src="https://i.imgur.com/O4yBSeu.jpg"
width="100%"></div>
#### 6. Once every photo is labeled, click "Export" at the bottom left and choose "pascal-voc" as the output format.
<div class="flex-container">
<img src="https://i.imgur.com/PjgqMwB.jpg"
width="100%"></div>
#### 7. Path where the exported xml files are stored.
<div class="flex-container">
<img src="https://i.imgur.com/fnZadpW.jpg"
width="100%"></div>
### (4) Build the Dataset
#### 1. In the yolov5-master root directory, create `makeTxt.py` and `voc_label.py`
Run both files once they are created.
:::info
"makeTxt.py"
makeTxt.py splits the dataset into a training set and a test set.
By default, train, val, and test are split randomly in an 8:1:1 ratio.
After it runs, four files appear in the ImageSets folder, listing the image names of the training and test sets; four matching files also appear under data, listing the image paths.
:::
:::info
"voc_label.py"
classes=[...]: fill in the class names you used when labeling the dataset; if they are wrong, the annotations in the xml files cannot be read.
voc_label.py reads the annotation information out of the labeled xml files and writes it into txt files.
After it runs, the labels folder contains the annotation information for every image in the dataset.
:::
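Since the class id written at the start of each label line is simply the index into `classes`, the order of the list matters. A quick illustration with the five names used in this guide:

```python
classes = ['Angel', 'Sophia', 'Jenny', 'Jim', 'Tina']

# voc_label.py writes classes.index(name) as the first field of each label line
cls_id = classes.index('Jenny')
print(cls_id)  # 2
```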
#### 2. After makeTxt runs, four files appear in the ImageSets folder.
<div class="flex-container">
<img src="https://i.imgur.com/G85mIPf.jpg"
width="100%"></div>
#### 3. After voc_label runs, the labels folder contains the annotation information for every image.
<div class="flex-container">
<img src="https://i.imgur.com/iWWwzOi.jpg"
width="100%"></div>
#### 4. The makeTxt script.
```python=
import os
import random

trainval_percent = 0.9
train_percent = 0.9
xmlfilepath = 'data/Annotations'
txtsavepath = 'data/ImageSets'
total_xml = os.listdir(xmlfilepath)

num = len(total_xml)
list = range(num)
tv = int(num * trainval_percent)
tr = int(tv * train_percent)
trainval = random.sample(list, tv)
train = random.sample(trainval, tr)

ftrainval = open('data/ImageSets/trainval.txt', 'w')
ftest = open('data/ImageSets/test.txt', 'w')
ftrain = open('data/ImageSets/train.txt', 'w')
fval = open('data/ImageSets/val.txt', 'w')

for i in list:
    name = total_xml[i][:-4] + '\n'
    if i in trainval:
        ftrainval.write(name)
        if i in train:
            ftrain.write(name)
        else:
            fval.write(name)
    else:
        ftest.write(name)

ftrainval.close()
ftrain.close()
fval.close()
ftest.close()
```
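The sampling in makeTxt.py can be sanity-checked on a small synthetic list (a sketch; the ten fake file names are placeholders). With trainval_percent = 0.9 and train_percent = 0.9, ten files yield 9 trainval names, 8 of which go to train:

```python
import random

random.seed(0)  # reproducible demo
total_xml = [f"img{i:02d}.xml" for i in range(10)]

trainval_percent = 0.9
train_percent = 0.9
indices = range(len(total_xml))
tv = int(len(total_xml) * trainval_percent)  # 9
tr = int(tv * train_percent)                 # 8
trainval = random.sample(indices, tv)
train = random.sample(trainval, tr)

print(len(trainval), len(train))  # 9 8
```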
#### 5. The voc_label script.
```python=
# xml parsing
import xml.etree.ElementTree as ET
import pickle
import os
# os.listdir() returns a list of the names of the files and folders inside a folder
from os import listdir, getcwd
from os.path import join

sets = ['train', 'test', 'val']
classes = ['Angel', 'Sophia', 'Jenny', 'Jim', 'Tina']


# Normalization
def convert(size, box):  # size: (original w, original h), box: (xmin, xmax, ymin, ymax)
    dw = 1./size[0]  # 1/w
    dh = 1./size[1]  # 1/h
    x = (box[0] + box[1])/2.0  # x coordinate of the object's center in the image
    y = (box[2] + box[3])/2.0  # y coordinate of the object's center in the image
    w = box[1] - box[0]  # object width in pixels
    h = box[3] - box[2]  # object height in pixels
    x = x*dw  # center x as a ratio (i.e. x / original w)
    w = w*dw  # width as a ratio (i.e. w / original w)
    y = y*dh  # center y as a ratio (i.e. y / original h)
    h = h*dh  # height as a ratio (i.e. h / original h)
    return (x, y, w, h)  # center x ratio, center y ratio, width ratio, and height ratio relative to the original image, each in [0, 1]


# image_id: the image's file name (without extension)
def convert_annotation(image_id):
    '''
    Converts the xml file with the given name into a label file. The xml file contains
    the bounding boxes and the image size; after parsing and normalization the result
    is written into the label file. In other words, each image file has one xml file,
    and parsing plus normalization stores its information in exactly one label file.
    Label file format: class x y w h. One image can contain several objects, so there
    can be several bounding-box lines per file.
    '''
    # Open the xml file that matches this image_id
    in_file = open('data/Annotations/%s.xml' % (image_id), encoding='utf-8')
    # Prepare to write this image_id's labels, each as
    # <object-class> <x> <y> <width> <height>
    out_file = open('data/labels/%s.txt' % (image_id), 'w', encoding='utf-8')
    # Parse the xml file
    tree = ET.parse(in_file)
    # Get the root element
    root = tree.getroot()
    # Get the image size
    size = root.find('size')
    # Guard against xml files with empty annotations
    if size != None:
        # Width
        w = int(size.find('width').text)
        # Height
        h = int(size.find('height').text)
        # Iterate over the objects
        for obj in root.iter('object'):
            # Read the difficult flag
            difficult = obj.find('difficult').text
            # Read the class name (a string)
            cls = obj.find('name').text
            # Skip classes that are not in our predefined class list, and objects with difficult == 1
            if cls not in classes or int(difficult) == 1:
                continue
            # Map the class name to its id
            cls_id = classes.index(cls)
            # Find the bndbox element
            xmlbox = obj.find('bndbox')
            # Read the box values as ['xmin', 'xmax', 'ymin', 'ymax']
            b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text),
                 float(xmlbox.find('ymax').text))
            print(image_id, cls, b)
            # Normalize:
            # w = width, h = height, b = the box array ['xmin', 'xmax', 'ymin', 'ymax']
            bb = convert((w, h), b)
            # bb is the normalized (x, y, w, h)
            # Write "class x y w h" into the label file
            out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')


# Current working directory
wd = getcwd()
print(wd)

for image_set in sets:
    '''
    Iterates over the whole dataset and does two jobs:
    1. Writes the full path of every image into the matching txt file, for easy lookup.
    2. Parses and converts every image's annotations, writing the bounding boxes and
       class information into the label files.
    The label information can then be found by reading those files directly.
    '''
    # Create the labels folder if it does not exist yet
    if not os.path.exists('data/labels/'):
        os.makedirs('data/labels/')
    # Read the contents of the train/test/... files in ImageSets,
    # which hold the matching file names
    image_ids = open('data/ImageSets/%s.txt' % (image_set)).read().strip().split()
    # Open the matching data/<set>.txt file for writing
    list_file = open('data/%s.txt' % (image_set), 'w')
    # Write each file id's full path, one per line
    for image_id in image_ids:
        list_file.write('data/images/%s.jpg\n' % (image_id))
        # Convert this image's annotations
        convert_annotation(image_id)
    # Close the file
    list_file.close()

# os.system('command') runs the given command; it returns 0 if the command succeeds, 1 otherwise
# os.system("cat 2007_train.txt 2007_val.txt 2012_train.txt 2012_val.txt > train.txt")
# os.system("cat 2007_train.txt 2007_val.txt 2007_test.txt 2012_train.txt 2012_val.txt > train.all.txt")
```
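As a sanity check of the normalization, `convert` can be exercised by hand (a sketch; the 640x480 image size and the box values are made up):

```python
def convert(size, box):
    # Same normalization as in voc_label.py: size = (w, h), box = (xmin, xmax, ymin, ymax)
    dw, dh = 1. / size[0], 1. / size[1]
    x = (box[0] + box[1]) / 2.0 * dw  # center x ratio
    y = (box[2] + box[3]) / 2.0 * dh  # center y ratio
    w = (box[1] - box[0]) * dw        # width ratio
    h = (box[3] - box[2]) * dh        # height ratio
    return (x, y, w, h)

# A 640x480 image with a box spanning x 100..300 and y 200..400:
print(convert((640, 480), (100, 300, 200, 400)))
```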
### (5) Modify the Files
#### 1. In the data directory, copy and paste the coco.yaml file. This training run does face recognition for our group members, so rename the copy face.yaml.
::: info
The train, val, and test entries are the paths of the training and test images, nc is the number of classes in the dataset (we use 5 here), and names lists the class names.
:::
<div class="flex-container">
<img src="https://i.imgur.com/hg4VUEm.jpg"
width="100%"></div>
#### 2. Edit face.yaml.
<div class="flex-container">
<img src="https://i.imgur.com/0mXQyYy.jpg"
width="100%"></div>
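Based on the description above, face.yaml would look roughly like the sketch below (the relative paths and values follow the screenshots; treat them as assumptions and adjust to your own dataset):

```yaml
train: data/train.txt  # paths of the training images
val: data/val.txt      # paths of the validation images
test: data/test.txt    # paths of the test images

nc: 5  # number of classes
names: ['Angel', 'Sophia', 'Jenny', 'Jim', 'Tina']  # class names
```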
#### 3. Edit the yolov5s.yaml file.
<div class="flex-container">
<img src="https://i.imgur.com/bun2jxT.jpg"
width="100%"></div>
:::info
Next, edit yolov5s.yaml in the models directory. Which file to edit depends on which model you use; here it is the yolov5s model.
:::
#### 4. Edit the train.py file.
<div class="flex-container">
<img src="https://i.imgur.com/sDNbIdX.jpg"
width="100%"></div>
### (6) Start Training
#### 1. Start training.
:::info
Run the train.py file directly to start training; this screen means training has started successfully.
:::
<div class="flex-container">
<img src="https://i.imgur.com/c2xvuje.jpg"
width="100%"></div>
#### 2. Training finished.
<div class="flex-container">
<img src="https://i.imgur.com/AQSAd2V.jpg"
width="100%"></div>
#### 3. Training output.
:::info
After training, the runs/train/exp folder contains the following files.
The trained weights are the best.pt and last.pt files in the weights folder:
best.pt is the best weight obtained over the 300 training epochs, and last.pt is the weight from the final epoch.
:::
`runs\train\exp`
<div class="flex-container">
<img src="https://i.imgur.com/Nl7gBf4.jpg"
width="100%"></div>
`runs\train\exp\weights`
<div class="flex-container">
<img src="https://i.imgur.com/wfKIInY.jpg"
width="100%"></div>
### (7) Troubleshooting
#### 1. "The paging file is too small for this operation to complete."
:::info
Change the default value of workers to 1.
:::
<div class="flex-container">
<img src="https://i.imgur.com/aRHw3mr.jpg"
width="100%"></div>
#### 2. ImportError: DLL load failed while importing _ctypes.
:::info
Uninstall pillow, then install it again.
:::
<div class="flex-container">
<img src="https://i.imgur.com/8dF4hx6.jpg"
width="90%"></div>
#### 3. RuntimeError: CUDA out of memory.
:::info
Lower the default batch-size until the machine can handle it.
:::
<div class="flex-container">
<img src="https://i.imgur.com/CA2Me7N.jpg"
width="100%"></div>
### (8) Training Results
#### 1. Create an "imagestest" folder under "D:/yolov5-master/data".
<div class="flex-container">
<img src="https://i.imgur.com/jC1VjmW.jpg"
width="100%"></div>
#### 2. Copy best.pt to the yolov5-master root directory.
<div class="flex-container">
<img src="https://i.imgur.com/P3YsU3f.jpg"
width="100%"></div>
#### 3. Edit detect.py.
<div class="flex-container">
<img src="https://i.imgur.com/4rSZDDw.jpg"
width="100%"></div>
#### 4. Path of the test photos and videos (inside the new imagestest folder).
<div class="flex-container">
<img src="https://i.imgur.com/BgwSGOd.jpg"
width="100%"></div>
#### 5. Path where the test results are stored.
<div class="flex-container">
<img src="https://i.imgur.com/mfgzgnu.jpg"
width="100%"></div>
#### 6. Test results
<div class="flex-container">
<img src="https://i.imgur.com/vclDse9.jpg"
width="70%"></div>
<div class="flex-container">
<img src="https://i.imgur.com/kbGisu4.jpg"
width="70%"></div>
<div class="flex-container">
<img src="https://i.imgur.com/CIfSVIV.jpg"
width="70%"></div>
<div class="flex-container">
<img src="https://i.imgur.com/WgoXlLj.jpg"
width="70%"></div>
<div class="flex-container">
<img src="https://i.imgur.com/88BE6AI.jpg"
width="70%"></div>
<div class="flex-container">
<img src="https://i.imgur.com/MRkjuDx.jpg"
width="60%"></div>
:::info
In the image below, Jenny's face was not recognized.
:::
<div class="flex-container">
<img src="https://i.imgur.com/C2VRotg.jpg"
width="70%"></div>
### (9) Visualizing Training with TensorBoard
#### 1. Once training starts, the following files can be found in "D:/yolov5-master/runs/train/exp".
<div class="flex-container">
<img src="https://i.imgur.com/Bsz5A0K.jpg"
width="100%"></div>
#### 2. In "D:/yolov5-master", type cmd directly into the address bar to open a command prompt there.
<div class="flex-container">
<img src="https://i.imgur.com/oJyuZvc.jpg"
width="100%"></div>
#### 3. Run the following commands.
```
activate yolov5
tensorboard --logdir runs/train/exp
```
<div class="flex-container">
<img src="https://i.imgur.com/AdxwRlH.jpg"
width="100%"></div>
#### 4. Open the URL from the previous step in a browser.
```
http://localhost:6006/
```
<div class="flex-container">
<img src="https://i.imgur.com/81ywKda.jpg"
width="100%"></div>
#### 5. Results.
##### a.metrics
<div class="flex-container">
<img src="https://i.imgur.com/IOXKkAX.jpg"
width="100%"></div>
##### b.train
<div class="flex-container">
<img src="https://i.imgur.com/wuluofc.jpg"
width="100%"></div>
##### c.val
<div class="flex-container">
<img src="https://i.imgur.com/Ei3Ex1x.jpg"
width="100%"></div>
##### d.x
<div class="flex-container">
<img src="https://i.imgur.com/QxGcBqW.jpg"
width="100%"></div>
### (10) Weights
#### 1. Weights diagram.
<div class="flex-container">
<img src="https://i.imgur.com/1gbAcbw.jpg"
width="100%"></div>
#### 2. YOLOv5 weights.
depth_multiple sets the depth of the network.
width_multiple sets the width of the network.
Network depth: yolov5s < yolov5m < yolov5l < yolov5x
Network width: yolov5s < yolov5m < yolov5l < yolov5x
However, this machine's GPU can only run yolov5s and yolov5m.
==yolov5l== <div class="flex-container"><img src="https://i.imgur.com/GNFvnCS.jpg"
width="100%">
</div>
==yolov5m== <div class="flex-container"><img src="https://i.imgur.com/YYUhCSc.jpg"
width="100%"></div>
==yolov5s== <div class="flex-container"><img src="https://i.imgur.com/AQJMNGS.jpg"
width="100%"></div>
==yolov5x== <div class="flex-container"><img src="https://i.imgur.com/7va0V8S.jpg"
width="100%"></div>
==YOLOv5 weight comparison==<div class="flex-container"><img src="https://i.imgur.com/RlOHVgT.jpg"
width="100%"></div>
## III. Connecting to Firebase
### (1) Install Firebase
1. Open PyCharm and run the following in the yolov5-master project's Terminal:
```
pip install firebase
```
2. Then run:
```
pip install --upgrade firebase-admin
```
### (2) Generate a Key
1. In Firebase, find the screen shown below
<div class="flex-container"><img src="https://i.imgur.com/gVy6viQ.png"
width="100%"></div>
2. Generate a new private key
<div class="flex-container"><img src="https://i.imgur.com/cPZk2vj.png"
width="100%"></div>
3. Rename the downloaded key file to serviceAccount and place it under the yolov5 project folder
<div class="flex-container"><img src="https://i.imgur.com/xiq6Uxc.png"
width="100%"></div>
### (3) Code
1. Copy the database URL
<div class="flex-container"><img src="https://i.imgur.com/Ew6lVHo.png"
width="100%"></div>
2. Setup:
```python=
import firebase_admin
from firebase_admin import db
cred_obj = firebase_admin.credentials.Certificate('serviceAccount.json')
default_app = firebase_admin.initialize_app(cred_obj, {'databaseURL': 'database URL'})
```
3. Upload data:
```python=
ref1 = db.reference("/movement/JennyPose/")  # ("/directory name/node-1 name/")
ref1.set({
    "times": '2021/10/28 21:54:10',
    "movement": 'Standing',
    "name": 'Jenny'
})  # "field name": 'value'
ref2 = db.reference("/movement/SophiaPose/")  # ("/directory name/node-2 name/")
ref2.set({
    "times": '2021/10/28 21:54:10',
    "movement": 'Sitting',
    "name": 'Sophia'
})  # "field name": 'value'
```
4. Upload result (the data was uploaded to Firebase successfully)
<div class="flex-container"><img src="https://i.imgur.com/D4IdHhY.png"
width="100%"></div>
## IV. Complete Code
```python=
"""Run inference with a YOLOv5 model on images, videos, directories, streams
Usage:
$ python path/to/detect.py --source path/to/img.jpg --weights yolov5s.pt --img 640
"""
import argparse
import sys
import time
from pathlib import Path
import cv2
import torch
import torch.backends.cudnn as cudnn
FILE = Path(__file__).absolute()
sys.path.append(FILE.parents[0].as_posix()) # add yolov5/ to path
from models.experimental import attempt_load
from utils.datasets import LoadStreams, LoadImages
from utils.general import check_img_size, check_requirements, check_imshow, colorstr, non_max_suppression, \
apply_classifier, scale_coords, xyxy2xywh, strip_optimizer, set_logging, increment_path, save_one_box
from utils.plots import colors, plot_one_box
from utils.torch_utils import select_device, load_classifier, time_synchronized
import json
import requests
import firebase_admin
from firebase_admin import db
# Load the Firebase service-account key
cred_obj = firebase_admin.credentials.Certificate('serviceAccount.json')
default_app = firebase_admin.initialize_app(cred_obj, {'databaseURL': 'https://elderlycare-f9022-default-rtdb.firebaseio.com/'})
# How long each person's current pose has lasted
AngelTime = []
SophiaTime = []
JennyTime = []
JimTime = []
TinaTime = []
@torch.no_grad()
def run(weights='yolov5m.pt',  # model.pt path(s)
        source='data/images',  # file/dir/URL/glob, 0 for webcam
        imgsz=640,  # inference size (pixels)
        conf_thres=0.25,  # confidence threshold
        iou_thres=0.45,  # NMS IOU threshold
        max_det=1000,  # maximum detections per image
        device='',  # cuda device, i.e. 0 or 0,1,2,3 or cpu
        view_img=False,  # show results
        save_txt=False,  # save results to *.txt
        save_conf=False,  # save confidences in --save-txt labels
        save_crop=False,  # save cropped prediction boxes
        nosave=False,  # do not save images/videos
        classes=None,  # filter by class: --class 0, or --class 0 2 3
        agnostic_nms=False,  # class-agnostic NMS
        augment=False,  # augmented inference
        update=False,  # update all models
        project='runs/detect',  # save results to project/name
        name='exp',  # save results to project/name
        exist_ok=False,  # existing project/name ok, do not increment
        line_thickness=3,  # bounding box thickness (pixels)
        hide_labels=False,  # hide labels
        hide_conf=False,  # hide confidences
        half=False,  # use FP16 half-precision inference
        ):
    save_img = not nosave and not source.endswith('.txt')  # save inference images
    # Is the source a video stream / webcam?
    webcam = source.isnumeric() or source.endswith('.txt') or source.lower().startswith(
        ('rtsp://', 'rtmp://', 'http://', 'https://'))

    # Directories: build the save path for the predictions
    save_dir = increment_path(Path(project) / name, exist_ok=exist_ok)  # increment run
    (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True)  # make dir

    # Initialize
    set_logging()  # initialize logging
    device = select_device(device)  # pick the device
    half &= device.type != 'cpu'  # half precision only supported on CUDA; use Float16 when on GPU and half=True

    # Load model
    model = attempt_load(weights, map_location=device)  # load FP32 model; the input resolution must be divisible by 32 (it is adjusted and returned if not)
    stride = int(model.stride.max())  # model stride
    imgsz = check_img_size(imgsz, s=stride)  # check image size
    names = model.module.names if hasattr(model, 'module') else model.names  # get class names
    if half:
        model.half()  # to FP16

    # Second-stage classifier (disabled by default)
    classify = False
    if classify:
        modelc = load_classifier(name='resnet50', n=2)  # initialize
        modelc.load_state_dict(torch.load('resnet50.pt', map_location=device)['model']).to(device).eval()

    # Set Dataloader: pick the data loader that matches the input source
    vid_path, vid_writer = None, None
    if webcam:
        # Check whether imshow works in the current environment
        view_img = check_imshow()
        cudnn.benchmark = True  # set True to speed up constant image size inference
        dataset = LoadStreams(source, img_size=imgsz, stride=stride)
    else:
        dataset = LoadImages(source, img_size=imgsz, stride=stride)

    # Run inference once up front to make sure the program works
    if device.type != 'cpu':
        model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters())))  # run once
    t0 = time.time()
    '''
    path     image/video path
    img      the image after resize + pad
    img0     the image at its original size
    cap      None when reading an image, the video source when reading a video
    '''
    for path, img, im0s, vid_cap in dataset:
        img = torch.from_numpy(img).to(device)
        img = img.half() if half else img.float()  # uint8 to fp16/32
        img /= 255.0  # 0 - 255 to 0.0 - 1.0
        # Add a batch axis if there is none
        if img.ndimension() == 3:
            img = img.unsqueeze(0)
        # Inference
        '''
        Forward pass. pred has shape (1, num_boxes, 5 + num_class).
        h and w are the height and width of the network input; rectangular inference
        is used, so h is not necessarily equal to w.
        num_boxes = h/32 * w/32 + h/16 * w/16 + h/8 * w/8
        pred[..., 0:4] are the box coordinates in xywh (center + width/height) format
        pred[..., 4] is the objectness confidence
        pred[..., 5:-1] are the classification scores
        '''
        t1 = time_synchronized()
        pred = model(img, augment=augment)[0]
        # Apply NMS
        '''
        pred: output of the forward pass
        conf_thres: confidence threshold
        iou_thres: IoU threshold
        classes: keep only specific classes, if set
        agnostic_nms: whether NMS also suppresses boxes of different classes
        max_det: maximum number of detections to keep
        After NMS the box format changes from xywh to xyxy (top-left, bottom-right).
        pred is a list[torch.tensor] of length batch_size; each tensor has
        shape (num_boxes, 6) holding box + conf + cls.
        '''
        pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)
        t2 = time_synchronized()
        # Apply Classifier: second-stage classification, disabled by default
        if classify:
            pred = apply_classifier(pred, modelc, img, im0s)
        # Process detections for each image
        # i is the index added by enumerate; det is the content of pred itself
        for i, det in enumerate(pred):  # detections per image
            # If the source is a webcam, batch_size may be > 1: take one image out of the dataset
            if webcam:  # batch_size >= 1
                p, s, im0, frame = path[i], f'{i}: ', im0s[i].copy(), dataset.count
            else:
                p, s, im0, frame = path, '', im0s.copy(), getattr(dataset, 'frame', 0)
            p = Path(p)  # to Path
            save_path = str(save_dir / p.name)  # img.jpg: where the image/video is saved
            txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}')  # img.txt: where the box-coordinate txt file is saved
            s += '%gx%g ' % img.shape[2:]  # print string (image width and height)
            gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
            imc = im0.copy() if save_crop else im0  # for save_crop
            sent = ['non', 'non', 'non', 'non', 'non']
            pointName = []  # coordinates of every detected face, stored as x, y, x, y, ...
            pointMovement = []  # coordinates of every detected pose, stored as x, y, x, y, ...
            postures = []  # names of every detected pose
            people = []  # names of every detected face
            if len(det):
                # Rescale boxes from img_size to im0 size:
                # map the box coordinates from the resized/padded image back to the original image; format is xyxy
                det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()
                # Print results: how many detections per class
                for c in det[:, -1].unique():  # -1 means the last column
                    n = (det[:, -1] == c).sum()  # detections per class
                    s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string
                    '''
                    n is the number of detections that share the same label.
                    Poses are stored in the postures list;
                    faces are stored in the people list.
                    '''
                    if (n == 1):
                        if (names[int(c)]) == "Sitting":
                            postures.append("Sitting")
                        elif (names[int(c)]) == "Standing":
                            postures.append("Standing")
                        elif (names[int(c)]) == "Lying Down":
                            postures.append("Lying Down")
                        elif (names[int(c)]) == "Angel":
                            people.append("Angel")
                        elif (names[int(c)]) == "Sophia":
                            people.append("Sophia")
                        elif (names[int(c)]) == "Jenny":
                            people.append("Jenny")
                        elif (names[int(c)]) == "Jim":
                            people.append("Jim")
                        elif (names[int(c)]) == "Tina":
                            people.append("Tina")
                    else:
                        posture = []
                        person = []
                        if (names[int(c)]) == "Sitting":
                            posture.append("Sitting")
                            postures = posture * n
                        elif (names[int(c)]) == "Standing":
                            posture.append("Standing")
                            postures = posture * n
                        elif (names[int(c)]) == "Lying Down":
                            posture.append("Lying Down")
                            postures = posture * n
                        elif (names[int(c)]) == "Angel":
                            person.append("Angel")
                            people = person * n
                        elif (names[int(c)]) == "Sophia":
                            person.append("Sophia")
                            people = person * n
                        elif (names[int(c)]) == "Jenny":
                            person.append("Jenny")
                            people = person * n
                        elif (names[int(c)]) == "Jim":
                            person.append("Jim")
                            people = person * n
                        elif (names[int(c)]) == "Tina":
                            person.append("Tina")
                            people = person * n
                # Write results: save the predictions
                for *xyxy, conf, cls in reversed(det):
                    if save_txt:  # Write to file
                        # Convert xyxy (top-left + bottom-right) to xywh (center + width/height),
                        # normalize by dividing by w and h, convert to a list, then save
                        xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                        line = (cls, *xywh, conf) if save_conf else (cls, *xywh)  # label format
                        with open(txt_path + '.txt', 'a') as f:
                            f.write(('%g ' * len(line)).rstrip() % line + '\n')
                    if save_img or save_crop or view_img:  # Add bbox to the original image
                        c = int(cls)  # integer class
                        label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
                        plot_one_box(xyxy, im0, label=label, color=colors(c, True), line_thickness=line_thickness)
                        if save_crop:
                            save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)  # save crop
                    centerWH = (xyxy2xywh(torch.tensor(xyxy).view(1, 4))).view(-1).tolist()  # [center x, center y, width, height]
                    point = [centerWH[0], centerWH[1]]  # [center x, center y]
                    '''
                    Pose coordinates are stored in the pointMovement list;
                    face coordinates are stored in the pointName list.
                    '''
                    if (names[int(c)]) == "Angel":
                        pointName.append(point[0])
                        pointName.append(point[1])
                    elif (names[int(c)]) == "Sophia":
                        pointName.append(point[0])
                        pointName.append(point[1])
                    elif (names[int(c)]) == "Jenny":
                        pointName.append(point[0])
                        pointName.append(point[1])
                    elif (names[int(c)]) == "Jim":
                        pointName.append(point[0])
                        pointName.append(point[1])
                    elif (names[int(c)]) == "Tina":
                        pointName.append(point[0])
                        pointName.append(point[1])
                    elif (names[int(c)]) == "Sitting":
                        pointMovement.append(point[0])
                        pointMovement.append(point[1])
                    elif (names[int(c)]) == "Standing":
                        pointMovement.append(point[0])
                        pointMovement.append(point[1])
                    elif (names[int(c)]) == "Lying Down":
                        pointMovement.append(point[0])
                        pointMovement.append(point[1])
            pNX = pointName[0::2]  # every even index of pointName: the x coordinates of all detected faces
            pNY = pointName[1::2]  # every odd index of pointName: the y coordinates of all detected faces
            pMX = pointMovement[0::2]  # every even index of pointMovement: the x coordinates of all detected poses
            pMY = pointMovement[1::2]  # every odd index of pointMovement: the y coordinates of all detected poses
            nowTime = int(time.time())  # current time
            struct_time = time.localtime(nowTime)  # convert to a time tuple
            timeString = time.strftime('%Y/%m/%d %H:%M:%S', struct_time)  # format the time tuple as a string
# Compute how long each person has held the current posture
# One timer list per person; each list holds a posture label followed by the
# timestamps observed while that posture is unchanged.
personTimes = {"Angel": AngelTime, "Sophia": SophiaTime, "Jenny": JennyTime,
               "Jim": JimTime, "Tina": TinaTime}
def timeCount():
    record = personTimes.get(people[0])
    if record is None:
        return
    posture = postures[0]
    if record and record[0] != posture:  # posture changed: restart the timer
        record.clear()
    record.append(posture)  # store the label first, so a posture change can be detected
    record.append(nowTime)  # then store the current timestamp
    elapsed = record[-1] - record[1]  # latest timestamp minus the first one
    print(f'{people[0]} {posture} Time {elapsed} s')
    # Test hook: send an FCM alert once Sophia has been sitting for more than 2 s
    if people[0] == "Sophia" and posture == "Sitting" and elapsed > 2:
        print("fcm start")
        fcm()
# Each person maps to a Firebase node, an employee ID, and a slot in the sent[] list
personNodes = {
    "Angel":  ("/movement/E001AngelPose/",  "E001", 0),
    "Sophia": ("/movement/E002SophiaPose/", "E002", 1),
    "Jenny":  ("/movement/E003JennyPose/",  "E003", 2),
    "Jim":    ("/movement/E004JimPose/",    "E004", 3),
    "Tina":   ("/movement/E005TinaPose/",   "E005", 4),
}
def dataUpdateFirebase():
    timeCount()  # update the posture timer while uploading
    data = {'name': people[0], 'movement': postures[0], 'times': timeString}
    entry = personNodes.get(people[0])
    if entry is None:
        return
    node, personID, idx = entry
    # Upload only when the posture differs from the last one uploaded for this
    # person; sent[idx] starts as 'non', so the first detection is always sent
    if sent[idx] != postures[0]:
        sent[idx] = postures[0]
        print(data)
        ref = db.reference(node)  # Firebase node the record is written to
        ref.set({  # payload to upload
            "times": timeString,
            "movement": postures[0],
            "name": people[0],
            "ID": personID
        })
        print("Uploaded")
        # Drop the face x/y, posture x/y, name, and posture entries just handled
        del (pNX[0], pNY[0], pMX[0], pMY[0], people[0], postures[0])
j = len(people)  # number of detected faces queued for upload
for i in range(j):
    if (len(postures) >= 1) and (len(people) >= 1):
        lenX = abs(pNX[0] - pMX[0])  # |face center x - posture center x|
        lenY = abs(pNY[0] - pMY[0])  # |face center y - posture center y|
        # A face and a posture are matched to the same person when their centers lie
        # within 200 px horizontally (the 2000 px vertical tolerance accepts almost
        # any height difference in a typical frame)
        if (lenX >= 0.0) and (lenX <= 200.0) and (lenY >= 0.0) and (lenY <= 2000.0):
            dataUpdateFirebase()  # matched: upload this record to Firebase
# Send an FCM push notification (despite the name, this posts to FCM, not LINE Notify)
def SendMessageToLineNotify(Message, MessagingSenderId, Token):
    headers = {"content-type": "application/json", "Authorization": "key=" + MessagingSenderId}
    url = "https://fcm.googleapis.com/fcm/send"
    data = {"notification": {"title": "Warning!",
                             "body": Message},
            "to": Token}
    req = requests.post(url, data=json.dumps(data), headers=headers)
    print(req.text)
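# The JSON body posted to the FCM endpoint has the shape:
#   {"notification": {"title": "...", "body": "<Message>"}, "to": "<device registration token>"}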
def fcm():
    Message = "Lying down for more than nine hours!"
    # With the legacy FCM HTTP API, the Authorization header must carry the server
    # key and "to" must hold a device registration token; the commented lines
    # record the other values tried during testing
    MessagingSenderId = "1:326497554126:android:d2f4d30ad71005a8630a9f"  # application ID
    #MessagingSenderId = "326497554126"  # sender ID
    #MessagingSenderId = "AAAATATFTs4:APA91bFztQHHI9sEOg4QY_jmQo9d4DjkZtQcg4u-_USb3Yc62eWojQmHEejyPd3X0jseWgaINEsZnOH6B24Q3DmhC01qtsWcsgIIawqKSy25gNwWhY82iprYGfu0mzzFGsmElw3lOyk4"  # FCM server key
    # FCM server key
    Token = "AAAATATFTs4:APA91bFztQHHI9sEOg4QY_jmQo9d4DjkZtQcg4u-_USb3Yc62eWojQmHEejyPd3X0jseWgaINEsZnOH6B24Q3DmhC01qtsWcsgIIawqKSy25gNwWhY82iprYGfu0mzzFGsmElw3lOyk4"
    #Token = "1:326497554126:android:d2f4d30ad71005a8630a9f"  # application ID
    #Token = "FHf7Bi5mHgPqEMeZULqF6KReW7p2"  # user UID
    SendMessageToLineNotify(Message, MessagingSenderId, Token)
# Print time (inference + NMS)
print(f'{s}Done. ({t2 - t1:.3f}s)')
# Stream results: show the image/video if display is enabled
if view_img:
cv2.imshow(str(p), im0)
cv2.waitKey(1) # 1 millisecond
# Save results (image with detections)
if save_img:
if dataset.mode == 'image':
cv2.imwrite(save_path, im0)
else: # 'video' or 'stream'
if vid_path != save_path: # new video
vid_path = save_path
if isinstance(vid_writer, cv2.VideoWriter):
vid_writer.release() # release previous video writer
if vid_cap: # video
fps = vid_cap.get(cv2.CAP_PROP_FPS)
w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
else: # stream
fps, w, h = 30, im0.shape[1], im0.shape[0]
save_path += '.mp4'
vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
vid_writer.write(im0)
# Report where the results were saved
if save_txt or save_img:
s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
print(f"Results saved to {save_dir}{s}")
# strip_optimizer removes everything from the .pt file except the model / EMA weights
if update:
strip_optimizer(weights) # update model (to fix SourceChangeWarning)
print(f'Done. ({time.time() - t0:.3f}s)')
def parse_opt():
parser = argparse.ArgumentParser()
parser.add_argument('--weights', nargs='+', type=str, default='bestWeights/best3.pt', help='model.pt path(s)')  # trained weights to use: yolov5s.pt in the repo root, or runs/train/exp/weights/best.pt
#parser.add_argument('--weights', nargs='+', type=str, default=['bestAct.pt','bestAct2.pt'], help='model.pt path(s)')  # use two sets of trained weights
parser.add_argument('--source', type=str, default='data/ElderlyCareTest', help='file/dir/URL/glob, 0 for webcam')  # input source: an image/video path, '0' for the built-in webcam, or an RTSP or other video stream
#parser.add_argument('--source', type=str, default='0', help='file/dir/URL/glob, 0 for webcam')  # connect to a webcam
parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='inference size (pixels)')  # network input image size
parser.add_argument('--conf-thres', type=float, default=0.25, help='confidence threshold')  # confidence threshold: the probability that a detected object belongs to a given class (dog, cat, banana, car, ...)
parser.add_argument('--iou-thres', type=float, default=0.45, help='NMS IoU threshold')  # IoU threshold used during NMS
parser.add_argument('--max-det', type=int, default=1280, help='maximum detections per image')  # maximum number of detection boxes kept per image
parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')  # device to run on: cpu, 0 (one GPU, cuda:0), or 0,1,2,3 (multiple GPUs); when empty, the machine's default GPU or CPU is used
parser.add_argument('--view-img', action='store_true', help='show results')  # show the annotated images/videos; default False
parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')  # save detected box coordinates to .txt files; default False
parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')  # also save confidences in the --save-txt labels; default False
parser.add_argument('--save-crop', action='store_true', help='save cropped prediction boxes')  # save cropped prediction-box images
parser.add_argument('--nosave', action='store_true', help='do not save images/videos')  # do not save images/videos
parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3')  # keep only the given classes, e.g. 0 or 0 2 3
parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')  # during NMS, also suppress overlapping boxes of different classes; default False
parser.add_argument('--augment', action='store_true', help='augmented inference')  # test-time augmentation (multi-scale, flips) during inference
parser.add_argument('--update', action='store_true', help='update all models')  # if True, run strip_optimizer on all models to remove optimizer state from the .pt files; default False
parser.add_argument('--project', default='runs/ElderlyCare', help='save results to project/name')  # directory detection results are saved under; default runs/detect
parser.add_argument('--name', default='expElderlyCare', help='save results to project/name')  # name of the results folder; default exp
parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')  # if project/name already exists, do not increment the folder name
parser.add_argument('--line-thickness', default=3, type=int, help='bounding box thickness (pixels)')  # bounding-box line thickness
parser.add_argument('--hide-labels', default=False, action='store_true', help='hide labels')  # hide predicted class labels in the visualization
parser.add_argument('--hide-conf', default=False, action='store_true', help='hide confidences')  # hide confidences in the visualization
parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')  # use FP16 half-precision inference
opt = parser.parse_args()
return opt
def main(opt):
print(colorstr('ElderlyCare: ') + ', '.join(f'{k}={v}' for k, v in vars(opt).items()))
# check the environment
check_requirements(exclude=('tensorboard', 'thop'))
run(**vars(opt))
if __name__ == "__main__":
opt = parse_opt()
main(opt)
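# Example invocation (run from the yolov5-master folder; assumes this modified
# script is saved as detect.py):
#   python detect.py --weights bestWeights/best3.pt --source data/ElderlyCareTest --view-img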
```
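The per-person timers in `timeCount()` all follow one pattern: keep appending timestamps while the posture label is unchanged, and clear the list as soon as it changes. A minimal standalone sketch of that pattern (`update_record` and its arguments are illustrative names, not part of the project code):

```python
def update_record(record, posture, now):
    """Append 'now' to record, restarting it when the posture changes.

    record holds a posture label followed by the timestamps observed for it;
    returns the seconds elapsed since this posture was first seen.
    """
    if record and record[0] != posture:
        record.clear()          # posture changed: restart the timer
    if not record:
        record.append(posture)  # label first, so the next change can be detected
    record.append(now)
    return record[-1] - record[1]

record = []
print(update_record(record, "Sitting", 100))   # 0 (first observation)
print(update_record(record, "Sitting", 103))   # 3 (still sitting after 3 s)
print(update_record(record, "Standing", 104))  # 0 (posture changed, timer reset)
```

With a helper like this, an alert such as the Sophia test hook reduces to a single condition on the returned elapsed time.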
## References
(Environment setup)
https://blog.csdn.net/qq_44697805/article/details/107702939
https://tw511.com/a/01/13004.html
https://www.uj5u.com/houduan/277337.html
(YOLOv4 environment setup, used as a reference for installing CUDA and cuDNN)
https://www.youtube.com/watch?v=PVf16gIhnek
(CUDA GPU support list; a higher compute-capability number means faster computation, and in principle 3 or above is sufficient)
https://en.wikipedia.org/wiki/CUDA
(Training with custom data)
https://tw511.com/a/01/29504.html
https://blog.csdn.net/g11d111/article/details/108872076
https://www.uj5u.com/houduan/277337.html
(Installing pycocotools from a .whl file)
https://iter01.com/561896.html
(Installing apex)
https://blog.csdn.net/mrjkzhangma/article/details/100704397
(Using the Colabeler annotation tool)
https://blog.csdn.net/youmumzcs/article/details/79657132
(Introduction to YOLOv5)
https://zhuanlan.zhihu.com/p/172121380
(Connecting PyCharm to Firebase)
https://medium.com/%E7%A8%8B%E5%BC%8F%E8%A3%A1%E6%9C%89%E8%9F%B2/python-%E7%88%AC%E8%9F%B2%E4%B8%8D%E6%B1%82%E4%BA%BA%E4%B9%8B%E8%B3%87%E6%96%99%E5%84%B2%E5%AD%98%E7%AF%87-9bc0146e56f1
https://kk665403.pixnet.net/blog/post/403982819-%5Bpython%5D-python-firebase%E8%B3%87%E6%96%99%E5%BA%AB%E4%B8%B2%E6%8E%A5%E6%93%8D%E4%BD%9C%E7%B0%A1%E6%98%93%E6%96%B9%E6%B3%95