# 【Hung-yi Lee 機器學習 - L3 : Convolutional Neural Network (CNN) +L4 : Self-attention 】 :::info - 參考 [2021 Spring](https://speech.ee.ntu.edu.tw/~hylee/ml/2021-spring.php)、[2022 Spring](https://speech.ee.ntu.edu.tw/~hylee/ml/2022-spring.php)、[2023 Spring](https://speech.ee.ntu.edu.tw/~hylee/ml/2023-spring.php)、[2024 Spring](https://speech.ee.ntu.edu.tw/~hylee/genai/2024-spring.php)、[2025](https://speech.ee.ntu.edu.tw/~hylee/ml/2025-spring.php) - Lecture 3 : 卷積神經網路 Convolutional Neural Network (CNN) - Spatial Transformer Layer - Lecture 4 : 自注意力機制 Self-attention - 循環神經網路 Recurrent Neural Network (RNN) - 圖神經網路 Graph Neural Network (GNN) - HW 3 : Image Classification - HW 4 : Self-Attention ::: <br/> ## Lecture 3 : 卷積神經網路 Convolutional Neural Network (CNN) [【機器學習2021】卷積神經網路 (Convolutional Neural Networks, CNN) ](https://www.youtube.com/watch?v=OP5HcXJg2Aw) CNN 主要用在擷取圖像及局部連續數據的特徵,非常適合圖像、視頻或局部文本特徵提取的任務 Cross Entropy(交叉熵) 是預測分佈(predicted probability distribution)與真實分佈(true probability distribution)之間的差異,如果預測分佈與真實分佈越接近,則交叉熵的值越小,越有可能為該類別 ![截圖 2025-03-23 下午2.27.01](https://hackmd.io/_uploads/HJ8TFXp3ke.png) 彩色的channel為三原色x3、黑白的x1 ![截圖 2025-03-23 下午3.14.50](https://hackmd.io/_uploads/ByBbBVThJx.png) ![截圖 2025-03-23 下午2.30.14](https://hackmd.io/_uploads/Bk9F9Qphkx.png) 模型參數越多,可以增加彈性,同時越有可能overfitting ![截圖 2025-03-23 下午2.31.53](https://hackmd.io/_uploads/rkayjmThye.png) ![截圖 2025-03-23 下午2.35.01](https://hackmd.io/_uploads/B10ajm63kx.png) 因此會設置receptive field,較大的receptive field,允許模型捕捉更大範圍的特徵,適合處理全局性資訊,例如物件的整體形狀。較小的receptive,專注於局部細節 ![截圖 2025-03-23 下午2.38.04](https://hackmd.io/_uploads/Byna3mpnyl.png) kernal size 通常不會設太大,stride為重疊,通常會設置1 or 2 ![截圖 2025-03-23 下午2.50.58](https://hackmd.io/_uploads/B1fw1EThJg.png) 參數共享(Parameter Sharing),同一個卷積核(Filter)會應用到不同位置,使網路可以學習到可遷移的特徵,如邊緣、紋理等 而這些filter都是3x3xchannel的tensor,現在假設已知 ![截圖 2025-03-23 下午2.59.17](https://hackmd.io/_uploads/H1bP-E62kl.png) 多通道卷積(Multiple Filters),紅、黃、綠、藍原點,代表不同的卷積核,每個卷積核用來檢測不同類型的特徵 ![截圖 2025-03-23 下午2.57.44](https://hackmd.io/_uploads/rk37bEahJl.png) 例如 filter 1,stride=1 ![截圖 2025-03-23 下午3.28.40](https://hackmd.io/_uploads/H1oNON6nke.png) ![截圖 2025-03-23 下午3.29.22](https://hackmd.io/_uploads/B18DdEp2kg.png) 例如 filter 2 ![截圖 2025-03-23 下午3.30.06](https://hackmd.io/_uploads/HJ-c_4621g.png) ![截圖 2025-03-23 下午3.21.16](https://hackmd.io/_uploads/H1zF84T3yx.png) ![截圖 2025-03-23 下午3.33.47](https://hackmd.io/_uploads/H1jDYEa3kx.png) ![截圖 2025-03-23 下午3.05.35](https://hackmd.io/_uploads/S1gAGVahkg.png) Pooling 如果運算資源不夠,簡化圖片,減少運算量 例如2x2保留最大的 ![截圖 2025-03-23 下午3.37.45](https://hackmd.io/_uploads/Sy_Iq46nJe.png) ![截圖 2025-03-23 下午3.38.56](https://hackmd.io/_uploads/B123qEpn1l.png) CNN ![截圖 2025-03-23 下午3.39.42](https://hackmd.io/_uploads/HJkRqN62kg.png) ## Spatial Transformer Layer [Spatial Transformer Layer ](https://www.youtube.com/watch?v=SoCywZ1hZak) 在傳統的卷積網絡中,固定的卷積核只能提取局部特徵,可能在圖片發生旋轉、尺度變化或偏移時效果較差 Spatial Transformer Layer 允許在學習時自動調整圖片或特徵圖的空間變換(例如旋轉、縮放、裁切) ![截圖 2025-03-23 下午4.15.30](https://hackmd.io/_uploads/r1m3XBanJg.png) ![截圖 2025-03-23 下午4.28.42](https://hackmd.io/_uploads/rygdUS621x.png) ![截圖 2025-03-23 下午4.28.46](https://hackmd.io/_uploads/H1Su8rThkg.png) ![截圖 2025-03-23 下午4.29.01](https://hackmd.io/_uploads/S1du8ST31x.png) ![截圖 2025-03-23 下午5.20.07](https://hackmd.io/_uploads/BypUzU62Jl.png) ![截圖 2025-03-23 下午5.20.47](https://hackmd.io/_uploads/ry-tzUT21g.png) <br/> ## Lecture 4 : 自注意力機制 Self-attention [【機器學習2021】自注意力機制 (Self-attention) (上) ](https://www.youtube.com/watch?v=hYdO9CscNes) [【機器學習2021】自注意力機制 (Self-attention) (下) ](https://www.youtube.com/watch?v=gmsMY5kc-zw) 透過CNN提取特徵,可能會遇到一些限制,CNN 只能透過多層堆疊來擴大視野,但難以像 Transformer 一樣一次關注整張圖像、CNN 的卷積核權重有平移不變性(同一張圖像的不同區域使用相同的卷積核),無法根據不同輸入動態調整權重 One-hot 只能夠判斷是否為 cat dog apple,但無法找出cat & dog的關聯 ![截圖 2025-03-30 下午2.14.20](https://hackmd.io/_uploads/SkKzzvLakg.png) Self-Attention 可以補足 CNN 的缺點,動態調整權重,並考慮更遠距離的關聯,進而提升CNN 在圖像分類、目標偵測的表現 >PS 輸入與輸出的三種型態 >![截圖 2025-03-30 下午4.02.18](https://hackmd.io/_uploads/BkeJj_LTkl.png) >![截圖 2025-03-30 下午4.02.48](https://hackmd.io/_uploads/Sy9kjuLaJg.png) >![截圖 2025-03-30 下午4.02.56](https://hackmd.io/_uploads/HyVxsdLT1x.png) Self-attention 會考慮整句,輸入一個vec就輸出一個vec(可以多次) 輸出的vec -> fully connected network -> result ![截圖 2025-03-30 下午4.17.46](https://hackmd.io/_uploads/r1C4R_ITkx.png) (可以多次) ![截圖 2025-03-30 下午4.18.08](https://hackmd.io/_uploads/ryF80OI61g.png) 計算自己與其他的關聯性 α ![截圖 2025-03-30 晚上8.18.54](https://hackmd.io/_uploads/r1An8hU6kl.png) ![截圖 2025-03-30 晚上8.19.26](https://hackmd.io/_uploads/By1kPhI6Je.png) 現在有四組a1 a2 a3 a4 要算出其他組與自己的關聯性 α,attention score α,1,1 α1,2 α1,3 α1,4 再套入Soft-max 輸出新的vec (α'1,1 α'1,2 α'1,3 α'1,4) ![截圖 2025-03-30 晚上8.20.50](https://hackmd.io/_uploads/HycIwhIp1e.png) ![截圖 2025-03-30 晚上8.21.10](https://hackmd.io/_uploads/rklwDn86Je.png) a1 a2 a3 a4,每個向量都乘上一個數值,得到新的向量 V1 V2 V3 V4,再乘上 α',相加得到 b1 ![截圖 2025-03-30 晚上9.04.21](https://hackmd.io/_uploads/SJxKb6Iaye.png) ![截圖 2025-03-31 晚上8.54.06](https://hackmd.io/_uploads/HJWclGOpkx.png) ![截圖 2025-03-31 晚上8.59.40](https://hackmd.io/_uploads/r1rCWGd6kl.png) ![截圖 2025-03-31 晚上9.00.30](https://hackmd.io/_uploads/BJZbzM_6kx.png) ![截圖 2025-03-30 晚上9.21.08](https://hackmd.io/_uploads/H1H8HpI6Jg.png) ![截圖 2025-03-31 晚上9.01.17](https://hackmd.io/_uploads/SyT7MGd6yg.png) Self-attention 應用 ![截圖 2025-04-01 凌晨1.15.36](https://hackmd.io/_uploads/Hy_6aBO61g.png) ![截圖 2025-04-01 凌晨1.16.30](https://hackmd.io/_uploads/SkL8RBupkl.png) <br/> ## 循環神經網路 Recurrent Neural Network (RNN) [ML Lecture 21-1: Recurrent Neural Network (Part I) ](https://www.youtube.com/watch?v=xCGidAeyS4M) [ML Lecture 21-2: Recurrent Neural Network (Part II)](https://www.youtube.com/watch?v=rTqmWlnwz_0) [Speech Recognition PDF](https://speech.ee.ntu.edu.tw/~tlkagk/courses/DLHLP20/ASR%20(v12).pdf) RNN 是一種專門用來處理序列數據(sequence data)的神經網路,和一般的神經網路(如 CNN)不同,RNN 會保留前面步驟的資訊。缺點是,難以捕捉長距離依賴(遠處的資訊會被「淡化」)、無法並行運算(每個步驟都要等前一步計算完) CNN:輸入 → 計算 → 輸出(一次性處理) RNN:輸入 → 記住資訊 → 傳給下一個時間步驟 → 計算 → 輸出(循環處理) >PS 輸入與輸出的三種型態 ![截圖 2025-04-02 下午3.28.25](https://hackmd.io/_uploads/Sk1sPwcpJl.png) >![截圖 2025-04-02 下午3.32.14](https://hackmd.io/_uploads/ByCbdw5aJe.png) >![截圖 2025-04-02 下午3.33.25](https://hackmd.io/_uploads/SkX5_D5pkx.png) >![截圖 2025-04-02 下午4.16.37](https://hackmd.io/_uploads/rJ8uzdqakg.png) >![截圖 2025-04-02 下午4.20.09](https://hackmd.io/_uploads/BJFrX_9pJe.png) >![截圖 2025-04-02 晚上8.20.16](https://hackmd.io/_uploads/HJLL2s96ke.png) >![截圖 2025-04-02 晚上8.33.40](https://hackmd.io/_uploads/rJOnCic6yl.png) >![截圖 2025-04-02 晚上8.35.16](https://hackmd.io/_uploads/BJ0rkn5pkl.png) >![截圖 2025-04-02 晚上11.44.08](https://hackmd.io/_uploads/B1OuoAqaJg.png) >![截圖 2025-04-02 晚上11.44.40](https://hackmd.io/_uploads/ByCOsC9Tkx.png) >![截圖 2025-04-02 晚上11.44.28](https://hackmd.io/_uploads/S1HtoRcp1e.png) RNN 有記憶力後,可以解決input同一個詞彙,但不同output的問題 ![截圖 2025-04-02 上午10.58.03](https://hackmd.io/_uploads/BJKgOmqT1g.png) ![截圖 2025-04-02 上午11.07.40](https://hackmd.io/_uploads/r1Gd9m5Tkx.png) ![截圖 2025-04-02 上午11.08.19](https://hackmd.io/_uploads/Hkj497caJe.png) ![截圖 2025-04-02 上午11.10.38](https://hackmd.io/_uploads/HyyCqQcp1l.png) ![截圖 2025-04-02 上午11.11.35](https://hackmd.io/_uploads/S17Wim9Tyx.png) 因為store的值不同,output也會不同 ![截圖 2025-04-02 上午11.13.39](https://hackmd.io/_uploads/HyH_jX9Tye.png) 也可以存第一個output的結果 ![截圖 2025-04-02 上午11.14.37](https://hackmd.io/_uploads/H1C3smqpyl.png) 也可以訓練反向的 ![截圖 2025-04-02 上午11.17.06](https://hackmd.io/_uploads/rk3HnQ5T1g.png) 前面提到過的難以捕捉長距離依賴,可以用 Long Short-term Memory(LSTM) 普通 RNN 只有一個簡單的循環結構,每次處理一個詞時,會根據上一個時間步的隱藏狀態來更新當前的狀態。但隱藏狀態的資訊會隨著時間傳遞而 逐漸減弱(梯度消失問題),導致模型「忘記」前面發生的重要資訊 LSTM 會透過「記憶單元(Cell State)」來保留重要資訊,即使過了很多個時間步,也能記住遠處的關鍵詞。它的「遺忘門(Forget Gate)」和「輸入門(Input Gate)」能夠選擇性地遺忘不重要的資訊,保留重要的資訊,這樣長距離的關係就能被學習到 ![截圖 2025-04-02 上午11.21.41](https://hackmd.io/_uploads/rJxwTQ5p1x.png) ![截圖 2025-04-02 上午11.32.36](https://hackmd.io/_uploads/ByP1l49ayx.png) Attention-Based RNN 變形模型 為傳統 RNN + Attention,保留RNN的遞歸結構,但額外引入注意力機制(Attention Mechanism),主要用Encoder-Decoder 架構 注意力機制(Attention Mechanism): 注意力機制會根據當前時間步的隱藏狀態(或中間表示),來加權不同的輸入特徵,專注最相關的信息,而非均等地處理所有輸入 Encoder-Decoder架構:Encoder將輸入序列編碼成一組隱藏狀態,Decoder根據這些隱藏狀態生成輸出 通常的做法為:輸入一個序列,通過 RNN(or LSTM)逐步處理每個時間步的輸入 -> Attention 允許 Decoder 在生成每個輸出時,根據 Encoder 的所有隱藏狀態來加權選擇重要的信息,而非單純依賴最終的隱藏狀態(傳統 RNN 只能用最後一個狀態)-> 根據 Attention 計算出的加權輸入,進一步生成輸出序列 ![截圖 2025-04-02 晚上11.56.21](https://hackmd.io/_uploads/ryLNACcakg.png) ![截圖 2025-04-03 凌晨12.35.00](https://hackmd.io/_uploads/HJEW_ksake.png) ![截圖 2025-04-03 凌晨12.35.12](https://hackmd.io/_uploads/r1TWuJjakx.png) ![截圖 2025-04-03 凌晨12.37.39](https://hackmd.io/_uploads/HyFGO1oake.png) ![截圖 2025-04-03 凌晨12.39.40](https://hackmd.io/_uploads/By08_Jo6Je.png) >比較 >![截圖 2025-04-11 凌晨12.36.28](https://hackmd.io/_uploads/rkk6X_HRJe.png) >![截圖 2025-04-11 凌晨12.57.21](https://hackmd.io/_uploads/r1lY__SAJl.png) <br/> ## 圖神經網路 Graph Neural Network (GNN) GNN(Graph Neural Network)是一種專門用來處理圖結構數據(Graph-Structured Data)的神經網路,針對非結構化數據(如社交網路、知識圖譜、推薦系統),學習節點之間的關係 ![截圖 2025-04-03 凌晨12.43.23](https://hackmd.io/_uploads/HyEBtyipye.png) ![截圖 2025-04-03 凌晨12.46.09](https://hackmd.io/_uploads/SyHk91oakx.png) ![截圖 2025-04-03 下午2.30.30](https://hackmd.io/_uploads/HJufoos61l.png) ![截圖 2025-04-03 下午2.31.13](https://hackmd.io/_uploads/Bycjois6kx.png) ![截圖 2025-04-03 下午2.31.35](https://hackmd.io/_uploads/BJM2soiaJl.png) ![截圖 2025-04-03 下午2.32.18](https://hackmd.io/_uploads/HJ4Tjiop1l.png) ![截圖 2025-04-03 下午2.32.47](https://hackmd.io/_uploads/BJupsosTJg.png) ![截圖 2025-04-03 下午2.34.40](https://hackmd.io/_uploads/Hk6M2oj6Jl.png) <br/> ## HW 3 : Image Classification 圖片辨識,使用訓練集的標註資料,對未標註資料進行偽標籤生成(半監督學習),然後將標註資料和偽標籤資料合併,用來訓練模型,並用測試集評估模型的效果 ![截圖 2025-04-03 下午4.08.31](https://hackmd.io/_uploads/BJHNfToTyl.png) ![截圖 2025-04-03 下午4.08.40](https://hackmd.io/_uploads/BJ0Nf6oTJl.png) 下載資料 ```= !gdown --id '1awF7pZ9Dz7X1jn1_QAiKN-_v56veCEKy' --output food-11.zip ``` ```= import numpy as np import torch import torch.nn as nn import torchvision.transforms as transforms from PIL import Image from torch.utils.data import ConcatDataset, DataLoader, Subset from torchvision.datasets import DatasetFolder, VisionDataset from tqdm.auto import tqdm import random from torchvision import transforms from PIL import Image from torch.utils.data import Dataset import random import torchvision.transforms.functional as F ``` ```= # It is important to do data augmentation in training. # However, not every augmentation is useful. # Please think about what kind of augmentation is helpful for food recognition. # 訓練集 # train_tfm = transforms.Compose([ # # 調整影像大小為 128x128 # transforms.Resize((128, 128)), # transforms.RandomHorizontalFlip(), # 隨機水平翻轉 # transforms.RandomRotation(15), # 隨機旋轉 ±15度 # transforms.ColorJitter(brightness=0.2, contrast=0.2), # 隨機調整亮度與對比 # # 轉換為 PyTorch 的 Tensor 格式,將像素值歸一化為 [0, 1] 範圍(原始像素值是 0-255) # transforms.ToTensor(), # ]) from torchvision import transforms from PIL import Image from torch.utils.data import Dataset import random import torchvision.transforms.functional as F # 定義數據增強轉換 train_tfm = transforms.Compose([ transforms.Resize((128, 128)), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), # 標準化 ]) # class AugmentedDataset(Dataset): def __init__(self, dataset, transform=None): self.dataset = dataset self.transform = transform # 接收外部傳入的 transform self.color_jitter = transforms.ColorJitter(brightness=0.2, contrast=0.2) # 初始化亮度 & 對比增強 self.classes = dataset.classes self.class_to_idx = dataset.class_to_idx # 擴展數據集 self.samples = [] for i in range(len(dataset)): image, label = dataset[i] self.samples.append((image, label, None)) # 原圖 self.samples.append((image, label, "flip")) # 水平翻轉 self.samples.append((image, label, "color")) # 亮度 & 對比調整 def __len__(self): return len(self.samples) def __getitem__(self, idx): image, label, aug_type = self.samples[idx] # 確保 image 是 PIL.Image if isinstance(image, torch.Tensor): image = transforms.ToPILImage()(image) # 先應用基本轉換 if self.transform: image = self.transform(image) # 應用增強 if aug_type == "flip": image = F.hflip(image) # 確保是固定翻轉 elif aug_type == "color": image = self.color_jitter(image) # 固定的亮度 & 對比增強 return image, label # 測試集 # test_tfm = transforms.Compose([ # transforms.Resize((128, 128)), # transforms.ToTensor(), # ]) # 驗證集 valid_tfm = transforms.Compose([ transforms.Resize((128, 128)), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), # 標準化 ]) class AugmentedValDataset(Dataset): def __init__(self, dataset, transform=None): self.dataset = dataset self.transform = transform self.color_jitter = transforms.ColorJitter(brightness=0.2, contrast=0.2) # 初始化亮度 & 對比增強 self.classes = dataset.classes self.class_to_idx = dataset.class_to_idx # **擴展數據集** self.samples = [] for i in range(len(dataset)): image, label = dataset[i] self.samples.append((image, label, None)) # 原圖 self.samples.append((image, label, "flip")) # 水平翻轉 self.samples.append((image, label, "color")) # 亮度 & 對比調整 def __len__(self): return len(self.samples) def __getitem__(self, idx): image, label, aug_type = self.samples[idx] # **確保 image 是 PIL.Image** if isinstance(image, torch.Tensor): image = transforms.ToPILImage()(image) # 先應用基本轉換 if self.transform: image = self.transform(image) # 應用增強 if aug_type == "flip": image = F.hflip(image) # 確保是固定翻轉 elif aug_type == "color": image = self.color_jitter(image) # 固定的亮度 & 對比增強 return image, label ``` food-11/training/labeled 有label的訓練資料 food-11/validation 有label的驗證資料 food-11/training/unlabele 沒有label的驗證資料 food-11/testing (x) 有label的資料x3倍,做不同的處理 ```= batch_size = 32 train_set = DatasetFolder("food-11/training/labeled", loader=lambda x: Image.open(x), extensions="jpg", transform=train_tfm) valid_set = DatasetFolder("food-11/validation", loader=lambda x: Image.open(x), extensions="jpg", transform=valid_tfm) unlabeled_set = DatasetFolder("food-11/training/unlabeled", loader=lambda x: Image.open(x), extensions="jpg", transform=train_tfm) # test_set = DatasetFolder("food-11/testing", loader=lambda x: Image.open(x), extensions="jpg", transform=test_tfm) # num_workers cpu核心 # 這裡的test_set 沒標籤不使用,之後會從 train_set 切出 test_set augmented_train_set = AugmentedDataset(train_set, transform=train_tfm) train_loader = DataLoader(augmented_train_set, batch_size=batch_size, shuffle=True, num_workers=1, pin_memory=True) augmented_valid_set = AugmentedValDataset(valid_set, transform=valid_tfm) valid_loader = DataLoader(augmented_valid_set, batch_size=batch_size, shuffle=True, num_workers=1, pin_memory=True) # test_loader = DataLoader(test_set, batch_size=batch_size, shuffle=False) ``` 確認一下數量 ```= # train_set from collections import defaultdict # 初始化一個字典來計算每個類別的數量 class_count = defaultdict(int) # 遍歷 train_set,統計每個類別的數量 for _, label in train_set: class_count[label] += 1 # 印出每個類別的數量 for class_name, count in class_count.items(): print(f"類別 {class_name}: {count} 張圖片") print() # augmented_train_set from collections import defaultdict # 初始化一個字典來計算每個類別的數量 class_count = defaultdict(int) # 遍歷 augmented_train_set,統計每個類別的數量 for i in range(len(augmented_train_set)): _, label, _ = augmented_train_set.samples[i] # 正確解包 3 個變數 class_count[label] += 1 # 印出每個類別的數量 for class_name, count in class_count.items(): print(f"類別 {class_name}: {count} 張圖片") ``` ![截圖 2025-04-03 下午6.57.31](https://hackmd.io/_uploads/Syh49kh6kg.png) ```= # valid_set from collections import defaultdict # 初始化一個字典來計算每個類別的數量 class_count = defaultdict(int) # 遍歷 valid_set,統計每個類別的數量 for _, label in valid_set: class_count[label] += 1 # 印出每個類別的數量 for class_name, count in class_count.items(): print(f"類別 {class_name}: {count} 張圖片") print() # augmented_valid_set from collections import defaultdict # 初始化一個字典來計算每個類別的數量 class_count = defaultdict(int) # 遍歷 augmented_valid_set,統計每個類別的數量 for i in range(len(augmented_valid_set)): _, label, _ = augmented_valid_set.samples[i] # 正確解包 3 個變數 class_count[label] += 1 # 印出每個類別的數量 for cls_name, count in class_count.items(): print(f"類別 {class_name}: {count} 張圖片") ``` ![截圖 2025-04-03 下午6.59.44](https://hackmd.io/_uploads/S1Mr5JhT1l.png) 有label的訓練資料,從每個類別隨機挑選 20 張作為測試資料 ```= import random from torch.utils.data import Subset # 先分類每個類別的索引 class_indices = {label: [] for label in range(len(augmented_train_set.classes))} print(class_indices) # 遍歷 augmented_train_set,把索引按照 label 分類 for idx, (_, label, _) in enumerate(augmented_train_set.samples): # <- 直接從 samples 取 label class_indices[label].append(idx) # 從每個類別隨機挑選 20 張作為 test_set test_indices = [] for label, indices in class_indices.items(): if len(indices) >= 20: test_indices.extend(random.sample(indices, 20)) # 隨機抽 20 張 else: test_indices.extend(indices) # 如果數量不足 20,就全部選入 # 剩下的索引作為 train_set all_indices = set(range(len(augmented_train_set))) # 取得所有索引 train_indices = list(all_indices - set(test_indices)) # 取出剩下的作為訓練數據 # 建立 Subset new_train_set = Subset(augmented_train_set, train_indices) new_test_set = Subset(augmented_train_set, test_indices) # 確保 new_test_set 每個類別都有 20 張,其他為新的train_set test_label_counts = {label: 0 for label in range(len(augmented_train_set.classes))} for idx in test_indices: _, label, _ = augmented_train_set.samples[idx] # <- 直接從 samples 取 label test_label_counts[label] += 1 print("Balanced Test data label 分佈:", test_label_counts) # 重新建立 DataLoader train_loader = DataLoader(new_train_set, batch_size=batch_size, shuffle=True, num_workers=1, pin_memory=True) test_loader = DataLoader(new_test_set, batch_size=batch_size, shuffle=False) ``` 確認一下數量 ```= # new_train_set from collections import defaultdict # 初始化一個字典來計算每個類別的數量 class_count = defaultdict(int) # 遍歷 train_set,統計每個類別的數量 # for idx in new_train_set.indices: # label = augmented_train_set.samples[idx][1] # 取 label # class_count[label] += 1 for _, label in new_train_set: class_count[label] += 1 # 印出每個類別的數量 for class_name, count in class_count.items(): print(f"類別 {class_name}: {count} 張圖片") print() # new_test_set from collections import defaultdict # 初始化一個字典來計算每個類別的數量 class_count = defaultdict(int) # # 遍歷 new_test_set,統計每個類別的數量 # for idx in new_test_set.indices: # label = augmented_train_set.samples[idx][1] # 取 label # class_count[label] += 1 for _, label in new_test_set: class_count[label] += 1 # 印出每個類別的數量 for class_name, count in class_count.items(): print(f"類別 {class_name}: {count} 張圖片") ``` ![截圖 2025-04-03 晚上7.01.33](https://hackmd.io/_uploads/Hkksqyh6kx.png) model ```= import torch import torch.nn as nn class Classifier(nn.Module): def __init__(self): super(Classifier, self).__init__() # 卷積層部分 self.cnn_layers = nn.Sequential( nn.Conv2d(3, 64, 3, 1, 1), # Conv1: 3通道輸入 -> 64通道,3x3卷積,步長=1,填充=1 nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2, 2, 0), # 2x2 最大池化,步長=2(尺寸減半) nn.Conv2d(64, 128, 3, 1, 1), nn.BatchNorm2d(128), nn.ReLU(), nn.MaxPool2d(2, 2, 0), nn.Conv2d(128, 256, 3, 1, 1), nn.BatchNorm2d(256), nn.ReLU(), nn.MaxPool2d(2, 2, 0), nn.Conv2d(256, 512, 3, 1, 1), nn.BatchNorm2d(512), nn.ReLU(), nn.MaxPool2d(2, 2, 0), ) self.global_avg_pool = nn.AdaptiveAvgPool2d((1, 1)). # 自適應池化,固定輸出 512 維 # 全連接層部分 self.fc_layers = nn.Sequential( nn.Linear(512, 256), nn.BatchNorm1d(256), nn.ReLU(), nn.Dropout(0.3), nn.Linear(256, 128), nn.BatchNorm1d(128), nn.ReLU(), nn.Dropout(0.3), nn.Linear(128, 11) ) def forward(self, x): x = self.cnn_layers(x) x = self.global_avg_pool(x) x = x.view(x.size(0), -1) # 展平為 [batch_size, 512] x = self.fc_layers(x) return x ``` 處理沒有label的資料,用訓練好的模型對未標註數據進行預測(推論),超過x%準確度就當作是真實的label去訓練 ```= # 半監督學習(Semi-supervised Learning): 當有大量未標註數據時,可以透過 Pseudo-labeling 增強訓練資料 # 模型自訓練(Self-training): 模型自己標註數據,然後用這些標籤來繼續訓練 def get_pseudo_labels(dataset, model, threshold=0.65): # 用 訓練好的模型 預測 dataset 中的影像 device = "cuda" if torch.cuda.is_available() else "cpu" data_loader = DataLoader(dataset, batch_size=batch_size, shuffle=False) # 做推論,不需要打亂順序 model.eval() # Softmax 會將 logits 轉換為 機率分佈(機率總和為 1) # 判斷某張圖片最有可能屬於哪個類別,以及該類別的信心分數 softmax = nn.Softmax(dim=-1) # 進度條 for batch in tqdm(data_loader): img, _ = batch with torch.no_grad(): logits = model(img.to(device)) probs = softmax(logits) # 轉換成機率 # ---------- TODO ---------- # 過濾出高信心的 pseudo-label 樣本 high_confidence_samples = [] high_confidence_labels = [] for i in range(probs.shape[0]): # 遍歷 batch max_prob, pred_label = torch.max(probs[i], dim=-1) # 取得最高機率與對應的類別 if max_prob.item() > threshold: # 只有當信心高於閾值時才使用 high_confidence_samples.append(img[i].cpu()) # 存影像 high_confidence_labels.append(pred_label.item()) # 存標籤 # 回到訓練 model.train() return dataset ``` 訓練 ```= device = "cuda" if torch.cuda.is_available() else "cpu" model = Classifier().to(device) model.device = device criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=0.0002, weight_decay=1e-5) n_epochs = 80 # 半監督學習的過程 do_semi = True for epoch in range(n_epochs): if do_semi: # 使用訓練過的模型來對未標記的數據(unlabeled_set)進行預測,並生成偽標籤(pseudo-labels) pseudo_set = get_pseudo_labels(unlabeled_set, model) # 將原始標註數據集(train_set)和生成的偽標籤數據集(pseudo_set)合併成一個新的數據集 concat_dataset = ConcatDataset([new_train_set, pseudo_set]) print(len(new_train_set)) print(len(concat_dataset)) train_loader = DataLoader(concat_dataset, batch_size=batch_size, shuffle=True, num_workers=1, pin_memory=True) # ---------- Training ---------- # 訓練 model.train() train_loss = [] train_accs = [] # 進度條 for batch in tqdm(train_loader): imgs, labels = batch logits = model(imgs.to(device)) loss = criterion(logits, labels.to(device)) optimizer.zero_grad() loss.backward() grad_norm = nn.utils.clip_grad_norm_(model.parameters(), max_norm=10) # 限制梯度範圍,避免梯度爆炸 optimizer.step() # 更新模型參數 acc = (logits.argmax(dim=-1) == labels.to(device)).float().mean() train_loss.append(loss.item()) train_accs.append(acc) train_loss = sum(train_loss) / len(train_loss) train_acc = sum(train_accs) / len(train_accs) print(f"[ Train | {epoch + 1:03d}/{n_epochs:03d} ] loss = {train_loss:.5f}, acc = {train_acc:.5f}") # 每個 epoch 結束後清理內存 del train_loss, train_accs torch.cuda.empty_cache() # ---------- Validation ---------- # 驗證 model.eval() valid_loss = [] valid_accs = [] for batch in tqdm(valid_loader): imgs, labels = batch with torch.no_grad(): logits = model(imgs.to(device)) loss = criterion(logits, labels.to(device)) acc = (logits.argmax(dim=-1) == labels.to(device)).float().mean() valid_loss.append(loss.item()) valid_accs.append(acc) valid_loss = sum(valid_loss) / len(valid_loss) valid_acc = sum(valid_accs) / len(valid_accs) print(f"[ Valid | {epoch + 1:03d}/{n_epochs:03d} ] loss = {valid_loss:.5f}, acc = {valid_acc:.5f}") del valid_loss, valid_accs torch.cuda.empty_cache() ``` 測試集 ```= # 測試 model.eval() predictions = [] true_labels = [] # 用來存放真實標籤 for batch in tqdm(test_loader): imgs, labels = batch true_labels.extend(labels.numpy().tolist()) # 儲存真實標籤 with torch.no_grad(): logits = model(imgs.to(device)) # 取得模型的預測結果 preds = logits.argmax(dim=-1).cpu().numpy().tolist() # 取得預測類別 predictions.extend(preds) # 儲存預測標籤 ``` ```= import csv with open("predictions.csv", "w", newline='') as f: writer = csv.writer(f) writer.writerow(["Id", "PredictedCategory"]) for i, pred in enumerate(predictions): writer.writerow([i, pred]) with open("ground_truth.csv", "w", newline='') as f: writer = csv.writer(f) writer.writerow(["Id", "TrueCategory"]) for i, true_label in enumerate(true_labels): writer.writerow([i, true_label]) ``` 準確度 ```= # 準確度 import pandas as pd predictions_df = pd.read_csv("predictions.csv") ground_truth_df = pd.read_csv("ground_truth.csv") merged_df = pd.merge(predictions_df, ground_truth_df, on="Id") correct_predictions = (merged_df["PredictedCategory"] == merged_df["TrueCategory"]).sum() total_predictions = len(merged_df) accuracy = correct_predictions / total_predictions print(f"Test Accuracy: {accuracy:.5f}") ``` ![截圖 2025-04-05 上午11.11.00](https://hackmd.io/_uploads/BkkMQ7ATyx.png) <br/> ## HW 4 : Self-Attention 使用音檔轉換好的vec+label(600位speaker),判斷測試集的label ![截圖 2025-04-03 下午3.16.32](https://hackmd.io/_uploads/rJ4ZI2oayl.png) ![截圖 2025-04-03 下午3.10.20](https://hackmd.io/_uploads/B1SiE3spkg.png) ![截圖 2025-04-03 下午3.02.48](https://hackmd.io/_uploads/S1wVm3jTyg.png) ![截圖 2025-04-03 下午4.27.32](https://hackmd.io/_uploads/H1RoLas6Jg.png) ![截圖 2025-04-03 下午3.04.12](https://hackmd.io/_uploads/SyyfQ2oTkl.png) ![截圖 2025-04-03 下午3.04.21](https://hackmd.io/_uploads/HkTzXhspJe.png) 資料集 metadata.json 所有訓練資料的singal+speaker id testdata.json mappong.json 600維的vec,mapping會對應到speaker id ![截圖 2025-04-03 下午4.18.22](https://hackmd.io/_uploads/HJ_d4ai6Jx.png) ```= !pip install -U gdown !gdown --id 1KBL3eyk91vXpyJXsbQ4ZINqVp_Adv6DA -O Dataset.tar.gz !tar -xzvf Dataset.tar.gz ``` ![截圖 2025-04-13 下午4.13.04](https://hackmd.io/_uploads/rkb7fgtRkl.png) ![截圖 2025-04-15 晚上8.50.37](https://hackmd.io/_uploads/HkB480oCJx.png) ```= import os import json os.getcwd() with open(r"/content/Dataset/metadata.json", 'r') as f: metadata = json.load(f) metadata # feature_path:存儲特徵的檔案名(通常是 .pt 格式的 PyTorch 張量) # mel_len:這段語音的 Mel-spectrogram 長度(也就是時間維度上有多少個 frame ``` ![截圖 2025-04-13 下午4.19.42](https://hackmd.io/_uploads/BJ5n7xKCkl.png) ```= import json with open(r"/content/Dataset/testdata.json", 'r') as f: testdata = json.load(f) testdata ``` ![截圖 2025-04-13 下午4.20.19](https://hackmd.io/_uploads/r1n1NxYR1l.png) ```= import json with open(r"/content/Dataset/mapping.json", 'r') as f: mapping = json.load(f) mapping ``` ![截圖 2025-04-13 下午4.21.44](https://hackmd.io/_uploads/ByzNNlY0kg.png) 訓練集的人數,600人 ```= speaker_ids = list(metadata['speakers'].keys()) print(speaker_ids) print(len(speaker_ids)) ``` ![截圖 2025-04-13 下午4.15.02](https://hackmd.io/_uploads/rkUczxKRJl.png) 統計每個speaker 語音樣本數量 ```= speaker_counts = {speaker: len(utterances) for speaker, utterances in metadata['speakers'].items()} print(speaker_counts) ``` ![截圖 2025-04-13 下午4.38.59](https://hackmd.io/_uploads/H1QEOxYAJg.png) ```= import numpy as np import torch import random def set_seed(seed): np.random.seed(seed) random.seed(seed) torch.manual_seed(seed) if torch.cuda.is_available(): torch.cuda.manual_seed(seed) torch.cuda.manual_seed_all(seed) torch.backends.cudnn.benchmark = False # 關閉 cudnn 的優化,強制使用確定性算法 torch.backends.cudnn.deterministic = True set_seed(87) ``` ```= import os import json import torch import random from pathlib import Path from torch.utils.data import Dataset from torch.nn.utils.rnn import pad_sequence class myDataset(Dataset): def __init__(self, data_dir, segment_len=128): # 語音切片的長度 self.data_dir = data_dir self.segment_len = segment_len # mapping mapping_path = Path(data_dir) / "mapping.json" # mapping_path = Path("/workspace/Dataset/mapping.json") mapping = json.load(mapping_path.open()) self.speaker2id = mapping["speaker2id"] # metadata speakers metadata_path = Path(data_dir) / "metadata.json" # metadata_path = Path("/workspace/Dataset/metadata.json") metadata = json.load(open(metadata_path))["speakers"] self.speaker_num = len(metadata.keys()) self.data = [] for speaker in metadata.keys(): for utterances in metadata[speaker]: self.data.append([utterances["feature_path"], self.speaker2id[speaker]]) def __len__(self): return len(self.data) def __getitem__(self, index): feat_path, speaker = self.data[index] mel = torch.load(os.path.join(self.data_dir, feat_path)) if len(mel) > self.segment_len: start = random.randint(0, len(mel) - self.segment_len) mel = torch.FloatTensor(mel[start:start+self.segment_len]) else: mel = torch.FloatTensor(mel) speaker = torch.FloatTensor([speaker]).long() return mel, speaker def get_speaker_number(self): return self.speaker_num ``` ```= # dataloader import torch from torch.utils.data import DataLoader, random_split from torch.nn.utils.rnn import pad_sequence # 輸入 myDataset 的一個 batch # 合併函數,將一批樣本(list of tuples)組合成單一 batch,特別針對語音長度不一樣的情況進行處理 # 用 -20 當作 padding 值(因為 Mel 頻譜是 log 特徵,-20 很小,不會影響模型學習) def collate_batch(batch): """Collate a batch of data.""" mel, speaker = zip(*batch) mel = pad_sequence(mel, batch_first=True, padding_value=-20) # Debugging: Print the length of mel for each batch print(f"Batch mel shape: {[m.shape for m in mel]}") return mel, torch.FloatTensor(speaker).long() def get_dataloader(data_dir, batch_size, n_workers): # 用來載入資料的 subprocess 數量(多執行緒資料加載) # 前面定義好的myDataset # 會解析資料、預處理 Mel 頻譜特徵、轉成 [mel_tensor, speaker_id] 組合 dataset = myDataset(data_dir) speaker_num = dataset.get_speaker_number() # 切分資料 # trainlen = int(0.9 * len(dataset)) # lengths = [trainlen, len(dataset) - trainlen] # trainset, validset = random_split(dataset, lengths) dataset_size = len(dataset) train_size = int(0.8 * dataset_size) valid_size = int(0.1 * dataset_size) test_size = dataset_size - train_size - valid_size print(f"train:{train_size}, valid:{valid_size}, test:{test_size}") trainset, validset, testset = random_split(dataset, [train_size, valid_size, test_size]) # 建立 DataLoader # train train_loader = DataLoader( trainset, shuffle=True, batch_size=batch_size, num_workers=n_workers, drop_last=True, # 丟掉最後不足一個 batch 的樣本(可避免某些計算錯誤) pin_memory=True, # 加速 CPU-to-GPU 的資料傳輸 collate_fn=collate_batch, # 自定義如何把一批樣本合併成一個 batch ) # valid valid_loader = DataLoader( validset, batch_size=batch_size, num_workers=n_workers, drop_last=True, pin_memory=True, collate_fn=collate_batch, ) # test test_loader = DataLoader( testset, batch_size=batch_size, num_workers=n_workers, drop_last=True, pin_memory=True, collate_fn=collate_batch, ) trainset_len = len(trainset) validset_len = len(validset) testset_len = len(testset) return train_loader, valid_loader, test_loader, speaker_num, trainset_len, validset_len, testset_len ``` ```= # model import torch import torch.nn as nn import torch.nn.functional as F class Classifier(nn.Module): # 40 -> 80, 說話者的數量, 丟棄 def __init__(self, d_model=80, n_spks=600, dropout=0.1): super().__init__() self.prenet = nn.Linear(40, d_model) # TODO: # Change Transformer to Conformer. # https://arxiv.org/abs/2005.08100 # Multi-head Attention 多頭注意力 # 讓模型從不同的子空間中獨立學習注意力權重,進而捕捉更豐富的語境和依賴關係 self.encoder_layer = nn.TransformerEncoderLayer( d_model=d_model, dim_feedforward=256, nhead=2 ) self.encoder = nn.TransformerEncoder(self.encoder_layer, num_layers=2) # encoder_layer)堆疊兩層 # 輸出預測 self.pred_layer = nn.Sequential( nn.Linear(d_model, d_model), nn.ReLU(), nn.Dropout(dropout), nn.Linear(d_model, n_spks), ) def forward(self, mels): """ args: mels: (batch size, length, 40) return: out: (batch size, n_spks) """ # (batch, length, d_model) out = self.prenet(mels) # 線性轉換到 d_model 維度 # 使用 permute 來改變維度順序 (length, batch size, d_model) out = out.permute(1, 0, 2) # encoder layer 捕捉序列內各時間步之間的關係 out = self.encoder_layer(out) # 輸出轉換回 (batch size, length, d_model) 的格式 out = out.transpose(0, 1) # 平均池化 mean pooling,有時間步特徵彙總為一個固定維度的向量 stats = out.mean(dim=1) # MLP 預測說話者 out = self.pred_layer(stats) return out ``` ```= # learning rate # 學習率 (learning rate) 的變化 import math import torch from torch.optim import Optimizer from torch.optim.lr_scheduler import LambdaLR # 建立學習率排程器 def get_cosine_schedule_with_warmup( optimizer: Optimizer, num_warmup_steps: int, num_training_steps: int, num_cycles: float = 0.5, # cosine 循環的次數,預設是 0.5,代表從 1 衰減到 0(半個波形) last_epoch: int = -1, # 在重新載入模型訓練時,指定上一個 epoch 的編號 ) # 計算 目前這個 step 的學習率倍率(會乘上 optimizer 的初始 learning rate) def lr_lambda(current_step): # 如果 current_step < num_warmup_steps,就線性遞增 if current_step < num_warmup_steps: return float(current_step) / float(max(1, num_warmup_steps)) # Cosine,衰退階段,從 warmup 結束後到目前步數所經歷的進度,範圍從 0 到 1 progress = float(current_step - num_warmup_steps) / float( max(1, num_training_steps - num_warmup_steps) ) return max( 0.0, 0.5 * (1.0 + math.cos(math.pi * float(num_cycles) * 2.0 * progress)) ) return LambdaLR(optimizer, lr_lambda, last_epoch) ``` ```= # model function import torch # DataLoader 的 batch def model_fn(batch, model, criterion, device): """Forward a batch through the model.""" # 特徵和標籤拆分 # batch 是從 DataLoader 中來的,一個 batch 的資料 mels, labels = batch mels = mels.to(device) labels = labels.to(device) # 呼叫模型的 forward 方法,取得 預測的 logits outs = model(mels) loss = criterion(outs, labels) # 取得每個樣本預測的 機率最大(最有可能)的說話者 ID preds = outs.argmax(1) # 比對預測和真實標籤是否一致,轉為 float(0 或 1),最後對整個 batch 平均,算出準確率 accuracy = torch.mean((preds == labels).float()) return loss, accuracy ``` ```= # validate !pip install tqdm from tqdm import tqdm import torch def valid(dataloader, model, criterion, device): model.eval() # 評估模式 running_loss = 0.0 running_accuracy = 0.0 pbar = tqdm(total=len(dataloader.dataset), ncols=0, desc="Valid", unit=" uttr") for i, batch in enumerate(dataloader): with torch.no_grad(): # 呼叫 model_fn 取得 loss 和 accuracy loss, accuracy = model_fn(batch, model, criterion, device) running_loss += loss.item() running_accuracy += accuracy.item() pbar.update(dataloader.batch_size) pbar.set_postfix( loss=f"{running_loss / (i+1):.2f}", accuracy=f"{running_accuracy / (i+1):.2f}", ) pbar.close() model.train() return running_accuracy / len(dataloader) ``` ```= # main from tqdm import tqdm import torch import torch.nn as nn from torch.optim import AdamW from torch.utils.data import DataLoader, random_split # steps_per_epoch = train 45332 // batch 64 = 708 # num_epochs = total steps 30000 // 708 = 42.4 def parse_args(): """arguments""" config = { "data_dir": "/workspace/Dataset", "save_path": "/workspace/model.ckpt", "batch_size": 64, "n_workers": 1, "valid_steps": 1000, # 每多少步進行一次驗證 "warmup_steps": 1000, # 學習率預熱步數 "save_steps": 5000, # 總訓練步數 "total_steps": 30000, # 每多少步儲存一次最佳模型,每個vaild會有 70000 // 2000 = 35圈 } return config # 安排整個模型訓練流程 def main( data_dir, save_path, batch_size, n_workers, valid_steps, warmup_steps, total_steps, save_steps, ): device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print(f"[Info]: Use {device} now!") train_loader, valid_loader, test_loader, speaker_num, trainset_len, validset_len, testset_len = get_dataloader(data_dir, batch_size, n_workers) # print(f"Train set size: {trainset_len} samples") # print(f"Valid set size: {validset_len} samples") # print(f"Test set size: {testset_len} samples") for mel, speaker in train_loader: # 印出第一筆資料 print(f"First batch mel shape: {mel.shape}, Speaker shape: {speaker.shape}") break # 可以用 next() 一個一個拿 batch。 train_iterator = iter(train_loader) print(f"[Info]: Finish loading data!",flush = True) # n_spks 說話人的總類數 → 決定輸出層維度 model = Classifier(n_spks=speaker_num).to(device) criterion = nn.CrossEntropyLoss() optimizer = AdamW(model.parameters(), lr=1e-3) scheduler = get_cosine_schedule_with_warmup(optimizer, warmup_steps, total_steps) # 調整學習率 print(f"[Info]: Finish creating model!",flush = True) # 先存test_loader & test正確結果 mel_list = [] answer_list = [] # 這裡遍歷 test_loader,取出 mel 和 speaker for mel, speaker in test_loader: mel_list.append(mel) answer_list.append(speaker) # 將它們合併成 numpy array,或者可以保存成文件 mel_tensor = torch.cat(mel_list, dim=0) answer_tensor = torch.cat(answer_list, dim=0) # 保存至文件(如果需要) torch.save(mel_tensor, '/workspace/Dataset/test_mel_tensor.pt') # 題目 torch.save(answer_tensor, '/workspace/Dataset/test_answer_tensor.pt') # 解答 # best_accuracy = -1.0 best_state_dict = None pbar = tqdm(total=valid_steps, ncols=0, desc="Train", unit=" step") for step in range(total_steps): epoch_id = step // valid_steps + 1 # 每 valid_steps 為一圈 # 如果 train_loader 資料用完了,就重啟一輪 try: batch = next(train_iterator) except StopIteration: train_iterator = iter(train_loader) batch = next(train_iterator) # model_fn() 前向 + 算 loss/accuracy loss, accuracy = model_fn(batch, model, criterion, device) batch_loss = loss.item() batch_accuracy = accuracy.item() # 更新模型 loss.backward() optimizer.step() scheduler.step() optimizer.zero_grad() # Log pbar.update() pbar.set_postfix( loss=f"{batch_loss:.2f}", accuracy=f"{batch_accuracy:.2f}", step=step + 1, ) # 驗證集 if (step + 1) % valid_steps == 0: pbar.close() valid_accuracy = valid(valid_loader, model, criterion, device) # 如果驗證表現進步,就更新最好的準確率與模型參數 if valid_accuracy > best_accuracy: best_accuracy = valid_accuracy best_state_dict = model.state_dict() pbar = tqdm(total=valid_steps, ncols=0, desc=f"Train (Round {epoch_id + 1})", unit=" step") # desc="Train" # 每隔 save_steps 步,把目前「最好的模型」存入 if (step + 1) % save_steps == 0 and best_state_dict is not None: torch.save(best_state_dict, save_path) pbar.write(f"Step {step + 1}, best model saved. (accuracy={best_accuracy:.4f})") pbar.close() # 代入所有參數 # ** 的意思是「把這個字典展開成關鍵字參數」傳進去 main(...) 函數中 if __name__ == "__main__": main(**parse_args()) ``` ![截圖 2025-04-16 下午6.07.58](https://hackmd.io/_uploads/ByCtZZa0kl.png) ```= # 進行推論(inference) class InferenceDataset(Dataset): def __init__(self, mel_tensor_path): self.mel_tensor = torch.load(mel_tensor_path) self.data_len = self.mel_tensor.shape[0] def __len__(self): return self.data_len def __getitem__(self, index): mel = self.mel_tensor[index] feat_path = f"sample_{index}" # 給每個 sample 一個 ID return feat_path, mel ``` ```= import json import csv from pathlib import Path !pip install tqdm from tqdm.notebook import tqdm import torch from torch.utils.data import DataLoader def parse_args(): """arguments""" config = { "data_dir": "/workspace/Dataset", # 測試資料的資料夾路徑 "model_path": "/workspace/model.ckpt", # 訓練好的模型權重檔 "output_path": "/workspace/output.csv", # 預測結果輸出的 csv 檔名 "mel_tensor_path": "/workspace/Dataset/test_mel_tensor.pt", # test_mel_tensor.pt 的路徑 } return config def main( data_dir, mel_tensor_path, model_path, output_path, ): device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print(f"[Info]: Use {device} now!") mapping_path = Path(data_dir) / "mapping.json" mapping = json.load(mapping_path.open()) # 測試集 # dataset = InferenceDataset(data_dir) # dataloader = DataLoader( # dataset, # batch_size=1, # shuffle=False, # drop_last=False, # num_workers=2, # collate_fn=inference_collate_batch, # ) # print(f"[Info]: Finish loading data!",flush = True) # 加載 test_mel_tensor.pt dataset = InferenceDataset(mel_tensor_path) dataloader = DataLoader( dataset, batch_size=1, shuffle=False, drop_last=False, num_workers=2, ) print(f"[Info]: Finish loading data!", flush=True) speaker_num = len(mapping["id2speaker"]) model = Classifier(n_spks=speaker_num).to(device) model.load_state_dict(torch.load(model_path)) model.eval() print(f"[Info]: Finish creating model!",flush = True) results = [["Id", "Category"]] for feat_paths, mels in tqdm(dataloader): with torch.no_grad(): mels = mels.to(device) outs = model(mels) preds = outs.argmax(1).cpu().numpy() for feat_path, pred in zip(feat_paths, preds): results.append([feat_path, mapping["id2speaker"][str(pred)]]) with open(output_path, 'w', newline='') as csvfile: writer = csv.writer(csvfile) writer.writerows(results) if __name__ == "__main__": main(**parse_args()) ``` ```= !pip install pandas import torch import pandas as pd import json # 讀取 mapping,用來把 speaker index 轉成 speaker 名字 with open("Dataset/mapping.json") as f: mapping = json.load(f) speaker2id = {v: int(k) for k, v in mapping["id2speaker"].items()} # 反過來 # 讀取 output.csv 預測結果 df = pd.read_csv("output.csv") pred_labels = [speaker2id[label] for label in df["Category"]] # 讀取正確答案 true_labels = torch.load("/workspace/Dataset/test_answer_tensor.pt").tolist() # 計算準確率 correct = sum([p == t for p, t in zip(pred_labels, true_labels)]) accuracy = correct / len(true_labels) print(f"[Accuracy] {accuracy:.4f}") ``` 只跑15個epochs最佳模型的結果 ![截圖 2025-04-16 下午6.12.20](https://hackmd.io/_uploads/BJKqzb6Akg.png)