###### tags: `PyTorch Notes`
{%hackmd @kk6333/theme-sty1 %}
# PyTorch Notes: Building and Training a Neural Network
### 0. Import
```python=
import torch            # core PyTorch module
import torch.nn as nn   # building blocks for neural networks
```
---
<br>
### 1. Building the Model (with loss and optimizer setup)
Define a custom model class that inherits from **nn.Module**.
It must implement these two methods:
- `__init__`: initializes the model; the network layers are typically defined here
- `forward`: takes the input and runs the forward pass through the network
```python=
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        # 4 input features -> 3 output values
        self.layers = nn.Sequential(
            nn.Linear(4, 32),
            nn.ReLU(),
            nn.Linear(32, 64),
            nn.ReLU(),
            nn.Linear(64, 3)
        )

    def forward(self, x):
        return self.layers(x)
```
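A quick sanity check can confirm the layer shapes line up. This is a minimal sketch (the batch size of 8 is arbitrary): it rebuilds the same `nn.Sequential` stack standalone, feeds a dummy batch through it, and inspects the output shape.

```python
import torch
import torch.nn as nn

# Re-create the network from the class above for a standalone check
model = nn.Sequential(
    nn.Linear(4, 32), nn.ReLU(),
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 3),
)

x = torch.randn(8, 4)    # dummy batch: 8 samples, 4 features each
out = model(x)
print(out.shape)         # torch.Size([8, 3])
```

The first dimension (the batch) passes through unchanged; only the feature dimension is transformed, from 4 to 3.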
Set up the loss function and optimizer.
Here we use MSE loss and SGD (gradient descent) as an example; note that for classification tasks, `nn.CrossEntropyLoss` is the more common choice.
```python=
model = MyModel()       # instantiate the model defined above
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # lr: learning rate
```
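To see concretely what one optimization step does, here is a minimal sketch with a single scalar parameter (the parameter, target, and learning rate are made up for illustration): compute the MSE loss, backpropagate, and let SGD apply `w ← w − lr · grad`.

```python
import torch
import torch.nn as nn

# A toy parameter to watch SGD update: w starts at 0
w = nn.Parameter(torch.zeros(1))
optimizer = torch.optim.SGD([w], lr=0.1)
loss_fn = nn.MSELoss()

target = torch.ones(1)
loss = loss_fn(w, target)   # MSE(0, 1) = 1.0
optimizer.zero_grad()
loss.backward()             # dL/dw = 2 * (w - target) = -2
optimizer.step()            # w <- w - lr * grad = 0 - 0.1 * (-2) = 0.2
print(w.item())             # 0.2
```

This is exactly the `zero_grad()` → `backward()` → `step()` cycle that the training loop in the next section repeats for every batch.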
---
<br>
### 2. Setting Up the Training Loop
The training steps vary with the task;
here we walk through a basic supervised-learning loop:
- For each epoch
    - **Train on batches from the training set**
        - Move x, y to the target device
        - Run the prediction (forward pass)
        - Compute the loss
        - Backpropagate
        - Update the parameters
        - Record the loss
    - **Evaluate on the validation set**
        - Move x, y to the target device
        - Run the prediction
        - Compute the loss
        - Record the loss
```python=
def train(train_dataloader, val_dataloader, epochs, model, loss_fn, optimizer, device, print_info=True):
    train_loss_list = []
    val_loss_list = []
    for epoch in range(epochs):
        ###### Train ######
        model.train()                      # training mode (affects dropout, batchnorm, ...)
        for X, y in train_dataloader:
            X = X.to(device)
            y = y.to(device)
            pred_y = model(X)
            loss = loss_fn(pred_y, y)
            optimizer.zero_grad()          # clear gradients from the previous step
            loss.backward()                # backpropagation
            optimizer.step()               # update parameters
            train_loss_list.append(loss.item())
        ###### Validation ######
        model.eval()                       # evaluation mode
        with torch.no_grad():              # no gradients needed for validation
            for X, y in val_dataloader:
                X = X.to(device)
                y = y.to(device)
                pred_y = model(X)
                loss = loss_fn(pred_y, y)
                val_loss_list.append(loss.item())
        if print_info:
            print(f"epoch {epoch + 1}/{epochs}  "
                  f"train loss: {train_loss_list[-1]:.4f}  val loss: {val_loss_list[-1]:.4f}")
    return train_loss_list, val_loss_list
```
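To tie the pieces together, here is a minimal end-to-end sketch; the synthetic random data, split, batch size, and learning rate are all placeholders. It builds `DataLoader`s from tensors and runs one condensed training epoch following the loop above.

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

# Synthetic data: 100 samples, 4 features -> 3 targets (80/20 train/val split)
X = torch.randn(100, 4)
y = torch.randn(100, 3)
train_dl = DataLoader(TensorDataset(X[:80], y[:80]), batch_size=16, shuffle=True)
val_dl = DataLoader(TensorDataset(X[80:], y[80:]), batch_size=16)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(
    nn.Linear(4, 32), nn.ReLU(),
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 3),
).to(device)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One epoch of the training loop above, condensed
model.train()
for bx, by in train_dl:
    bx, by = bx.to(device), by.to(device)
    loss = loss_fn(model(bx), by)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

With real data, you would replace the random tensors with your own `Dataset` and call the full `train(...)` function for multiple epochs instead of the single condensed pass.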