# Printed Character Recognition Model Training Log
How accuracy and loss are computed:
```python3
model = load_model("TrainChar.h5") # TrainChar.h5為訓練好後儲存的模型
model.evaluate(test_img_normalize, test_label_onehot)
# test_img_normalize 為經過前處理的測試資料
# test_label_onehot 為轉為one hot encoding 的測試資料標籤
```
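The preprocessing itself is not recorded in this log. Below is a minimal sketch of how `test_img_normalize` and `test_label_onehot` could be produced, assuming the raw test images and integer labels live in hypothetical arrays `test_img` and `test_label`, and that normalization means scaling pixel values to [0, 1]:
```python3
from tensorflow.keras.utils import to_categorical

# test_img and test_label are hypothetical names for the raw 28x28 grayscale
# test images and their integer class labels (not shown in this log).
test_img_normalize = test_img.reshape(-1, 28, 28, 1).astype("float32") / 255.0  # scale pixels to [0, 1]
test_label_onehot = to_categorical(test_label)  # integer labels -> one-hot vectors
```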
----
## Training Run 1 (acc: 0.533, loss: 30.989)
- Convolutional neural network (CNN)
- 2 convolutional layers
- 2 pooling layers
- 2 hidden (fully connected) layers
```python3
# Convolutional layers
filters=100, kernel_size=(3, 3), padding='same', input_shape=(28, 28, 1), activation='relu'
filters=200, kernel_size=(3, 3), padding='same', activation='relu'
# Pooling layers
pool_size=(2, 2)
# Hidden layers
units=512, activation='relu'
units=512, activation='relu'
```
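A minimal sketch of how these parameters could be assembled into a Keras `Sequential` model, assuming max pooling after each convolutional layer, a `Flatten` before the dense layers, and a softmax output layer; `num_classes` is a hypothetical placeholder, since the number of character classes is not recorded in this log:
```python3
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

num_classes = 10  # hypothetical; the actual number of character classes is not recorded here

model = Sequential([
    Conv2D(filters=100, kernel_size=(3, 3), padding='same',
           input_shape=(28, 28, 1), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),  # pooling layer (assumed max pooling)
    Conv2D(filters=200, kernel_size=(3, 3), padding='same', activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(units=512, activation='relu'),
    Dense(units=512, activation='relu'),
    Dense(units=num_classes, activation='softmax'),
])
# One-hot labels imply a categorical cross-entropy loss; the optimizer choice is an assumption.
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
```
After fitting, saving with `model.save("TrainChar.h5")` would produce the file loaded in the evaluation snippet at the top.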
- Result (loss, accuracy): [30.989418029785156, 0.5333333353201548]
----
## Training Run 2 (acc: 0.499, loss: 35.235)
- Convolutional neural network (CNN)
- 2 convolutional layers
- 2 pooling layers
- 2 hidden (fully connected) layers
```python3
# Convolutional layers
filters=50, kernel_size=(3, 3), padding='same', input_shape=(28, 28, 1), activation='relu'
filters=100, kernel_size=(3, 3), padding='same', activation='relu'
# Pooling layers
pool_size=(2, 2)
# Hidden layers
units=512, activation='relu'
units=512, activation='relu'
```
- Result (loss, accuracy): [35.234667777565562, 0.4986774834588013]
----
## Training Run 3 (acc: 0.667, loss: 15.201)
- Convolutional neural network (CNN)
- 3 convolutional layers
- 3 pooling layers
- 2 hidden (fully connected) layers
```python3
# Convolutional layers
filters=200, kernel_size=(3, 3), padding='same', input_shape=(28, 28, 1), activation='relu'
filters=400, kernel_size=(3, 3), padding='same', activation='relu'
filters=800, kernel_size=(3, 3), padding='same', activation='relu'
# Pooling layers
pool_size=(2, 2)
# Hidden layers
units=512, activation='relu'
units=512, activation='relu'
```
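Assuming the same assembly pattern as in Run 1 (one assumed max-pooling layer after each convolution), the feature maps entering the dense layers shrink as follows:
```python3
# Spatial size after each conv + (2, 2) pool, with 'same' padding and the default pooling stride:
# input        : (28, 28, 1)
# conv1 + pool : (14, 14, 200)
# conv2 + pool : (7, 7, 400)
# conv3 + pool : (3, 3, 800)   # 7 // 2 = 3 after the third pooling
# Flatten      : 3 * 3 * 800 = 7200 values feeding the first Dense(512) layer
```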
- Result (loss, accuracy): [15.201334381103516, 0.6666666690508525]