# AI4Y Camp
陳詩諺:https://hackmd.io/@QadYft6CTh6AkBUQ5xWi7g/H1hOVn1jc
## D2
### MNIST with a CNN
#### Dataset preparation
#### Classification model
Test the model
~~worried the model might just memorize the answers~~
- Input layer
- Convolution layer
- Pooling layer
- Flatten layer
- Fully connected layer
- Output layer
- softmax: converts outputs into probabilities
- ReLU: rectified linear unit
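The two activation functions above can be sketched in plain NumPy (an illustrative sketch, not the Keras implementation):

```python
import numpy as np

def relu(x):
    # ReLU keeps positive values and zeroes out negatives
    return np.maximum(0, x)

def softmax(x):
    # subtract the max for numerical stability, then normalize to probabilities
    e = np.exp(x - np.max(x))
    return e / e.sum()

logits = np.array([2.0, 1.0, -1.0])
probs = softmax(relu(logits))
print(probs, probs.sum())  # the probabilities sum to 1
```

ReLU is used inside the network; softmax sits at the output layer so the 10 class scores become a probability distribution.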
#### Training parameter settings
Optimizer: the strategy for adjusting the model's parameters to reduce the loss
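A minimal sketch of the idea, assuming plain gradient descent on a toy loss (real optimizers such as Adam add momentum and adaptive step sizes, but the principle is the same):

```python
# Toy loss: loss(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w = 0.0
learning_rate = 0.1
for _ in range(100):
    grad = 2 * (w - 3)         # dLoss/dw at the current w
    w -= learning_rate * grad  # step against the gradient to reduce the loss
print(w)  # converges toward 3, the minimum of the loss
```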
#### Training the model
- Epochs: one epoch is one full pass of the data through the model
- Batch size: how many samples are read per step (run a batch >>> sum the errors >>> adjust the weights)
- Early stop: checks whether each new epoch improves the model; if it gets worse, training stops and the weights from the best earlier epoch are kept as the final model weights
- Loss function: measures how far the model's predictions deviate from the true values
- Evaluation metrics: used to judge how good a classification model is
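As a hand-rolled NumPy sketch of the loss and metric used later in the notebook (assuming one-hot labels; Keras computes these internally):

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred):
    # mean over the batch of -sum(true * log(pred)) per sample
    eps = 1e-7  # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1))

# one-hot labels for two samples (true classes: 1 and 0)
y_true = np.array([[0, 1, 0], [1, 0, 0]])
# model-predicted class probabilities
y_pred = np.array([[0.1, 0.8, 0.1], [0.7, 0.2, 0.1]])

loss = categorical_crossentropy(y_true, y_pred)
# accuracy: fraction of samples whose most probable class matches the label
accuracy = np.mean(y_true.argmax(axis=1) == y_pred.argmax(axis=1))
print(loss, accuracy)  # accuracy is 1.0 here: both argmaxes match
```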
***
### Colab hands-on
```
!nvidia-smi
```
Import modules
```python=
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers
```
```python=
(x_train,y_train),(x_test,y_test)=keras.datasets.mnist.load_data()
x_validation=x_train[-10000:]
y_validation=y_train[-10000:]
x_train=x_train[:-10000]
y_train=y_train[:-10000]
print("training data",x_train.shape)
print("validation data",x_validation.shape)
print("testing data",x_test.shape)
```
```python=
x_train=x_train.astype("float32")/255
x_validation=x_validation.astype("float32")/255
x_test=x_test.astype("float32")/255
```
```python=
x_train=np.expand_dims(x_train,-1)
x_validation=np.expand_dims(x_validation,-1)
x_test=np.expand_dims(x_test,-1)
print(x_train.shape)
print(x_validation.shape)
print(x_test.shape)
num_classes=10
y_train=keras.utils.to_categorical(y_train,num_classes)
y_validation=keras.utils.to_categorical(y_validation,num_classes)
y_test=keras.utils.to_categorical(y_test,num_classes)
print(y_train[0])
```
```python=
input_shape=(28,28,1)
model = keras.Sequential(
    [
        keras.Input(shape=input_shape),
        layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ]
)
print(model.summary())
```
```python=
model.compile(
    loss="categorical_crossentropy",
    optimizer="adam",
    metrics=["accuracy"],
)
batch_size = 256
epochs = 15
my_callbacks = [
    keras.callbacks.EarlyStopping(
        patience=5,  # tolerance: stop after 5 epochs without improvement
        monitor="val_accuracy",
        restore_best_weights=True,
    )
]
#training
result = model.fit(
    x_train, y_train,
    batch_size=batch_size,
    epochs=epochs,
    validation_data=(x_validation, y_validation),
    callbacks=my_callbacks,
)
```
```python=
history_dict=result.history
loss_values=history_dict["loss"]
val_loss_values=history_dict["val_loss"]
epochs=range(1,len(loss_values)+1)
plt.plot(epochs,loss_values,"bo",label="Training loss")
plt.plot(epochs,val_loss_values,"b",label="Validation loss")
plt.title("Training and validation loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
score=model.evaluate(x_test,y_test)
print("Test loss",score[0])
print("Test accuracy",score[1])
```
```python=
from PIL import Image
import tensorflow as tf
from google.colab import files
def loadImage(filenames):
img = Image.open(filenames).convert("L")
img = img.resize(28,28)
img_arr = np.asarray(img,dtype="float32")
return img_array
# upload your own image
uploaded = files.upload()
file_name = list(uploaded.keys())[0]
predict_data = loadImage(file_name)
# reshape into the input shape the model expects
image_list = []
image_list.append(predict_data)
arr1 = np.array(image_list).reshape((1, 28, 28, 1))
# prediction
prediction = model.predict(arr1)
print(file_name + " prediction results:")
for i in range(10):
    print("probability of " + str(i) + ": " + "%.4f" % prediction[0][i])
```
### OpenVINO
FPGA (field-programmable gate array) -> Arduino