---
tags: tensorflow,keras,cohandout
---
# MNIST Handwritten Digit Recognition
Load the MNIST handwritten digit dataset:
``` python=
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt
import numpy as np

(x_train, y_train), (x_test, y_test) = mnist.load_data()

def one_hot(data, size):
    """Convert an array of integer labels into one-hot vectors."""
    val = np.zeros(shape=(len(data), size))
    for i in range(len(data)):
        val[i][data[i]] = 1
    return val

np.set_printoptions(edgeitems=256)
```
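The `one_hot` helper can be checked on a tiny example (illustrative only, not part of the original notebook):

```python
import numpy as np

def one_hot(data, size):
    """Convert an array of integer labels into one-hot vectors."""
    val = np.zeros(shape=(len(data), size))
    for i in range(len(data)):
        val[i][data[i]] = 1
    return val

# row 0 encodes label 2, row 1 encodes label 0
print(one_hot(np.array([2, 0]), 4))
# [[0. 0. 1. 0.]
#  [1. 0. 0. 0.]]
```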
First, inspect the image dimensions:
```python=
img = x_train[0]
plt.imshow(img, cmap='gray')  # grayscale
plt.show()
print(x_train.shape)  # (60000, 28, 28)
```
Normalize the pixel values to the [0, 1] range and one-hot encode the labels:
```python=
#print(x_train[0])
x_train = x_train / 255
y_train = one_hot(y_train, 10)  # convert the training labels from digits to one-hot codes
print(y_train)
x_train = x_train.reshape((60000, 28*28))  # or pass -1 to infer the size automatically
x_test = x_test / 255
print(x_test[0])
y_test = one_hot(y_test, 10)
```
Reshape the arrays:
``` python=
x_test = x_test.reshape((-1, 28*28))  # -1 infers the size automatically
print(x_test.shape)   # (10000, 784)
print(x_train.shape)  # (60000, 784)
```
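The `-1` in `reshape` lets NumPy infer that dimension from the total number of elements; a quick standalone check (with dummy data, not the MNIST arrays):

```python
import numpy as np

x = np.zeros((10000, 28, 28))       # dummy stand-in for x_test
flat = x.reshape((-1, 28 * 28))     # -1 is inferred as 10000
print(flat.shape)                   # (10000, 784)
```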
Build the model with Keras.
Softmax normalizes a finite set of outputs into a discrete probability distribution: each output is exponentiated, the numerator is the individual output's exponential, and the denominator is the sum of all the exponentials, i.e. softmax(z_i) = exp(z_i) / Σ_j exp(z_j).
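The formula above can be sketched directly in NumPy (an illustrative helper, not part of the original notebook):

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability; it cancels out in the ratio
    e = np.exp(z - np.max(z))
    return e / e.sum()

p = softmax(np.array([1.0, 2.0, 3.0]))
print(p)        # the largest input gets the largest probability
print(p.sum())  # the probabilities sum to 1
```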

``` python=
from tensorflow.keras import models, layers
model = models.Sequential()
model.add(layers.Dense(600, input_dim=784, activation='tanh'))  # widths are often powers of 2, e.g. 512 or 128
model.add(layers.Dense(100, activation='tanh'))  # e.g. 128 or 32
model.add(layers.Dense(10, activation='softmax'))
model.summary()
```
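Before predicting, the model still needs to be compiled and trained; the original notes skip this step. A minimal sketch, where the optimizer, loss, epoch count, and batch size are assumptions rather than the author's settings (it rebuilds the same architecture and uses a tiny random batch so it runs standalone; with the real data you would call `model.fit(x_train, y_train, epochs=5, batch_size=128)`):

```python
import numpy as np
from tensorflow.keras import models, layers

# same architecture as above
model = models.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(600, activation='tanh'),
    layers.Dense(100, activation='tanh'),
    layers.Dense(10, activation='softmax'),
])
# assumed settings -- not shown in the original notebook
model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# tiny random batch just to demonstrate the call shape
x_demo = np.random.rand(32, 784)
y_demo = np.eye(10)[np.random.randint(0, 10, 32)]
model.fit(x_demo, y_demo, epochs=1, verbose=0)
```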
Predict on the test set:
``` python=
predict = model.predict(x_test)
print(predict.argmax(axis=1))
print(y_test.argmax(axis=1))
```
Compute the actual accuracy:
``` python=
count = np.equal(predict.argmax(axis=1), y_test.argmax(axis=1)).sum()
print(count / len(x_test))  # fraction of correct predictions
```

## Supplement
Usage of numpy `argmax`: [link](https://blog.csdn.net/qq1483661204/article/details/78959293)
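A quick illustration of `argmax` with the `axis` argument, as used in the accuracy computation above:

```python
import numpy as np

a = np.array([[0.1, 0.7, 0.2],
              [0.5, 0.3, 0.2]])
print(a.argmax(axis=1))  # index of the max in each row:    [1 0]
print(a.argmax(axis=0))  # index of the max in each column: [1 0 0]
```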