A neural network is like a brain; training a neural network is like teaching a child something new.
The Wio Terminal is a development board with a built-in accelerometer and buttons; we need to teach the device to recognize when the user is jumping.
The device continuously monitors the accelerometer's X- and Y-axis data; a jump produces a characteristic sharp change in these readings, which is how the device "knows" the user is jumping.
This detection is then mapped to a space-bar keystroke sent to the computer, so a physical jump makes the dinosaur in Chrome's Dino game jump.
Seeed Studio Wio Terminal (hardware)
Edge Impulse (software; a machine-learning platform)
To train the model for the device, we use Edge Impulse Studio.
(Can Edge Impulse be used for text, i.e. NLP (Natural Language Processing)?)
People tend to teach children with real objects, e.g. using a real mouse to teach a child what a mouse is.
to know "a jump" use the data from an accelerator
use the data to train the algorithm to give an output (key stroke)
同樣,我們需要用加速器回傳的数据来认识 "跳跃"這個動作。
用收集到的原始数据来训练算法,使其产生一个输出(最終這個輸出會引發電腦上空格按键按下的動作)。
Then we can deploy the model parameters to the Wio Terminal.
Sufficient data is needed for training.
(Create an account) and create a new project (dino-jump).
Follow the official documentation.
(A mobile phone can also be used to collect data for a similar project, but we would still need a way to simulate a keystroke/keyboard to send the key-press signal.)
Collecting data
For Salman Faris, it takes about 30 jumps to collect the training data.
The Wio Terminal has an Arm Cortex-M4 core, a built-in motion sensor, and a USB Type-C connector, and it is supported by Edge Impulse.
Download the firmware to the device (Wio Terminal) to "program" the flash memory.
To collect the data, connect the device to your computer (USB) and configure data acquisition (DAQ):
sample length, sampling frequency, which sensors, etc.
Then click sampling ==> jump! (with the Wio Terminal in your pocket)
Once sampling is complete, the X/Y/Z axis data is shown on the page in different colors. If a sample is not what you want, delete it and collect again.
(The choice of DAQ sampling frequency depends on the specifications of the device.)
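As a rough sketch of what the device streams during data acquisition, the Wio Terminal's built-in accelerometer can be read like this, assuming the Seeed LIS3DHTR library; the 50 Hz rate and the serial output are illustrative choices, not settings quoted from the session:

    // Minimal sketch, assuming the Seeed LIS3DHTR library bundled with the
    // Wio Terminal board support; 50 Hz is an illustrative rate.
    #include "LIS3DHTR.h"

    LIS3DHTR<TwoWire> lis;

    void setup() {
      Serial.begin(115200);
      lis.begin(Wire1);                               // internal I2C bus of the Wio Terminal
      lis.setOutputDataRate(LIS3DHTR_DATARATE_50HZ);  // must stay within the sensor's spec
    }

    void loop() {
      // One raw sample: acceleration (in g) on each axis, i.e. the x/y/z
      // traces shown in Edge Impulse Studio during data acquisition.
      Serial.print(lis.getAccelerationX()); Serial.print('\t');
      Serial.print(lis.getAccelerationY()); Serial.print('\t');
      Serial.println(lis.getAccelerationZ());
      delay(20);                                      // ~50 Hz sampling interval
    }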
Then we collect another set of data for "not jumping" (idle); the X/Y/Z traces are essentially flat.
(Do we need to do data shaping? Yes.)
Use this project to learn how to do "machine learning".
Machine learning relies on "correct" data.
edge impulse has "supportied" device list; if one wants to use a different deivce, one can try to "porting" information with SDK. (community support)
edge impulse有“支持”的設備列表; 如果想使用不同的設備,可以嘗試使用SDK“移植”信息。 (社群支持)
You can check the collected data from the Edge Impulse web interface.
Split the data into training data and test data.
Label the data as jumping or idle.
==> extract the features
For time-series data we set a window size (the span of raw data each sample window covers) and a processing block (how each window is processed).
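As a sketch of what window size and window increase mean for the raw recording; the 2 s / 0.5 s / 50 Hz numbers below are assumed for illustration, not taken from the session:

    // Plain C++ illustration of slicing a raw x/y/z recording into windows.
    #include <cstdio>
    #include <vector>

    struct Sample { float x, y, z; };

    int main() {
        const int   kSamplingHz    = 50;                              // assumed DAQ frequency
        const float kWindowSec     = 2.0f;                            // assumed window size
        const float kIncreaseSec   = 0.5f;                            // assumed window increase
        const int   kWindowSamples = int(kSamplingHz * kWindowSec);   // 100 samples per window
        const int   kStrideSamples = int(kSamplingHz * kIncreaseSec); // step of 25 samples

        std::vector<Sample> recording(500);   // pretend this is 10 s of raw accelerometer data

        // Each window is handed to the processing block and becomes one example.
        for (size_t start = 0; start + kWindowSamples <= recording.size(); start += kStrideSamples) {
            std::printf("window: samples %zu..%zu (%d x 3 raw values)\n",
                        start, start + kWindowSamples - 1, kWindowSamples);
        }
        return 0;
    }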
Edge Impulse supports audio, images, spectral data, etc.
Then choose the algorithm; here we use Keras (a neural network).
The number of output features (classes) is 2: jump and idle.
To test the model, one can look at the result of "Generate features"; the "Feature explorer" visualizes the result.
The features come from the root-mean-square (RMS) of the accelerometer data on each axis.
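A minimal sketch of the per-axis RMS idea; the actual features are configured in Edge Impulse's processing block, so treat this as an illustration with made-up numbers:

    #include <cmath>
    #include <cstdio>

    // RMS of one axis over a window of n samples: sqrt(mean(v[i]^2)).
    float axis_rms(const float *v, int n) {
        float sum_sq = 0.0f;
        for (int i = 0; i < n; i++) sum_sq += v[i] * v[i];
        return std::sqrt(sum_sq / n);
    }

    int main() {
        // Toy windows: an idle window hovers around 1 g on the vertical axis,
        // while a jump window swings widely (values are made up).
        float idle_z[4] = {1.00f, 1.01f, 0.99f, 1.00f};
        float jump_z[4] = {1.0f, 2.8f, -0.5f, 1.9f};
        std::printf("idle RMS(z) = %.2f\n", axis_rms(idle_z, 4));
        std::printf("jump RMS(z) = %.2f\n", axis_rms(jump_z, 4));
        return 0;
    }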
We can also set the training parameters, e.g. the number of epochs (training iterations).
Finally we get the neural network architecture (NN parameters).
For this whole project, you don't need a deep understanding of machine learning (ML).
Humans learn objects from their shape, color, etc.; these are the objects' features. A key part of machine learning is extracting such features from similar data: only when the algorithm can distinguish the features in the given data can it classify the data well.
(The training data consists of the information from these different features of the objects.)
Then we can test the model using the test data:
Salman tried data taken while moving the Wio Terminal up and down by hand and compared the three-axis accelerometer output with the data collected with the device in his pocket.
So one needs to jump about as high as in the training data. :D
(Jumping high gives "jumping" samples that sit in a clearly separate region of the three-axis feature space from the "idle" samples, and this difference shows up as distinct features.)
From the results on the test data, one can see the performance of the model.
Then we can install the trained model back onto the Wio Terminal (download the model and deploy it to the device), either by creating a library or by building firmware.
For a low-power device, the "EON" compiler can optimize the RAM and flash memory used by the code (e.g. using int8 instead of float32).
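A plain C++ illustration of why int8 saves memory compared with float32; the scale/zero-point scheme below is the common affine quantization idea, not EON's exact implementation:

    #include <cstdint>
    #include <cstdio>
    #include <cmath>

    // Map a float weight to int8 with a scale and zero point, then back.
    int8_t quantize(float w, float scale, int zero_point) {
        int q = int(std::lround(w / scale)) + zero_point;
        if (q < -128) q = -128;
        if (q > 127)  q = 127;
        return int8_t(q);
    }

    float dequantize(int8_t q, float scale, int zero_point) {
        return scale * float(q - zero_point);
    }

    int main() {
        const float scale = 0.02f;   // hypothetical: covers roughly [-2.56, 2.54]
        const int   zp    = 0;

        float  w  = 0.8731f;                 // a float32 model weight (4 bytes)
        int8_t qw = quantize(w, scale, zp);  // its int8 version (1 byte)
        std::printf("float32 %.4f -> int8 %d -> %.4f after dequantizing\n",
                    w, qw, dequantize(qw, scale, zp));
        std::printf("memory per weight: %zu bytes vs %zu bytes\n",
                    sizeof(float), sizeof(int8_t));
        return 0;
    }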
(Do we need to know the neural network architecture after training? No, Edge Impulse takes care of it; the number of NN layers needed depends on the application.)
Activation function, loss function, etc. are further considerations.
One can also try different models (different numbers of neurons, layers, etc.) and compare the results.
Underfitting (undertraining): the model has not learned enough.
Overfitting: the model picks up detailed features specific to the training data, which can degrade accuracy on new data.
(More layers/neurons do not necessarily give a better result.)
The program runs continuously.
Once the "jump" classification value crosses the threshold, it prints the result to serial and sends an "up" keystroke, which makes the dinosaur in Chrome jump.
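A hedged sketch of that continuous loop, assuming the Arduino library exported from the Edge Impulse project (the header name depends on your project name), the Seeed LIS3DHTR library, and the Arduino Keyboard library; the "jump" label string and the 0.8 threshold are assumptions, not values from the session:

    // Sketch only: header name, label string, and threshold are assumptions.
    #include <dino-jump_inferencing.h>   // hypothetical name; use your project's exported header
    #include "LIS3DHTR.h"
    #include "Keyboard.h"

    LIS3DHTR<TwoWire> lis;
    static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

    void setup() {
      Serial.begin(115200);
      Keyboard.begin();
      lis.begin(Wire1);                                // Wio Terminal's internal I2C bus
      lis.setOutputDataRate(LIS3DHTR_DATARATE_100HZ);
    }

    void loop() {
      // Fill one window of raw x/y/z samples (3 values per frame) at the
      // interval the model was trained on.
      for (size_t i = 0; i < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; i += 3) {
        features[i + 0] = lis.getAccelerationX();
        features[i + 1] = lis.getAccelerationY();
        features[i + 2] = lis.getAccelerationZ();
        delay((int)EI_CLASSIFIER_INTERVAL_MS);
      }

      // Wrap the buffer in a signal and run the classifier.
      signal_t signal;
      numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);
      ei_impulse_result_t result = { 0 };
      if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
        return;
      }

      // Print all class scores to serial; if "jump" wins convincingly, press a key
      // so the Chrome Dino jumps (space bar; the up arrow also works in the game).
      for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        Serial.print(result.classification[ix].label);
        Serial.print(": ");
        Serial.println(result.classification[ix].value);
        if (strcmp(result.classification[ix].label, "jump") == 0 &&
            result.classification[ix].value > 0.8f) {   // assumed threshold
          Keyboard.write(' ');
        }
      }
    }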
One can also use a Raspberry Pi or a computer with a camera.
A Nordic nRF52840 (a USB dongle with Bluetooth) can also be used.
Mobile phones are also supported.
(Note that it takes some time to sense and react, so take this latency into account when using it to play a game.)
In India, makers are called jugaad (DIY).
In India, each state has a different language.
Places for makers: makergram.com, Raspberry Pi Jam Kochi.
There are many competitions and ongoing projects on hackster.io.
Only two or three jumps were done in the game during the live demo.
With only two states, idle and jump, the model is quite accurate; there is a huge gap between the idle and jump data, so accuracy is close to 100%.
Edge Impulse works like sharecropping: it designates the supported devices and has everyone feed in data.
The Wio Terminal is a designated device; the RTL8720 handles connectivity, and the computation runs on a separate ATSAMD51P19.
The FFT transform is done by Edge Impulse.
A limitation of the model: the X/Y/Z axes are not normalized, so wearing the device at a different angle each time gives different results.
A typical 6-axis IMU provides angular velocity and acceleration.
Don't rely on the magnetometer; for example, it cannot be used in accelerating environments such as inside a high-speed train.
(YChao: Inertial measurement units (IMUs) come in three kinds: 3-axis accelerometer; 6-axis accelerometer + gyroscope/angular-rate sensor; 9-axis accelerometer + gyroscope + magnetometer.
3-axis can only handle static and simple dynamic cases; 6-axis gives better dynamic resolution but suffers from drift; 9-axis allows absolute orientation but is easily affected by environmental magnetism.)
20:24:44 伴伴學 Elton : Is edge impulse where machine learning takes place?
20:25:53 Accomdemy(Zhu Qi) : yes
20:30:20 伴伴學 Elton : The results can be used on mobile phones too, not? (yes)
20:30:23 Salman Faris : https://makergram.com/blog/play-chromes-dino-game-physically/
20:31:08 伴伴學 Elton : Ok understood thanks
20:38:11 伴伴學 Elton : The freq has to match the device's spec? (yes)
20:41:06 伴伴學 Elton : Yes tks
20:45:14 伴伴學 Elton : I take the device has to register its sensors with edge impulse. If I make my own device, will I be able to register my sensors with edge impulse?
20:45:30 伴伴學 Elton : I take it that ...
20:47:35 Salman Faris : https://docs.edgeimpulse.com/docs/porting-guide
20:47:56 伴伴學 Elton : Tks
20:51:26 伴伴學 Elton : I suppose this is an app of reinforcement learning that you will also need the game play responses from chrome Dino for it to complete the learning?
20:51:56 (Accomdemy) Paul Hsu : Good question
20:52:19 (Accomdemy) Paul Hsu : do you mean collecting real time streaming data from the game?
20:53:06 伴伴學 Elton : I suppose you can just shake it up and down with your hand for demo purpose
20:53:25 [伴]松凌(Joe Hsieh) : Data graph have 2-3 color lines. It depend on model type?
20:53:57 Will Who : That's the way they "cheat" Wii sport games
20:54:07 伴伴學 Elton : :) thought u could take it easy
21:03:37 伴伴學 Elton : Do the 33 features correspond to various value point of the sensor and by x y z?
21:04:27 伴伴學 Elton : oktks
21:04:41 Accomdemy(Zhu Qi) : it's a combination of data collected from xyz axis