# TinyML hands-on examples: Edge Impulse
Edge Impulse is the leading development platform for machine learning on edge devices. It is aimed at developers but is also used by enterprises.
[![](https://i.imgur.com/fmPZ6K4.jpg)](https://www.edgeimpulse.com)
It covers all the phases of an ML-based IoT project, from training to deployment:
[![](https://i.imgur.com/HFpOtE4.jpg)](https://studio.edgeimpulse.com/login)
Currently it can handle the [following hardware:](https://docs.edgeimpulse.com/docs/fully-supported-development-boards)
![](https://i.imgur.com/9HUgdaR.png =300x)
With other development boards, data can be collected using the [Data forwarder](https://docs.edgeimpulse.com/docs/cli-data-forwarder) or the [Edge Impulse for Linux SDK](https://docs.edgeimpulse.com/docs/edge-impulse-for-linux), and the model can be deployed back to the device by following the [Running your impulse locally](https://docs.edgeimpulse.com/docs/running-your-impulse-locally-1) tutorials. **You can also use your [mobile phone](https://docs.edgeimpulse.com/docs/using-your-mobile-phone).**
:::info
Click [here :mortar_board: to create an Edge Impulse account](https://studio.edgeimpulse.com/signup). Many of the CLI tools require you to log in to connect to the Edge Impulse Studio.
:::
## Connecting devices to Edge Impulse
### Edge Impulse CLI Installation
The Edge Impulse CLI is used to control local devices, act as a proxy to synchronise data for devices that don't have an internet connection, and to upload and convert local files. The CLI consists of seven tools, of which the most important are:
* `edge-impulse-daemon` - configures devices over serial, and acts as a proxy for devices that do not have an IP connection.
* `edge-impulse-uploader` - allows uploading and signing local files.
* `edge-impulse-data-forwarder` - a very easy way to collect data from any device over a serial connection and forward the data to Edge Impulse (a minimal example follows below).
* `edge-impulse-run-impulse` - shows the impulse running on your device.
Installation instructions are available here: https://docs.edgeimpulse.com/docs/cli-installation
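To give an idea of what the data forwarder expects: the device only has to print one sample per line over the serial port, with the values separated by commas (or tabs); the forwarder detects the sampling frequency and the number of axes automatically. Below is a minimal Arduino-style sketch, under the assumption that the board carries a LIS3DHTR accelerometer read with Seeed's library (as on the Wio Terminal used later on this page); adapt the sensor code to your own hardware.

```cpp
// Minimal data-forwarder sketch (illustrative; the sensor library is an assumption):
// print one reading per line, comma-separated, at a fixed rate. The host-side
// edge-impulse-data-forwarder tool picks this stream up from the serial port.
#include "LIS3DHTR.h"

#define FREQUENCY_HZ  62.5
#define INTERVAL_MS   (1000.0 / FREQUENCY_HZ)

LIS3DHTR<TwoWire> lis;

void setup() {
  Serial.begin(115200);
  lis.begin(Wire1);                                // Wire1: the Wio Terminal's internal I2C bus
  lis.setOutputDataRate(LIS3DHTR_DATARATE_100HZ);  // sample faster than we forward
}

void loop() {
  static unsigned long last_sample = 0;
  if (millis() - last_sample >= INTERVAL_MS) {
    last_sample = millis();
    Serial.print(lis.getAccelerationX()); Serial.print(',');
    Serial.print(lis.getAccelerationY()); Serial.print(',');
    Serial.println(lis.getAccelerationZ());
  }
}
```

Running `edge-impulse-data-forwarder` on the host then asks you to log in, pick a project, and name the axes, after which the board shows up as a device in the studio.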
:::success
Note that recent versions of Google Chrome and Microsoft Edge can connect directly to fully supported development boards, without the CLI. More on this later...
:::
### Ingestion service
> https://docs.edgeimpulse.com/reference/ingestion-api
The ingestion service is used to send new device data to Edge Impulse. It's available on both HTTP and HTTPS endpoints, and requires an API key to authenticate. Data needs to be sent in the [Edge Impulse Data Acquisition](https://docs.edgeimpulse.com/docs/data-acquisition-format) format, and is optionally signed with an HMAC key. Data with invalid signatures will still show up in the studio, but will be marked as such, and can be excluded from training.
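To give a feel for the format, here is a sketch of a minimal accelerometer payload in the JSON flavour of the Data Acquisition format, shown inside a C++ raw string literal. All values are placeholders, and the exact endpoint, headers (API key, file name, label) and HMAC signing procedure should be taken from the ingestion API reference.

```cpp
// Sketch of a minimal Data Acquisition payload (JSON variant). It would be POSTed
// to the ingestion service together with the project's API key; see the ingestion
// API reference for the exact endpoint and headers. All values are placeholders.
const char *payload = R"({
  "protected": { "ver": "v1", "alg": "HS256", "iat": 1625000000 },
  "signature": "<HMAC-SHA256 of the message, or zeros if you do not sign>",
  "payload": {
    "device_name": "wio-terminal-01",
    "device_type": "WIO_TERMINAL",
    "interval_ms": 16,
    "sensors": [
      { "name": "accX", "units": "m/s2" },
      { "name": "accY", "units": "m/s2" },
      { "name": "accZ", "units": "m/s2" }
    ],
    "values": [
      [ -9.81, 0.03, 1.21 ],
      [ -9.83, 0.04, 1.27 ]
    ]
  }
})";
```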
## An example with a "Wio Terminal"
> https://www.seeedstudio.com/Wio-Terminal-p-4509.html
The Wio Terminal is an ATSAMD51-based microcontroller board with both Bluetooth and Wi-Fi connectivity provided by a Realtek RTL8720DN.
![](https://i.imgur.com/mF02b7l.png)
The SAM D51 microcontroller series is targeted at general-purpose applications using the 32-bit **ARM® Cortex®-M4 processor** with Floating Point Unit (FPU), running up to 120 MHz, with up to 1 MB Dual Panel Flash with ECC and up to 256 KB of SRAM with ECC. The Wio Terminal integrates a 2.4” LCD screen, an onboard IMU (LIS3DHTR), a microphone, a buzzer, a microSD card slot, a light sensor, and an infrared emitter (IR 940 nm).
It is compatible with Arduino and MicroPython, but currently wireless connectivity is only supported by Arduino.
### Connecting to Edge Impulse
> For more details, [please also see here.](https://wiki.seeedstudio.com/Wio-Terminal-TinyML-EI-1/)
Connect the Wio Terminal to your computer. Enter bootloader mode by sliding the power switch twice quickly.
An external drive named `Arduino` should appear on your PC. Drag the downloaded [Edge Impulse uf2 firmware file](http://files.seeedstudio.com/wiki/Wio-Terminal-Edge-Impulse/res/EdgeImpulse.uf2) to the Arduino drive. Now the Edge Impulse firmware is loaded on the Wio Terminal!
From a command prompt or terminal, run:
```
edge-impulse-daemon
```
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with `--clean`.
![](https://i.imgur.com/HQvP9N8.png)
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
![](https://i.imgur.com/5EKvU2s.jpg)
:::info
With Chrome these steps can be avoided by using WebUSB:
![](https://i.imgur.com/TtFrSWN.png)
> see https://www.edgeimpulse.com/blog/collect-sensor-data-straight-from-your-web-browser
:::
## Continuous motion recognition
> https://docs.edgeimpulse.com/docs/continuous-motion-recognition
> https://wiki.seeedstudio.com/Wio-Terminal-TinyML-EI-2/
In this tutorial, you'll use machine learning to build a gesture recognition system that runs on a microcontroller. This is a hard task to solve using rule-based programming, as people don't perform gestures in exactly the same way every time. But machine learning can handle these variations with ease. You'll learn how to collect high-frequency data from real sensors, use signal processing to clean up data, build a neural network classifier, and deploy your model back to a device. At the end of this tutorial you'll have a firm understanding of how to apply machine learning on embedded devices using Edge Impulse.
### Collecting your first data
With your device connected we can collect some data. In the studio go to the **Data acquisition tab**. This is the place where all your raw data is stored, and - if your device is connected to the remote management API - where you can start sampling new data.
Under **Record new data**, select your device, set the label to `updown`, the sample length to 10000, the sensor to Built-in accelerometer and the frequency to 62.5Hz. This indicates that you want to record data for 10 seconds, and label the recorded data as `updown`. You can later edit these labels if needed.
![](https://i.imgur.com/1lIW3L9.png)
After you click Start sampling, move your device up and down in a continuous motion. In about twelve seconds the device should complete sampling and upload the file back to Edge Impulse. You'll see a new line appear under 'Collected data' in the studio. When you click it you'll see the raw data graphed out. As the accelerometer on the development board has three axes, you'll notice three different lines, one for each axis.
You'll get a graph like this:
![](https://i.imgur.com/lK5yF7q.png)
Machine learning works best with lots of data, so a single sample won't cut it. Now is the time to start building your own dataset. For example, use the following three classes, and record around 3 minutes of data per class:
* Idle - just sitting on your desk while you're working.
* Lateral - moving the device left and right.
* Updown - moving the device up and down.
Keep going until you get to this point:
![](https://i.imgur.com/1sOg1wW.jpg)
The tool warns us that we have too little data... but this is just an example...
![](https://i.imgur.com/MGIrOEB.png)
### Designing an Impulse
With the training set in place you can design an **impulse**. An impulse takes the raw data, slices it up in smaller windows, uses signal processing blocks to extract features, and then uses a learning block to classify new data. Signal processing blocks always return the same values for the same input and are used to make raw data easier to process, while learning blocks learn from past experiences.
For this example we'll use the '**Spectral analysis**' signal processing block. This block applies a filter, performs spectral analysis on the signal, and extracts frequency and spectral power data. Then we'll use a 'Neural Network' learning block, that takes these spectral features and learns to distinguish between the three (idle, lateral, updown) classes.
Go to Create impulse, set the window size to 2000 (you can click on the 2000 ms. text to enter an exact value), the window increase to 80, and add the 'Spectral Analysis' and 'Classification (Keras)' blocks. Then click Save impulse.
![](https://i.imgur.com/op7oqTN.jpg)
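To make the windowing concrete: at 62.5 Hz a 2000 ms window holds 125 samples per axis, and an 80 ms window increase shifts consecutive windows by 5 samples, so a single 10-second recording yields many overlapping training windows. The sketch below (plain C++, purely illustrative, not Edge Impulse code) shows this slicing:

```cpp
#include <cstdio>
#include <vector>

// Illustration of how an impulse slices raw data into overlapping windows.
// Parameters match the tutorial: 62.5 Hz sampling, 2000 ms window, 80 ms increase.
int main() {
    const double frequency_hz  = 62.5;
    const int window_ms        = 2000;
    const int increase_ms      = 80;
    const int window_samples   = static_cast<int>(window_ms   * frequency_hz / 1000);  // 125
    const int increase_samples = static_cast<int>(increase_ms * frequency_hz / 1000);  //   5

    // Pretend this is one 10-second, single-axis recording (625 samples).
    std::vector<double> recording(625, 0.0);

    int windows = 0;
    for (size_t start = 0; start + window_samples <= recording.size(); start += increase_samples) {
        // Each slice [start, start + window_samples) is fed to the spectral-analysis
        // block, which outputs one feature vector per window for the classifier.
        windows++;
    }
    std::printf("%d samples per window, %d windows from one recording\n", window_samples, windows);
    return 0;
}
```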
### Configuring the spectral analysis block
To configure your signal processing block, click **Spectral features** in the menu on the left. This will show you the raw data on top of the screen (you can select other files via the drop down menu), and the results of the signal processing through graphs on the right. For the spectral features block you'll see the following graphs:
* After filter - the signal after applying a low-pass filter. This will remove noise.
* Frequency domain - the frequency at which signal is repeating (e.g. making one wave movement per second will show a peak at 1 Hz).
* Spectral power - the amount of power that went into the signal at each frequency.
![](https://i.imgur.com/UJ4E0vW.jpg)
A good signal processing block will yield similar results for similar data. If you move the sliding window (on the raw data graph) around, the graphs should remain similar. Also, when you switch to another file with the same label, you should see similar graphs, even if the orientation of the device was different.
Once you're happy with the result, click Save parameters. This will send you to the 'Generate features' screen. In here you'll:
* Split all raw data up in windows (based on the window size and the window increase).
* Apply the spectral features block on all these windows.
Click **Generate features** to start the process.
![](https://i.imgur.com/z9ZoWZk.jpg)
Afterwards the 'Feature explorer' will load. This is a plot of all the extracted features against all the generated windows. You can use this graph to compare your complete data set. E.g. by plotting the height of the first peak on the X-axis against the spectral power between 0.5 Hz and 1 Hz on the Y-axis.
**A good rule of thumb is that if you can visually separate the data on a number of axes, then the machine learning model will be able to do so as well.**
![](https://i.imgur.com/QHJ3WS6.jpg)
### Configuring the neural network
With all data processed it's time to start training a neural network. Neural networks are a set of algorithms designed to recognize patterns. The network that we're training here will take the signal processing data as an input, and try to map this to one of the three classes.
So how does a neural network know what to predict? A neural network consists of layers of neurons, all interconnected, and each connection has a weight. One such neuron in the input layer would be the height of the first peak on the X-axis (from the signal processing block); and one such neuron in the output layer would be `updown` (one of the classes). When the neural network is defined, all these connections are initialized randomly, and thus the neural network will make random predictions. During training we then take all the raw data, ask the network to make a prediction, and then make tiny alterations to the weights depending on the outcome (this is why labeling raw data is important).
This way, after a lot of iterations, the neural network learns; and will eventually become much better at predicting new data. Let's try this out by clicking on NN Classifier in the menu.
Set **'Number of training cycles' to 1**. This will limit training to a single iteration. And then click Start training.
![](https://i.imgur.com/y6He6EI.jpg)
The figure shows the training performance after a single iteration. At the top is a summary of the network's accuracy, and in the middle a confusion matrix. This matrix shows where the network made correct and incorrect decisions. You can see that lateral is relatively easy to predict.
Now increase the 'Number of training cycles' to 50, for example, and you'll see performance go up. You've just trained your first neural network!
:::success
100% accuracy
You might end up with a 100% accuracy after training for 50 training cycles. This is not necessarily a good thing, as it might be a sign that the neural network is too tuned for the specific test set and might perform poorly on new data (overfitting). The best way to reduce this is by adding more data or reducing the learning rate.
:::
![](https://i.imgur.com/VnzDI5Q.jpg)
## Classifying new data
From the statistics in the previous step we know that the model works against our training data, but how well would the network perform on new data?
Click on **Live classification** in the menu to find out. Your device should (just as when you were collecting data) show as online under 'Classify new data'. Set the 'Sample length' to 5000 (5 seconds), click Start sampling and start doing movements. Afterwards you'll get a full report on what the network thought you did.
![](https://i.imgur.com/1HSPOau.jpg)
If the network performed great, fantastic! But what if it performed poorly? There could be a variety of reasons, but the most common ones are:
* There is not enough data. Neural networks need to learn patterns in data sets, and the more data the better.
* The data does not look like other data the network has seen before. This is common when someone uses the device in a way that you didn't add to the training set. You can add the current file to the training set by clicking ⋮, then selecting Move to training set. Make sure to update the label under 'Data acquisition' before training.
* The model has not been trained enough. Up the number of epochs to 200 and see if performance increases (the classified file is stored, and you can load it through 'Classify existing validation sample').
* The model is overfitting and thus performs poorly on new data. Try reducing the learning rate or add more data.
* The neural network architecture is not a great fit for your data. Play with the number of layers and neurons and see if performance improves.
**As you see there is still a lot of trial and error when building neural networks**, but we hope the visualizations help a lot. You can also run the network against the complete validation set through 'Model validation'. Think of the model validation page as a set of unit tests for your model!
---
## Deploying back to device
With the impulse designed, trained and verified you can deploy this model back to your device. This makes the model run without an Internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package up the complete impulse - including the signal processing code, neural network weights, and classification code - in a single C++ library that you can include in your embedded software.
To export your model, click on "Deployment" in the left menu, then either select the proper library or, under 'Build firmware', select your development board; finally, click **Build** at the bottom of the page.
In our case, we simply choose the Arduino library.
![](https://i.imgur.com/L2FNzMA.jpg)
![](https://i.imgur.com/Xi64S6h.png)
Clicking the **Build** button will export the impulse and build a binary that will run on your development board in a single step. After building is completed you'll get prompted to download a binary:
![](https://i.imgur.com/i1YDYht.png)
Save this on your computer. In our case the generated file is:
> `ei-mlwithwio-arduino-1.0.1.zip`.
To deploy it to the Wio Terminal, you first have to:
1. Add the Wio Terminal board to the Arduino IDE, see: https://wiki.seeedstudio.com/Wio-Terminal-Getting-Started/.
2. Extract the `ei-mlwithwio-arduino-1.0.1.zip` archive and place it in the Arduino libraries folder.
Finally, open Arduino IDE and open the shared project `movements_sense`:
![](https://i.imgur.com/gfxgVKP.png)
and:
![](https://i.imgur.com/q8BPW1q.png)
![](https://i.imgur.com/rMYjBx4.png)
![](https://i.imgur.com/Z8JrbiN.png)
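For orientation, an Arduino project built on the exported library typically follows the pattern below. This is a sketch under assumptions, not the exact `movements_sense` code: the generated header is named after the Edge Impulse project (here assumed to be `mlwithwio_inferencing.h`), and the accelerometer is read with Seeed's LIS3DHTR library.

```cpp
// Sketch of the usual inference loop with an Edge Impulse Arduino library
// (header name and sensor code are assumptions; adapt them to your project/board).
#include <mlwithwio_inferencing.h>   // generated header, named after the project
#include "LIS3DHTR.h"

LIS3DHTR<TwoWire> lis;
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback used by the SDK to read the raw window of data.
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

void setup() {
    Serial.begin(115200);
    lis.begin(Wire1);                                // built-in accelerometer
    lis.setOutputDataRate(LIS3DHTR_DATARATE_100HZ);
}

void loop() {
    // Fill one window with interleaved accX, accY, accZ samples.
    for (size_t i = 0; i < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; i += 3) {
        features[i]     = lis.getAccelerationX();
        features[i + 1] = lis.getAccelerationY();
        features[i + 2] = lis.getAccelerationZ();
        delay(EI_CLASSIFIER_INTERVAL_MS);
    }

    // Wrap the buffer and run the signal processing + neural network.
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_feature_data;

    ei_impulse_result_t result = { 0 };
    if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
        return;
    }

    // Print one score per class (idle, lateral, updown).
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        Serial.print(result.classification[ix].label);
        Serial.print(": ");
        Serial.println(result.classification[ix].value, 4);
    }
}
```

The `EI_CLASSIFIER_*` constants and `run_classifier()` come from the generated library and reflect the window size and classes configured in the studio.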
### How about using a smartphone?
If we choose "Smartphone" under 'Run your impulse directly':
![](https://i.imgur.com/f7EuEC6.png)
we get a QR code that will run the application on the smartphone... give it a try!!
You should get something like this:
{%youtube Pu2vzFJihSk%}
---
# More examples with Edge Impulse
This section proposes a few exercises to gain hands-on experience with a tool like Edge Impulse for designing TinyML applications. As the end device, we will use a smartphone.
To get started all you need is an [Edge Impulse account](https://studio.edgeimpulse.com/) and a modern smartphone.
## Adding sight to your sensors
> https://docs.edgeimpulse.com/docs/image-classification
In this tutorial we'll build a model that can distinguish between two objects - we've used number 1 and number 5, but feel free to pick two other objects.
### Setting up the project
You first have to create a project by clicking on `Create new project`:
![](https://i.imgur.com/k2VIDQO.png =200x)
You will have to assign a name and decide what type of data you are dealing with.
![](https://i.imgur.com/0NOgbLK.png =400x)
In this case we use images, and `Classify a single object`:
![](https://i.imgur.com/lbCUkCK.png =400x)
As the device we want to use our smartphone, so we select `Connect a development board`:
![](https://i.imgur.com/Wwy3gwD.png =300x)
We go now to `Devices` and click on `+ Connect a new device`:
![](https://i.imgur.com/hRTPxhU.png =400x)
and choose `Use your mobile phone`:
![](https://i.imgur.com/zKaD0aA.png)
Scan the QR code that appears on the screen with your phone, and you'll get:
![](https://i.imgur.com/101a0KW.png =300x)
Do not collect images yet; in the Devices section you will eventually get something like this:
![](https://i.imgur.com/9gsS6gW.png =400x)
### Building a dataset
To make your machine learning model "see", it's important that you capture a lot of example images of these objects. When training the model, these example images are used to teach the model to distinguish between them.
Capture the following amount of data - make sure you capture a wide variety of angles and zoom levels:
* ... images of number `1`.
* ... images of number `5`.
* ... images of neither number `1` nor number `5` - make sure to capture a wide variation of random objects.
The indication `...` means... as many as possible :smile: More details on how to create the dataset [can be found here](https://docs.edgeimpulse.com/docs/image-classification-mobile-phone)
So go to the section `Data acquisition` and click on "Let's collect some data":
![](https://i.imgur.com/2gFJqgm.png)
You will have to select your phone again and start taking pictures and labelling each picture.
Afterwards you should have a dataset listed under Data acquisition in your Edge Impulse project. You can switch between your training and testing data with the two buttons above the 'Data collected' widget.
![](https://i.imgur.com/GxGlNoT.jpg)
### Designing an impulse
With the training set in place you can design an impulse. An impulse takes the raw data, adjusts the image size, uses a preprocessing block to manipulate the image, and then uses a learning block to classify new data. Preprocessing blocks always return the same values for the same input (e.g. convert a color image into a grayscale one), while learning blocks learn from past experiences.
For this tutorial we'll use the 'Images' preprocessing block. This block takes in the color image, optionally makes the image grayscale, and then turns the data into a features array. If you want to do more interesting preprocessing steps - like finding faces in a photo before feeding the image into the network -, see the [Building custom processing blocks tutorial](https://docs.edgeimpulse.com/docs/custom-blocks). Then we'll use a 'Transfer Learning' learning block, which takes all the images in and learns to distinguish between the three (number 1, number 5, 'other') classes.
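Conceptually, this block just rescales the image and flattens its pixels into a feature array. A rough illustration in plain C++ (not the actual Edge Impulse implementation) of the grayscale option, assuming the 96x96 input size set in the next step and the standard luminance weights:

```cpp
#include <cstdint>
#include <vector>

// Illustration only: turn an already-resized 96x96 RGB image into a flat,
// normalized grayscale feature array (9,216 values; 27,648 if RGB is kept).
std::vector<float> to_features(const uint8_t *rgb, int width = 96, int height = 96) {
    std::vector<float> features;
    features.reserve(width * height);
    for (int i = 0; i < width * height; i++) {
        float r = rgb[3 * i], g = rgb[3 * i + 1], b = rgb[3 * i + 2];
        float gray = 0.299f * r + 0.587f * g + 0.114f * b;   // standard luminance weights
        features.push_back(gray / 255.0f);                    // scale to [0, 1]
    }
    return features;   // this vector is what the learning block sees as input
}
```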
In the studio go to Create impulse, set the image width and image height to 96, and add the 'Images' and 'Transfer Learning (Images)' blocks. Then click Save impulse.
![](https://i.imgur.com/TEdvKg8.jpg)
### Configuring the processing block
To configure your processing block, click `Images` in the menu on the left. This will show you the raw data on top of the screen (you can select other files via the drop down menu), and the results of the processing step on the right. You can use the options to switch between 'RGB' and 'Grayscale' mode, but for now leave the color depth on 'RGB' and click Save parameters.
![](https://i.imgur.com/d0D072x.jpg)
This will send you to the 'Feature generation' screen. In here you'll:
* Resize all the data.
* Apply the processing block on all this data.
* Create a 3D visualization of your complete dataset.
Click **Generate features** to start the process.
Afterwards the 'Feature explorer' will load. This is a plot of all the data in your dataset. Because images have a lot of dimensions (here: 96x96x3=27,648 features) we run a process called 'dimensionality reduction' on the dataset before visualizing this. Here the 27,648 features are compressed down to just 3, and then clustered based on similarity. Even though we have little data you can already see some clusters forming ..., and can click on the dots to see which image belongs to which dot.
![](https://i.imgur.com/qKPzez1.jpg)
### Configuring the transfer learning model
With all data processed it's time to start training a neural network. The network that we're training here will take the image data as an input, and try to map this to one of the three classes.
It's very hard to build a good working computer vision model from scratch, as you need a wide variety of input data to make the model generalize well, and training such models can take days on a GPU. To make this easier and faster we are using transfer learning. This lets you piggyback on a well-trained model, only retraining the upper layers of a neural network, leading to much more reliable models that train in a fraction of the time and work with substantially smaller datasets.
To configure the transfer learning model, click Transfer learning in the menu on the left. Here you can select the base model (the one selected by default will work, but you can change this based on your size requirements), optionally enable data augmentation (images are randomly manipulated to make the model perform better in the real world), and the rate at which the network learns.
Set:
* Number of training cycles to 20.
* Learning rate to 0.0005.
* Data augmentation: enabled.
<!--
* Minimum confidence rating: 0.7.
-->
And click **Start training**. After the model is done you'll see accuracy numbers, a confusion matrix and some predicted on-device performance on the bottom. You have now trained your model!
![](https://i.imgur.com/USxXjsI.png)
### Validating your model
With the model trained let's try it out on some test data. When collecting the data we split the data up between a training and a testing dataset. The model was trained only on the training data, and thus we can use the data in the testing dataset to validate how well the model will work in the real world. This will help us ensure the model has not learned to overfit the training data, which is a common occurrence.
To validate your model, go to Model testing, select the checkbox next to 'Sample name' and click Classify selected, or "Classify all". Here we hit 100% accuracy, which is great for a model with so little data.
To see a classification in detail, click the three dots next to an item, and select Show classification. This brings you to the Live classification screen with much more details on the file (if you collected data with your mobile phone you can also capture new testing data directly from here). This screen can help you determine why items were misclassified.
![](https://i.imgur.com/H827j6x.png)
### Running the model on your device
With the impulse designed, trained and verified you can deploy this model back to your device. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package up the complete impulse - including the preprocessing steps, neural network weights, and classification code - in a single C++ library that you can include in your embedded software.
Since we are using a mobile phone you just have to click **Switch to classification mode** at the bottom of your phone screen.
![](https://i.imgur.com/Y3kuTPf.png =300x)
![](https://i.imgur.com/CvUj4Mf.png =300x)
![](https://i.imgur.com/RamXlKr.png =300x)
For other boards or to **get the QR code for the smartphone**: click on Deployment in the menu. Then under 'Build firmware' select your development board, and click Build. This will export the impulse and build a binary that will run on your development board in a single step. After building is completed you'll be prompted to download a binary. Save this on your computer.
## More exercises
Many other things can be done. Check, for example, this link:
https://tinyml.seas.harvard.edu/CRESTLEX3/schedule/3/creating/