# 99. TinyML exercises with Edge Impulse
###### tags: `corsounipd2022`
This section proposes a few exercises to gain hands-on experience with a tool like Edge Impulse for designing TinyML applications. As the end device, we will use a smartphone.
To get started, all you need is an [Edge Impulse account](https://studio.edgeimpulse.com/) and a modern smartphone.
## Adding sight to your sensors
> https://docs.edgeimpulse.com/docs/image-classification
In this tutorial we'll build a model that can distinguish between two objects - we've used number 1 and number 5, but feel free to pick two other objects.
### Setting up the project
You first have to create a project by clicking on `Create new project`:
![](https://i.imgur.com/k2VIDQO.png =200x)
You will have to assign a name and decide what type of data you are dealing with.
![](https://i.imgur.com/0NOgbLK.png =400x)
In this case we use images, and `Classify a single object`:
![](https://i.imgur.com/lbCUkCK.png =400x)
Since we want to use our smartphone as the device, we select `Connect a development board`:
![](https://i.imgur.com/Wwy3gwD.png =300x)
We go now to `Devices` and click on `+ Connect a new device`:
![](https://i.imgur.com/hRTPxhU.png =400x)
and choose `Use your mobile phone`:
![](https://i.imgur.com/zKaD0aA.png)
You then scan the QR code that appears on the screen and get:
![](https://i.imgur.com/101a0KW.png =300x)
Do not collect images yet; once connected, the `Devices` section will show something like this:
![](https://i.imgur.com/9gsS6gW.png =400x)
### Building a dataset
To make your machine learning model see, it's important to capture a lot of example images of your two objects. During training, these examples are used to teach the model to distinguish between them.
Capture the following amount of data - make sure you capture a wide variety of angles and zoom levels:
* ... images of number `1`.
* ... images of number `5`.
* ... images of neither number `1` nor number `5` - make sure to capture a wide variation of random objects.
The indication `...` means... as many as possible :smile: More details on how to create the dataset [can be found here](https://docs.edgeimpulse.com/docs/image-classification-mobile-phone).
So go to the section `Data acquisition` and click on "Let's collect some data":
![](https://i.imgur.com/2gFJqgm.png)
You will have to select your phone again, start taking pictures, and label each picture.
Afterwards you should have a dataset listed under Data acquisition in your Edge Impulse project. You can switch between your training and testing data with the two buttons above the 'Data collected' widget.
![](https://i.imgur.com/GxGlNoT.jpg)
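The phone flow above is the quickest way to collect data, but you can also upload existing images programmatically through Edge Impulse's ingestion service. Below is a minimal sketch with Python's `requests`; the `/api/training/files` endpoint and the `x-label` header are assumptions taken from the ingestion API docs, so verify them there before relying on this:

```python
import requests

API_KEY = "ei_..."  # your project's API key (Dashboard > Keys)

# Assumed endpoint and headers based on the Edge Impulse ingestion docs;
# check https://docs.edgeimpulse.com before relying on them.
with open("number_5.jpg", "rb") as f:
    r = requests.post(
        "https://ingestion.edgeimpulse.com/api/training/files",
        headers={"x-api-key": API_KEY, "x-label": "5"},
        files={"data": ("number_5.jpg", f, "image/jpeg")},
    )
print(r.status_code, r.text)
```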
### Designing an impulse
With the training set in place you can design an impulse. An impulse takes the raw data, adjusts the image size, uses a preprocessing block to manipulate the image, and then uses a learning block to classify new data. Preprocessing blocks always return the same values for the same input (e.g. convert a color image into a grayscale one), while learning blocks learn from past experiences.
For this tutorial we'll use the 'Images' preprocessing block. This block takes in the color image, optionally makes the image grayscale, and then turns the data into a features array. If you want to do more interesting preprocessing steps - like finding faces in a photo before feeding the image into the network - see the [Building custom processing blocks tutorial](https://docs.edgeimpulse.com/docs/custom-blocks). Then we'll use a 'Transfer Learning' learning block, which takes in all the images and learns to distinguish between the three classes (number 1, number 5, 'other').
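To make the preprocessing step concrete, here is a minimal Python sketch of what an 'Images'-style block boils down to. The `preprocess` function and the file name are ours for illustration, not Edge Impulse code:

```python
import numpy as np
from PIL import Image

def preprocess(path, width=96, height=96, grayscale=False):
    """Mimic an 'Images'-style block: resize, optionally convert to
    grayscale, and flatten the pixels into a feature array in [0, 1]."""
    img = Image.open(path).resize((width, height))
    img = img.convert("L" if grayscale else "RGB")
    pixels = np.asarray(img, dtype=np.float32) / 255.0
    return pixels.flatten()  # 96*96*3 = 27,648 features in RGB mode

features = preprocess("number_5.jpg")
print(features.shape)  # (27648,)
```

Note how this is fully deterministic: the same image always yields the same feature array, which is exactly the property of preprocessing blocks mentioned above.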
In the studio go to Create impulse, set the image width and image height to 96, and add the 'Images' and 'Transfer Learning (Images)' blocks. Then click Save impulse.
![](https://i.imgur.com/TEdvKg8.jpg)
### Configuring the processing block
To configure your processing block, click `Images` in the menu on the left. This will show you the raw data on top of the screen (you can select other files via the drop down menu), and the results of the processing step on the right. You can use the options to switch between 'RGB' and 'Grayscale' mode, but for now leave the color depth on 'RGB' and click Save parameters.
![](https://i.imgur.com/d0D072x.jpg)
This will send you to the 'Feature generation' screen. In here you'll:
* Resize all the data.
* Apply the processing block on all this data.
* Create a 3D visualization of your complete dataset.
Click **Generate features** to start the process.
Afterwards the 'Feature explorer' will load. This is a plot of all the data in your dataset. Because images have a lot of dimensions (here: 96x96x3=27,648 features) we run a process called 'dimensionality reduction' on the dataset before visualizing it. Here the 27,648 features are compressed down to just 3 and then clustered based on similarity. Even though we have little data, you can already see some clusters forming, and you can click on the dots to see which image belongs to which dot.
![](https://i.imgur.com/qKPzez1.jpg)
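Edge Impulse does not document the exact reduction algorithm the feature explorer uses, so purely as an illustration, this sketch applies scikit-learn's PCA to compress 27,648-dimensional image features down to 3, the same idea the explorer visualizes:

```python
import numpy as np
from sklearn.decomposition import PCA

# X: one row per image, one column per feature (27,648 each), e.g. as
# produced by the preprocess() sketch above. Random stand-in data here.
X = np.random.rand(60, 96 * 96 * 3).astype(np.float32)

pca = PCA(n_components=3)
X3d = pca.fit_transform(X)  # each image becomes a point in 3D space
print(X3d.shape)            # (60, 3); similar images end up close together
```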
### Configuring the transfer learning model
With all data processed it's time to start training a neural network. The network that we're training here will take the image data as an input, and try to map this to one of the three classes.
It's very hard to build a good working computer vision model from scratch, as you need a wide variety of input data to make the model generalize well, and training such models can take days on a GPU. To make this easier and faster we are using transfer learning. This lets you piggyback on a well-trained model, only retraining the upper layers of a neural network, leading to much more reliable models that train in a fraction of the time and work with substantially smaller datasets.
To configure the transfer learning model, click Transfer learning in the menu on the left. Here you can select the base model (the one selected by default will work, but you can change this based on your size requirements), optionally enable data augmentation (images are randomly manipulated to make the model perform better in the real world), and the rate at which the network learns.
Set:
* Number of training cycles to 20.
* Learning rate to 0.0005.
* Data augmentation: enabled.
<!--
* Minimum confidence rating: 0.7.
-->
And click **Start training**. After the model is done you'll see accuracy numbers, a confusion matrix and some predicted on-device performance figures at the bottom. You have now trained your model!
![](https://i.imgur.com/USxXjsI.png)
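For reference, the following plain-Keras sketch mirrors what this configuration trains: a frozen, pre-trained base with a small trainable head, 20 training cycles, learning rate 0.0005, and simple augmentation. The choice of MobileNetV2 and of these particular augmentation layers are our assumptions for illustration, not Edge Impulse's exact internals:

```python
import tensorflow as tf

NUM_CLASSES = 3  # classes: number 1, number 5, 'other'

# Pre-trained base model (Edge Impulse's block uses MobileNet variants;
# MobileNetV2 at 96x96 stands in here).
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze: only the new top layers get trained

model = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 3)),
    tf.keras.layers.RandomFlip("horizontal"),           # data augmentation
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
    loss="categorical_crossentropy",
    metrics=["accuracy"])

# train_ds would be a tf.data.Dataset of (image, one-hot label) batches,
# e.g. built with tf.keras.utils.image_dataset_from_directory(...):
# model.fit(train_ds, epochs=20)  # 20 training cycles
```

Freezing the base is what makes transfer learning fast: only the small dense head is updated, so 20 epochs on a modest dataset are enough.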
### Validating your model
With the model trained let's try it out on some test data. When collecting the data we split the data up between a training and a testing dataset. The model was trained only on the training data, and thus we can use the data in the testing dataset to validate how well the model will work in the real world. This will help us ensure the model has not learned to overfit the training data, which is a common occurrence.
To validate your model, go to Model testing, select the checkbox next to 'Sample name' and click 'Classify selected' (or 'Classify all'). Here we hit 100% accuracy, which is great for a model with so little data.
To see a classification in detail, click the three dots next to an item, and select Show classification. This brings you to the Live classification screen with much more details on the file (if you collected data with your mobile phone you can also capture new testing data directly from here). This screen can help you determine why items were misclassified.
![](https://i.imgur.com/H827j6x.png)
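Under the hood, Model testing simply compares the model's predictions on the held-out set against the true labels. Here is a small sketch of the same computation with scikit-learn; the label arrays are stand-ins:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# y_true: labels of the held-out test set; y_pred: the model's predictions
# (e.g. model.predict(test_ds).argmax(axis=1) with the Keras sketch above).
y_true = [0, 0, 1, 1, 2, 2]  # stand-in values: 0='1', 1='5', 2='other'
y_pred = [0, 0, 1, 1, 2, 2]

print(accuracy_score(y_true, y_pred))    # 1.0 -> the "100%" above
print(confusion_matrix(y_true, y_pred))  # rows: true class, cols: predicted
```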
### Running the model on your device
With the impulse designed, trained and verified you can deploy this model back to your device. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package up the complete impulse - including the preprocessing steps, neural network weights, and classification code - in a single C++ library that you can include in your embedded software.
Since we are using a mobile phone you just have to click **Switch to classification mode** at the bottom of your phone screen.
![](https://i.imgur.com/Y3kuTPf.png =300x)
![](https://i.imgur.com/CvUj4Mf.png =300x)
![](https://i.imgur.com/RamXlKr.png =300x)
For other boards, or to **get the QR code for the smartphone**: click on Deployment in the menu. Then under 'Build firmware' select your development board and click Build. This will export the impulse and build a binary that runs on your development board in a single step. After building completes you will be prompted to download a binary. Save it on your computer.
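If, instead of firmware, you export the model itself (Edge Impulse can also produce a TensorFlow Lite file), running it locally looks roughly like the sketch below; the `model.tflite` path is a placeholder for whatever file you download:

```python
import numpy as np
import tensorflow as tf  # on-device you might use tflite_runtime instead

# "model.tflite" is a placeholder for a model exported from the project.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# One 96x96 RGB image, preprocessed the same way as during training
# (zeros used here as a stand-in input).
x = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]
print(scores)  # one score per class: '1', '5', 'other'
```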
## More exercises
A lot of other things can be done. Check for example this link:
https://tinyml.seas.harvard.edu/CRESTLEX3/schedule/3/creating/