# Get Started: Learning Basics about uTensor

Welcome! We are excited to have you here. This is the first chapter of our tutorial series for new contributors. As you go through the tutorial, you will learn how this project works and get familiar with the basics of contributing to uTensor.

[TOC]

- How to build
- Testing / CI
- Tensor Interface & Tensor
- Operators
- Memory Management

## What is uTensor?

uTensor is a lightweight machine learning inference framework built on TensorFlow and optimized for Arm targets. It consists of a runtime library and an offline tool that handles most of the model translation work. The core runtime is remarkably compact, weighing in at only ~2 KB! See more on the project's page: [uTensor: TinyML AI inference library](https://github.com/uTensor/uTensor)

### Basic uTensor Workflow

Given models trained in modern deep learning frameworks such as Keras, TensorFlow, or PyTorch, you can convert them with the uTensor code generator into a `.cpp` file and a `.hpp` file. These files contain the generated C++11 code required to run inference on the model. Working with uTensor on the embedded side is straightforward: copy the generated files into your embedded project, and you are ready to go. This streamlined workflow makes it easy to deploy complex machine-learning models on resource-constrained devices.

uTensor aims to bring efficient machine learning inference to embedded systems by providing a minimalistic yet powerful framework. Some key ideas are:

- Lightweight runtime suitable for constrained environments.
- Low total power consumption for efficiency.
- Low static and dynamic memory footprint.
- Simple workflow for converting and deploying models on embedded devices.
- Runtime memory usage guaranteed within limits at code-gen or compile time.
- Clear, concise, and debuggable code.
- Higher-level language behavior with C++ speed.
- Specialized operators that access raw data blocks for speed.
- Easy-to-use API for adding custom functionality.

### Main Components
1. [**uTensor Runtime**](https://github.com/uTensor/uTensor/tree/master): Handles inference on the embedded device.
   - Core: Basic data structures, interfaces, and types.
   - Library: Default implementations built on top of the uTensor Core.
2. [**uTensor Code Generator (cgen)**](https://github.com/uTensor/utensor_cgen): Converts trained TensorFlow models into C++ code for inference.

## Quick Build & Run

```bash
git clone git@github.com:uTensor/uTensor.git
cd uTensor/
git checkout desktop
git submodule init
git submodule update
mkdir build
cd build/
cmake -DPACKAGE_TESTS=ON ..
make
make test
```

If you see something like this, then you are good to go. If not, don't panic; we will talk more about building in the next chapter.

```text
# 100% tests passed, 0 tests failed out of 351
# Total Test time (real) = 6.07 sec
```

---

#### Next

- [How to build](https://)