# MCUNet: Tiny Deep Learning on IoT Devices

###### tags: `TinyML`
###### source: NeurIPS 2020
###### paper: [link](https://arxiv.org/pdf/2007.10319.pdf)
###### slides: [link](https://hanlab.mit.edu/projects/tinyml/mcunet/assets/MCUNet-slides.pdf)
###### code: [link](https://github.com/mit-han-lab/tinyml)

## Introduction

The authors propose MCUNet, a system-algorithm co-design framework that enables ImageNet-scale deep learning on off-the-shelf microcontrollers. To cope with the scarce on-chip memory of microcontrollers, they jointly optimize the deep learning model design and the inference library to reduce memory usage.

## Background

Existing frameworks such as TensorFlow Lite Micro, CMSIS-NN, CMix-NN, and MicroTVM have several limitations:
1. Most frameworks rely on an interpreter to parse the network graph at runtime, which consumes a lot of SRAM and Flash and increases latency.
2. Optimization is performed at the layer level, which fails to exploit the overall network architecture to further reduce memory usage.

## MCUNet: System-Algorithm Co-Design

![](https://i.imgur.com/RIPAKcA.png)

- Traditional approaches
    1. Optimize the neural network with NAS (Figure (a)).
    2. Tune the library to maximize the inference speed for a given network, e.g. TVM (Figure (b)).
- This paper
    - **Jointly optimizes the NN architecture with TinyNAS and the inference scheduling with TinyEngine in the same loop** (Figure (c)).

### TinyNAS (optimizes the NN architecture): Two-Stage NAS for Tiny Memory Constraints

#### Automated search space optimization

![](https://i.imgur.com/hhVFs3C.png)

- TinyNAS generates different search spaces by scaling the input resolution and the model width.
- Revisiting the ProxylessNAS search space:
    - S = kernel size × expansion ratio × depth
- The search space is extended to cover a wide range of hardware capacities:
    - S' = kernel size × expansion ratio × depth × input resolution R × width multiplier W
- TinyNAS selects the best search space by analyzing the FLOPs CDF of different search spaces. The intuition is that a design space that is more likely to produce high-FLOPs models under the memory constraint offers higher model capacity, and is therefore more likely to achieve high accuracy. A minimal sketch of this selection step follows this list.
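To make the selection criterion concrete, below is a minimal Python sketch of FLOPs-CDF-based space selection. The `sample_config`, `flops`, `peak_memory`, and `model_size` helpers and all constants are hypothetical stand-ins, not the authors' implementation; the point is only the criterion: keep the space whose memory-feasible samples have the highest FLOPs.

```python
import random
import statistics

# Hypothetical search-space definition: each space fixes an input
# resolution R and width multiplier W; kernel size / expansion ratio /
# depth are sampled per model (simplified to a single block here).
SPACES = [{"R": r, "W": w} for r in (96, 128, 160, 176) for w in (0.3, 0.5, 0.7, 1.0)]

SRAM_LIMIT = 320 * 1024    # activation budget in bytes (illustrative)
FLASH_LIMIT = 1024 * 1024  # weight budget in bytes (illustrative)

def sample_config(space):
    """Draw one random sub-network configuration from a space."""
    return {
        "R": space["R"], "W": space["W"],
        "kernel": random.choice((3, 5, 7)),
        "expand": random.choice((3, 4, 6)),
        "depth": random.choice((2, 3, 4)),
    }

def flops(cfg):
    """Crude proportional stand-in for a real FLOPs counter."""
    return cfg["R"] ** 2 * cfg["W"] * cfg["kernel"] ** 2 * cfg["expand"] * cfg["depth"]

def peak_memory(cfg):
    """Crude stand-in for the measured peak activation memory."""
    return int(cfg["R"] ** 2 * 3 * cfg["W"] * cfg["expand"])

def model_size(cfg):
    """Crude stand-in for the int8 weight size."""
    return int(1e5 * cfg["W"] * cfg["kernel"] ** 2 * cfg["depth"])

def mean_satisfying_flops(space, n=1000):
    """Mean FLOPs over sampled models that fit the memory budget.

    The paper's criterion: a space whose feasible models have higher
    FLOPs (a right-shifted FLOPs CDF) likely has higher capacity.
    """
    ok = [flops(c) for c in (sample_config(space) for _ in range(n))
          if peak_memory(c) <= SRAM_LIMIT and model_size(c) <= FLASH_LIMIT]
    return statistics.mean(ok) if ok else 0.0

best = max(SPACES, key=mean_satisfying_flops)
print("chosen space:", best)
```

In the paper the same idea is read off the CDF curves directly; comparing the mean FLOPs of feasible samples is just one simple way to operationalize it.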
#### Resource-constrained model specialization

- After optimizing the search space, they perform one-shot neural architecture search: they train one super network that contains all possible sub-networks, then run an evolution search to find the best model that meets the on-board resource constraints while achieving the highest accuracy.
- For each sampled network, they use TinyEngine to optimize the memory scheduling and measure the optimal memory usage.

### TinyEngine (optimizes inference scheduling): A Memory-Efficient Inference Library

#### From interpretation to code generation

###### TinyEngine
- TinyEngine offloads graph interpretation from runtime to compile time, and generates only the code that will be executed by the TinyNAS model.
- It compiles only the operations used by a given model into the binary.
- Since we have full control over what model to run, the generated code is fully specialized for TinyNAS models.

###### TF-Lite Micro
- Most existing inference libraries, such as TF-Lite, are interpreter-based. Interpreter-based libraries ease cross-platform development, but they require extra runtime memory.
- They prepare the code for every operation (e.g. conv, softmax) to support cross-model inference, even for operations that are never used, which introduces high redundancy.

#### Model-adaptive memory scheduling

![](https://i.imgur.com/zako3hP.png)

- The maximum memory M is sized to fit exactly one column of transformed inputs over all the layers ***L***:

![](https://i.imgur.com/6pCsDc6.png)

- For each layer L[j], TinyEngine tiles the computation loop nests so that as many columns as possible fit in that memory.
- Such adaptation makes full use of the available memory and increases input data reuse, reducing runtime overheads such as memory fragmentation and data movement. A minimal sketch of this buffer-sizing and tiling policy follows.
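The sketch below illustrates the policy under simplifying assumptions (square kernels, stride 1, int8 activations, im2col lowering); `ConvLayer` and `plan_im2col_buffer` are hypothetical names, not TinyEngine's API.

```python
from dataclasses import dataclass

@dataclass
class ConvLayer:
    # Hypothetical minimal layer description (not the paper's data structure).
    in_channels: int
    kernel: int              # assume square k x k kernels
    bytes_per_elem: int = 1  # int8 activations

    def column_bytes(self) -> int:
        """Memory for ONE column of the im2col-transformed input,
        i.e. the k*k*C_in patch that produces one output pixel."""
        return self.kernel * self.kernel * self.in_channels * self.bytes_per_elem

def plan_im2col_buffer(layers):
    """Model-adaptive scheduling sketch:
    1) size the shared buffer M so every layer fits at least one column;
    2) per layer, tile the loop nest to process as many columns as fit."""
    M = max(layer.column_bytes() for layer in layers)
    tiling = [M // layer.column_bytes() for layer in layers]  # columns per tile
    return M, tiling

# Example: one layer with a large single column dominates M, while the
# other layers can batch many columns into the same buffer (more reuse).
layers = [
    ConvLayer(in_channels=3,  kernel=3),
    ConvLayer(in_channels=16, kernel=3),
    ConvLayer(in_channels=96, kernel=5),  # largest single column -> sets M
    ConvLayer(in_channels=32, kernel=3),
]
M, cols = plan_im2col_buffer(layers)
print(f"shared buffer M = {M} bytes; columns per tile per layer = {cols}")
```

Because M is chosen over the whole network rather than per layer, layers with small columns automatically get deep tiles, which is exactly the model-level information a layer-wise optimizer cannot see.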
#### Computation kernel specialization

- TinyEngine specializes the kernel optimizations for different layers.
- Loop tiling is based on the kernel size and the available memory, which differ from layer to layer.
- Loop unrolling is specialized for different kernel sizes (e.g. 9 repeated code segments for a 3×3 kernel and 25 for a 5×5 kernel) to eliminate branch instruction overheads.
- Operation fusion is performed for Conv+Padding+ReLU+BN layers (fused as Pad+Conv+ReLU+BN).

#### In-place depth-wise convolution

![](https://i.imgur.com/sVSIfCv.png)

- (a) A conventional depth-wise convolution requires a 2N memory footprint for activations. (b) The output activation of the first channel is stored in a temporary buffer. Then, for each following channel, the output activation overwrites the input activation of its previous channel. Finally, the output activation of the first channel, stored in the buffer, is written back to the input activation of the last channel.
- Because depth-wise convolution processes each channel independently, the input activation of a channel can be overwritten and reused to store the output activation of another channel, as the sketch after the figure below illustrates.

![](https://i.imgur.com/38pAe0c.png)
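A minimal NumPy sketch of the buffer-rotation trick described above (helper names are hypothetical; 'same' padding and stride 1 are assumed for simplicity):

```python
import numpy as np

def dw_conv_channel(x, w):
    """Depth-wise convolution of ONE channel (valid padding, stride 1).
    x: (H, W) input plane; w: (k, k) kernel -> (H-k+1, W-k+1) output."""
    k = w.shape[0]
    H, W = x.shape
    out = np.empty((H - k + 1, W - k + 1), dtype=x.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

def inplace_depthwise(x, weights):
    """In-place depth-wise conv sketch: the activation buffer is reused.

    x: (C, H, W) activation buffer, overwritten with the outputs;
    weights: (C, k, k), k odd. Channel c's output overwrites channel
    c-1's (already consumed) input plane, so only ONE extra plane (the
    temporary buffer) is needed instead of a full N-sized output tensor.
    The result comes out rotated by one channel slot, matching the
    paper's figure; a following 1x1 conv can absorb the rotation.
    """
    C, H, W = x.shape
    pad = weights.shape[1] // 2
    def conv_same(plane, w):
        return dw_conv_channel(np.pad(plane, pad), w)

    buf = conv_same(x[0], weights[0])           # first output -> temp buffer
    for c in range(1, C):
        x[c - 1] = conv_same(x[c], weights[c])  # overwrite consumed input
    x[C - 1] = buf                              # write first output back
    return x

# Usage: peak extra activation memory is a single (H, W) plane.
x = np.random.randn(4, 8, 8).astype(np.float32)
w = np.random.randn(4, 3, 3).astype(np.float32)
y = inplace_depthwise(x.copy(), w)
```

The safety argument is the one from the paper: channel c only ever reads input plane c, so plane c-1 is dead by the time it is overwritten.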
## Experiments

#### Model deployment
- They perform int8 linear quantization to deploy the model.

### Large-Scale Image Recognition on Tiny Devices

#### Co-design brings better performance

![](https://i.imgur.com/FxT8wET.png)

- When running on a tight budget of 320KB SRAM and 1MB Flash, the optimal scaling of MobileNetV2 and ProxylessNAS models only achieves 35.2% and 49.5% top-1 accuracy on ImageNet using CMSIS-NN.
- With TinyEngine, we can fit larger models that achieve higher accuracy (47.4% and 56.4%); with TinyNAS, we can specialize a more accurate model under the tight memory constraints and achieve 55.5% top-1 accuracy.
- With both TinyNAS and TinyEngine, MCUNet further advances the accuracy to 61.8%, showing the advantage of joint optimization.

![](https://i.imgur.com/7DdYtTy.png)

- Co-design improves the performance under various latency constraints. TinyEngine accelerates inference, achieving higher accuracy at the same latency constraint.

#### Diverse hardware constraints & lower bit precision

![](https://i.imgur.com/QddGWUT.png)

- They used int8 linear quantization for both weights and activations, as it is the industrial standard for faster inference and usually has negligible accuracy loss without fine-tuning.
- They also performed 4-bit linear quantization on ImageNet, which can fit a larger number of parameters.

![](https://i.imgur.com/uA4v5WD.png)

- Compared to ResNet-18 and MobileNetV2-0.75 (both in 8-bit), which achieve a similar ImageNet accuracy, MCUNet reduces the memory usage by 3.5× and the Flash usage by 5.7× to fit the tiny memory size of microcontrollers.

### Visual & Audio Wake Words

![](https://i.imgur.com/dElclPW.png)

- They benchmarked the performance on two wake-word datasets, Visual Wake Words (VWW) and Google Speech Commands (GSC), to compare the accuracy-latency and accuracy-peak memory trade-offs. They compared against the optimally scaled MobileNetV2 and ProxylessNAS running on TF-Lite Micro, as well as the previous first-place solution of the VWW challenge, whose input resolution they scaled to tightly fit the 320KB memory constraint before re-training it under the same setting as theirs.
- MCUNet achieves 2.4× faster inference speed compared to the previous state-of-the-art. Interestingly, the previous model has a much smaller peak memory usage than the biggest MobileNetV2 and ProxylessNAS models while having a higher computation latency, which shows that a smaller peak memory is the key to success on microcontrollers.
- On the Speech Commands dataset, MCUNet achieves a higher accuracy at 2.8× faster inference speed and 4.1× smaller peak memory. It achieves 2% higher accuracy than the largest MobileNetV2, and a 3.3% improvement over the largest runnable ProxylessNAS under the 256KB SRAM constraint.

### Object Detection on MCUs

![](https://i.imgur.com/lJ8bFbc.png)

- MCUNet improves the detection mAP by 20% on Pascal VOC under the 512KB SRAM constraint. With MCUNet, we can fit a model with much larger capacity and more computation FLOPs at a smaller peak memory.
- MobileNetV2 + CMSIS-NN is bounded by memory consumption: it can only fit a model with 34M FLOPs even when the peak memory slightly exceeds the budget, leading to inferior detection performance.
- They used YOLOv2 as the detector; more advanced detectors like YOLOv3 use multi-scale feature maps to generate the final prediction, which have to keep intermediate activations in SRAM, increasing the peak memory by a large margin.

### Analysis

#### Search space optimization matters

![](https://i.imgur.com/Rlh54Qu.png)

- They sample several search spaces from the top-10 search spaces and perform the whole neural architecture search process to find the best model inside each space that can fit 320KB SRAM / 1MB Flash.
- A huge space that supports variable resolutions and variable width multipliers contains the best space, yet it fails to reach good performance.

![](https://i.imgur.com/B9sOJNu.png)

- A search space with higher mean FLOPs leads to higher final accuracy.

#### Per-block peak memory analysis

![](https://i.imgur.com/4Xpm5B3.png)

They plot the per-block activation size (not including other runtime buffers) of the first two stages, which have the biggest activation size and form the memory bottleneck.
- MobileNetV2 has a highly imbalanced peak activation size: to scale down the network and fit the SRAM constraint, the other blocks are forced to a very small capacity.
- MCUNet, searched by TinyNAS, has a more balanced peak memory size, leading to an overall higher network capacity. The memory allocation is discovered automatically when TinyNAS optimizes the accuracy/memory trade-off, without human heuristics on the memory distribution.

#### Sensitivity analysis on search space optimization

![](https://i.imgur.com/vMxTqT9.png)

- Generally, with a larger SRAM to store a larger activation map, we can use a higher input resolution; with a larger Flash to store a larger model, we can use a larger width multiplier.
- When we increase the SRAM and keep the Flash fixed from point 1 to point 2, the width does not increase because the Flash is small, while the resolution increases because the larger SRAM can host a larger activation.
- From point 1 to point 3, the width increases and the resolution actually decreases, because a larger Flash hosts a wider model, but we need to scale down the resolution to fit the small SRAM.

#### Evolution search

![](https://i.imgur.com/PpKqebB.png)

The solid line represents the average value, while the shaded region shows the (min, max) accuracy range.
- On TinyEngine, evolution clearly outperforms random search, with a 1% higher best accuracy.
- Evolution on CMSIS-NN leads to much worse results due to memory inefficiency: the library can only host a smaller model than TinyEngine, which leads to lower accuracy.

## Conclusion

1. They propose MCUNet to jointly design the neural network architecture and the inference library, making memory usage efficient.
2. MCUNet does not appear to support multiple models on the same system.

## Comparison with TF-Lite Micro

1. TF-Lite Micro supports multiple models on the same embedded system.
2. It implements operation fusion when converting TensorFlow into TensorFlow Lite ([source](https://www.tensorflow.org/lite/convert/operation_fusion)):
> 1. Loop through all functions in the MLIR module.
> 2. If a function has the tf._implements attribute, based on the attribute value, call the appropriate operation fusion utility.
> 3. The operation fusion utility operates on the function's operands and attributes (which serve as the interface for the conversion) and replaces the body of the function with an equivalent function body containing the fused operation.
> 4. In many cases, the replaced body will contain operations other than the fused operation. These correspond to some static transforms on the function's operands in order to obtain the operands of the fused operation. Since these computations can all be constant-folded away, they would not be present in the exported flatbuffer, where only the fused operation would exist.
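As a concrete, simplified example of the kind of fusion that can be constant-folded at conversion time, here is a NumPy sketch of folding batch normalization into the preceding convolution's weights and bias. This is a classic transform chosen for illustration (and related to TinyEngine's Conv+ReLU+BN fusion above); it is not TF-Lite's actual MLIR pass, and all names are hypothetical.

```python
import numpy as np

def fold_bn_into_conv(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm parameters into conv weights and bias.

    w: (C_out, C_in, k, k) conv weights; b: (C_out,) conv bias.
    BN computes y = gamma * (x - mean) / sqrt(var + eps) + beta, so the
    per-output-channel scale and shift can be baked into w and b once,
    at conversion time, leaving a single conv op at runtime.
    """
    scale = gamma / np.sqrt(var + eps)         # (C_out,)
    w_folded = w * scale[:, None, None, None]  # scale each output channel
    b_folded = (b - mean) * scale + beta
    return w_folded, b_folded

# Illustrative setup with random parameters: after folding,
# conv(x, wf) + bf == BN(conv(x, w) + b) for any input x.
rng = np.random.default_rng(0)
C_out, C_in, k = 8, 4, 3
w = rng.normal(size=(C_out, C_in, k, k)); b = rng.normal(size=C_out)
gamma, beta = rng.normal(size=C_out), rng.normal(size=C_out)
mean, var = rng.normal(size=C_out), rng.uniform(0.5, 2.0, size=C_out)
wf, bf = fold_bn_into_conv(w, b, gamma, beta, mean, var)
```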
