
Problem

  • Design a CFU for the MLPerf™ Tiny image classification benchmark model, targeting lower latency.
  • Your design will be benchmarked by the MLPerf™ Tiny Benchmark Framework. Here is its GitHub page for detailed information about MLPerf™ Tiny.

Selected model

  • MLPerf™ Tiny Image Classification Benchmark Model is a tiny version of ResNet.
    • It consists of Conv2D, Add, AvgPool2D, FC, and Softmax.
  • You don't need to integrate the model yourself; it is already included in CFU-Playground.
    • See ${CFU_ROOT}/common/src/models/mlcommons_tiny_v01/imgc/
  • You can inspect the architecture of the selected model with Netron.
    • Upload the model and you will see a clear computation graph with information about operators, tensors, and the dependencies between them.
    • It might give you some inspiration for your design.

Setup

  • Clone this fork of CFU to get the final project template
    • Final project template path: ${CFU_ROOT}/proj/AAML_final_proj
  • Accuracy and Latency are evaluated by the provided evaluation script
    • Script path: ${CFU_ROOT}/proj/AAML_final_proj/eval_script.py
    • Dependency: `pip install pyserial tqdm`

Requirement

  • Files that you can modify
    • Kernel API
      1. tensorflow/lite/micro/kernels/add.cc
      2. tensorflow/lite/micro/kernels/conv.cc
      3. tensorflow/lite/micro/kernels/fully_connected.cc
    • Kernel implementation
      1. tensorflow/lite/kernels/internal/reference/integer_ops/add.h
      2. tensorflow/lite/kernels/internal/reference/integer_ops/conv.h
      3. tensorflow/lite/kernels/internal/reference/integer_ops/fully_connected.h
    • HW design
      1. cfu.v
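
A common split is to implement a custom instruction in cfu.v and invoke it from the kernel headers listed above. The sketch below is only an illustration, not the reference design: it shows how the inner int8 multiply-accumulate loop of integer_ops/conv.h could hand work to the CFU. The cfu_op0() macro comes from CFU-Playground's cfu.h; the funct7 encodings and the 4-lane SIMD semantics are assumptions that must match whatever you actually build in cfu.v.

```cpp
#include <cstdint>
#include <cstring>

#include "cfu.h"  // CFU-Playground's cfu_op* macros

// Hypothetical 4-lane int8 MAC. The funct7 values are placeholders:
//   funct7 == 1 : latch the input offset inside the CFU
//   funct7 == 0 : return sum over 4 lanes of filter[i] * (input[i] + offset)
static inline int32_t cfu_simd_mac(const int8_t* input, const int8_t* filter,
                                   int depth, int32_t input_offset) {
  cfu_op0(1, static_cast<uint32_t>(input_offset), 0);
  int32_t acc = 0;
  int d = 0;
  for (; d + 4 <= depth; d += 4) {
    uint32_t in_word, filt_word;
    memcpy(&in_word, input + d, 4);    // pack four int8 lanes per operand
    memcpy(&filt_word, filter + d, 4);
    acc += cfu_op0(0, in_word, filt_word);
  }
  for (; d < depth; ++d) {             // scalar tail when depth % 4 != 0
    acc += filter[d] * (input[d] + input_offset);
  }
  return acc;
}
```

More aggressive designs (e.g., streaming whole input/filter tiles into a systolic array, as in HW2) follow the same pattern: the C++ kernel packs operands and issues cfu_op* instructions, while cfu.v decodes funct7 to select the operation.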

No other source code in ${CFU_ROOT}/common/** and ${CFU_ROOT}/third_party/** should be overridden unless you ask for permission.

  • Your design should pass the golden test
    • After make prog && make load, enter 11g to run the golden test of the MLPerf Tiny imgc model
      • Make sure you are running imgc's golden test if multiple models are included
      • The golden test passes when the console prints the pass message
  • You can modify the architecture or the parameters of the selected model
    • The classification accuracy of your design will then be evaluated
    • Run python eval_script.py in ${CFU_ROOT}/proj/AAML_final_proj
      • --port {tty_path}: add this argument to select the correct serial port (default: /dev/ttyUSB1)
  • Improve the performance of your design to drive latency as low as possible
  • Accuracy and Latency are evaluated by the provided evaluation script
    • Usage:
      • make prog && make load → reboot LiteX → close litex-term → run the evaluation script

If you just want to know the latency of your design, it is easier to run a single test input than to go through the whole evaluation process.
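
A minimal sketch of timing one inference yourself, assuming the perf_get_mcycle() cycle-counter helper from CFU-Playground's perf.h is available; run_one_inference() is a hypothetical placeholder for however you invoke a single imgc test input:

```cpp
#include <cstdint>
#include <cstdio>

#include "perf.h"  // CFU-Playground cycle-counter helpers (assumed available)

extern void run_one_inference();  // hypothetical: runs one imgc test input

void time_one_inference() {
  uint32_t start = perf_get_mcycle();
  run_one_inference();
  uint32_t cycles = perf_get_mcycle() - start;
  printf("latency: %lu cycles\n", (unsigned long)cycles);
}
```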

Presentation

You will receive 0 points if you don't present your work.

  • 30%
  • You should give a presentation in the last class of this semester
  • Each team has at most 5 minutes to present
  • Your presentation should contain
    • The introduction of your design
      • SW
      • HW
    • (Optional) The implementation of your design
      • SW
      • HW
    • The evaluation of your design
      • Accuracy (if you modify the selected model)
      • Latency

Grading Policy

  • We will compare the performance of your design with our reference design, which is an implementation of HW2 and will not be released.
  • ACC won't be tested if you don't modify the model
  • $LAT_{TA}$: 154M cycles (2,036,000 μs)
  • All $ACC$ and $LAT$ values are measured by the provided evaluation script
  • Ranking will be released with everyone's evaluation result after the deadline.

Grading formula

  • Accuracy:
    $$\mathrm{GOLD} = \begin{cases} 1 & \text{if golden test passed} \\ 0 & \text{if golden test failed} \end{cases}$$
    $$\mathrm{ACC} = \min\left(\frac{\mathrm{ACC}_{student}}{\mathrm{ACC}_{ori}},\ 100\%\right)$$

Note that better ACC won't give you a better score!

  • Latency:
    $$\mathrm{LAT}_{base} = \min\left(80 \times \frac{\mathrm{LAT}_{TA}}{\mathrm{LAT}_{student}},\ 80\right)$$
    $$\mathrm{LAT}_{rank} = \min\left(20 \times \frac{\#students - Rank_{student}}{\#students},\ 20\right), \quad \text{where } Rank_{student} \in [0,\ \#students - 1]$$
  • Presentation:
    $$\mathrm{Present} = \begin{cases} -30 & \text{if you submit a plain implementation of Lab 2 with the same performance as the TA's} \\ 0 & \text{otherwise} \end{cases}$$
  • Final score:
    $$\mathrm{Score} = \mathrm{GOLD} \times \mathrm{ACC} \times (\mathrm{LAT}_{base} + \mathrm{LAT}_{rank}) + \mathrm{Present}$$
    (Highest score $= 1 \times 100\% \times (80 + 20) + 0 = 100$)
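
A worked example with made-up numbers: suppose your design passes the golden test (GOLD = 1), keeps the original model (ACC = 100%), runs in 308M cycles (twice $LAT_{TA}$), and $Rank_{student} = 10$ among 40 students. Then:

$$\mathrm{LAT}_{base} = \min\left(80 \times \frac{154\mathrm{M}}{308\mathrm{M}},\ 80\right) = 40, \quad \mathrm{LAT}_{rank} = \min\left(20 \times \frac{40 - 10}{40},\ 20\right) = 15$$

$$\mathrm{Score} = 1 \times 100\% \times (40 + 15) + 0 = 55$$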

Submission

  • Please fork my repo and push your work to it
    • If you use your own model
      • Put the pretrained model under ${CFU_ROOT}/proj/AAML_final_proj or somewhere else we can easily find it
      • Send us the link to your training/optimization script (GitHub repo or Google Drive) via email (yyliu.cs11@nycu.edu.tw)
        • Or you can put them in your final project repo and leave a message about where to find them in the README.md under your CFU project directory (this file)
  • Add the link to your fork and your presentation slides to this spreadsheet
  • Grading workflow will be:
    1. Clone your fork
    2. Apply your custom model if needed
    3. make prog && make load
    4. Run golden test
    5. Run evaluation script
    6. Record measurements