
Edge Computing with a CPU accelerator


Edge Computing


Why Edge Computing

  • Stability
    • e.g. self-driving cannot depend on a remote server being reachable
  • Inference speed
    • local shared memory vs. a round trip to an HTTP service
  • Low cost

ML platform

Intel: OpenVINO
Arm: Armnn


Armnn - ML platform

Neon: Arm's advanced Single Instruction Multiple Data (SIMD) extension; one instruction operates on several data lanes at once
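
To make that concrete, here is a minimal sketch (not in the original slides) of Neon data parallelism using the arm_neon.h intrinsics, processing four floats per instruction:

```cpp
// Add two float arrays four lanes at a time with Neon intrinsics.
// Build for an Arm target, e.g. with -mfpu=neon on 32-bit Arm.
#include <arm_neon.h>

void add_f32(const float* a, const float* b, float* out, int n) {
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);      // load 4 floats from a
        float32x4_t vb = vld1q_f32(b + i);      // load 4 floats from b
        vst1q_f32(out + i, vaddq_f32(va, vb));  // 4 additions in one instruction
    }
    for (; i < n; ++i) {
        out[i] = a[i] + b[i];                   // scalar tail for leftovers
    }
}
```

Armnn's CPU-accelerated backend runs its kernels on this kind of Neon instruction, which is why the Neon-enabled timing at the end of the deck is so much faster.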


Model inference with armnn on CPU

  1. Model selection
  2. Model transform
  3. Prepare dependent libraries
  4. Cross-compile the inference program
  5. Run model inference

Model selection

  • Inference speed and model size are the top priorities
| Model | Model size (MB) | Top-5 Acc (%) |
| --- | --- | --- |
| Inception-v2 | 44 | 95.22 |
| ShuffleNet-v2 | 9.2 | 88.32 |
| VGG16 | 527.9 | 91.21 |
| ResNet-18 | 44.7 | 89.29 |

Model transform

Armnn supports Caffe, TensorFlow, TFLite, and ONNX

Convert the model first if it was trained in another framework such as PyTorch or MXNet, e.g. export a PyTorch model to ONNX with torch.onnx.export and load the resulting .onnx file through Armnn's ONNX parser.


Prepare dependent libraries

File size is always a major issue on an SoC (System on a Chip)

Consider library size when choosing every dependency, for example:

  • Image preprocessing tool
    • OpenCV is an integrated library for computer vision, but it depends on many other libraries, e.g. ffmpeg
    • OpenCV: ~200 MB vs. stb_image: ~500 KB (see the sketch below)
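
As a concrete illustration (not in the original slides), a minimal decode with the single-header stb_image instead of OpenCV; the file name input.jpg is a placeholder:

```cpp
// Minimal JPEG/PNG decode with stb_image (single header, no extra deps).
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

#include <cstdio>

int main() {
    int w = 0, h = 0, c = 0;
    // Force 3 channels (RGB); returns 8-bit pixels or NULL on failure.
    unsigned char* pixels = stbi_load("input.jpg", &w, &h, &c, 3);
    if (!pixels) {
        std::fprintf(stderr, "decode failed: %s\n", stbi_failure_reason());
        return 1;
    }
    std::printf("decoded %dx%d image\n", w, h);
    // ... resize/normalize into the model's input tensor here ...
    stbi_image_free(pixels);
    return 0;
}
```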

Cross-compile the inference program

  • Cross-compiling: building code on one development host for a different target platform (e.g. an Arm SoC).
  • Cross-compile command:
    • arm-linux-gnueabi-gcc -o main main.c ...

Run model inference
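
The slide leaves this step without detail; below is a minimal sketch of one inference pass through the Armnn C++ API, assuming a TFLite model. The path shufflenet.tflite, the tensor names "input"/"output", and the tensor sizes are placeholders, and error handling is omitted:

```cpp
#include <armnn/ArmNN.hpp>
#include <armnnTfLiteParser/ITfLiteParser.hpp>

#include <utility>
#include <vector>

int main() {
    // Parse the converted model file (path is a placeholder).
    auto parser = armnnTfLiteParser::ITfLiteParser::Create();
    armnn::INetworkPtr network =
        parser->CreateNetworkFromBinaryFile("shufflenet.tflite");

    // Look up input/output bindings by tensor name (names are placeholders).
    auto inputBinding  = parser->GetNetworkInputBindingInfo(0, "input");
    auto outputBinding = parser->GetNetworkOutputBindingInfo(0, "output");

    // CpuAcc is the Neon-accelerated CPU backend.
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
    armnn::IOptimizedNetworkPtr optNet =
        armnn::Optimize(*network, {armnn::Compute::CpuAcc},
                        runtime->GetDeviceSpec());

    armnn::NetworkId netId;
    runtime->LoadNetwork(netId, std::move(optNet));

    // Preprocessed pixels go here; sizes must match the real model.
    std::vector<float> inputData(1 * 224 * 224 * 3);
    std::vector<float> outputData(1000);

    armnn::InputTensors inputTensors{
        {inputBinding.first,
         armnn::ConstTensor(inputBinding.second, inputData.data())}};
    armnn::OutputTensors outputTensors{
        {outputBinding.first,
         armnn::Tensor(outputBinding.second, outputData.data())}};

    // Run one inference pass on the CPU.
    runtime->EnqueueWorkload(netId, inputTensors, outputTensors);
    return 0;
}
```

Passing armnn::Compute::CpuRef instead of CpuAcc selects the plain reference backend with no Neon, which is presumably what the "without Neon" row below measures.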


Inference speed

| Model | Framework | Neon | Inference time (s) |
| --- | --- | --- | --- |
| ShuffleNet | TFLite | - | 0.82 |
| ShuffleNet | Armnn | with Neon | 0.34 |
| ShuffleNet | Armnn | without Neon | 86 |
