Edge Computing with CPU Acceleration
Edge Computing
Why Edge Computing?
- Stability
- Inference speed: shared memory instead of HTTP service round trips
- Low cost
ML platforms
- Intel: OpenVINO
- Arm: Arm NN
Arm NN: ML platform
- Neon: Arm's advanced Single Instruction Multiple Data (SIMD) instruction set, which Arm NN uses to accelerate inference on the CPU
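As a minimal sketch of what Neon buys you (not from the original slides; it assumes an Arm toolchain where arm_neon.h is available), the loop below adds two float arrays four lanes at a time instead of one element per iteration:

```cpp
#include <arm_neon.h>
#include <cstdio>

// Add two float arrays four lanes at a time with Neon SIMD.
// n is assumed to be a multiple of 4 for brevity.
static void add_f32(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);      // load 4 floats
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(out + i, vaddq_f32(va, vb));  // add and store 4 floats
    }
}

int main() {
    float a[] = {1, 2, 3, 4}, b[] = {10, 20, 30, 40}, out[4];
    add_f32(a, b, out, 4);
    std::printf("%.0f %.0f %.0f %.0f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```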
Model inference with Arm NN on CPU
1. Model selection
2. Model conversion
3. Prepare dependent libraries
4. Cross-compile the inference program
5. Run inference
Model selection
Inference speed and model size are the top priorities.
| Model         | Size (MB) | Top-5 Acc (%) |
| ------------- | --------- | ------------- |
| Inception-v2  | 44        | 95.22         |
| ShuffleNet-v2 | 9.2       | 88.32         |
| VGG16         | 527.9     | 91.21         |
| ResNet-18     | 44.7      | 89.29         |
Model conversion
- Arm NN supports Caffe, TensorFlow, TFLite, and ONNX models.
- Convert the model first if it was trained with another framework such as PyTorch or MXNet (e.g. a PyTorch model can be exported to ONNX with torch.onnx.export).
Prepare dependent libraries
- Binary size is always a big issue on an SoC (System on a Chip).
- Consider library size when choosing an image preprocessing tool:
  - OpenCV is an integrated computer-vision library, but it depends on many other libraries (e.g. FFmpeg).
  - Size: OpenCV ~200 MB vs. stb_image ~500 KB, so stb_image is the better fit here (see the sketch below).
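A rough sketch of how little code preprocessing needs with stb_image; the file name, the 224x224 model input size, and the use of the v1 stb_image_resize.h companion header are assumptions for illustration:

```cpp
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
#define STB_IMAGE_RESIZE_IMPLEMENTATION
#include "stb_image_resize.h"

#include <cstdio>
#include <vector>

int main() {
    int w, h, c;
    // Decode the image as 3-channel RGB (path is illustrative).
    unsigned char* img = stbi_load("input.jpg", &w, &h, &c, 3);
    if (!img) { std::fprintf(stderr, "decode failed\n"); return 1; }

    // Resize to the model's input resolution, e.g. 224x224.
    std::vector<unsigned char> resized(224 * 224 * 3);
    stbir_resize_uint8(img, w, h, 0, resized.data(), 224, 224, 0, 3);
    stbi_image_free(img);

    // resized now holds interleaved RGB bytes ready for normalization.
    return 0;
}
```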
Cross-compile the inference program
- Cross compiling: compiling code for a different target platform from one development host.
- Cross-compile command (the toolchain binary is arm-linux-gnueabi-gcc, not gcc-arm-linux-gnueabi):
  arm-linux-gnueabi-gcc -o main main.c ...
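With Arm NN in the picture the link line grows. A hedged example follows: the hard-float g++ variant, the $ARMNN_HOME layout, and the exact library set are illustrative assumptions (Arm NN exposes a C++ API, hence g++ rather than gcc):
  arm-linux-gnueabihf-g++ -o infer main.cpp -I$ARMNN_HOME/include -L$ARMNN_HOME/lib -larmnn -larmnnTfLiteParser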
Run inference
Inference speed:
| Model      | Framework | Neon         | Inference time (s) |
| ---------- | --------- | ------------ | ------------------ |
| ShuffleNet | TFLite    | -            | 0.82               |
| ShuffleNet | Arm NN    | with Neon    | 0.34               |
| ShuffleNet | Arm NN    | without Neon | 86                 |
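For reference, a minimal sketch of an inference program like the Arm NN rows above, written against Arm NN's public C++ API; the model path, the binding names, and the omission of error handling are illustrative assumptions, and API details vary between Arm NN releases:

```cpp
#include <armnn/ArmNN.hpp>
#include <armnnTfLiteParser/ITfLiteParser.hpp>
#include <vector>

int main() {
    // Parse the TFLite model into an Arm NN network.
    auto parser = armnnTfLiteParser::ITfLiteParser::Create();
    armnn::INetworkPtr network =
        parser->CreateNetworkFromBinaryFile("shufflenet.tflite"); // illustrative path

    // Create a runtime and optimize for the Neon-accelerated CPU backend (CpuAcc).
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
    armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(
        *network, {armnn::Compute::CpuAcc}, runtime->GetDeviceSpec());

    armnn::NetworkId netId;
    runtime->LoadNetwork(netId, std::move(optNet));

    // Bind input/output tensors ("input"/"output" names are illustrative).
    auto inInfo  = parser->GetNetworkInputBindingInfo(0, "input");
    auto outInfo = parser->GetNetworkOutputBindingInfo(0, "output");
    std::vector<float> inData(inInfo.second.GetNumElements());
    std::vector<float> outData(outInfo.second.GetNumElements());

    armnn::InputTensors in {{inInfo.first,
                             armnn::ConstTensor(inInfo.second, inData.data())}};
    armnn::OutputTensors out {{outInfo.first,
                               armnn::Tensor(outInfo.second, outData.data())}};

    // Run the model; outData then holds the class scores.
    runtime->EnqueueWorkload(netId, in, out);
    return 0;
}
```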