On-Device Last-Layer Learning

Machine learning on an ultra-low-power microcontroller

by Reto Da Forno

Emerging edge-intelligence applications motivate the deployment of deep learning models on resource-constrained embedded systems. However, state-of-the-art deep neural networks (DNNs) typically demand substantial computation, memory, and energy. Researchers therefore trim down DNN models with compression techniques such as quantization or pruning. Among these, int8-quantized networks are a popular choice for off-the-shelf microcontroller platforms, since they offer a good trade-off between accuracy, compression ratio, and hardware efficiency. Moreover, when data privacy concerns and/or limited communication bandwidth rule out offloading, the models must be (re)trained directly on these resource-constrained devices.
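As a quick refresher, int8 quantization typically maps a real value r to an 8-bit integer q via an affine scheme, r ≈ scale · (q − zero_point). The helpers below are a minimal C sketch of this mapping; the function names and the scale/zero-point parameters are generic illustrations, not taken from our code.

#include <stdint.h>
#include <math.h>

/* Affine int8 quantization: r ~= scale * (q - zero_point).
 * Illustrative sketch; names and parameters are generic,
 * not specific to the code in our repository. */
static int8_t quantize_int8(float r, float scale, int32_t zero_point)
{
    int32_t q = (int32_t)lroundf(r / scale) + zero_point;
    if (q < -128) q = -128;          /* saturate to the int8 range */
    if (q >  127) q =  127;
    return (int8_t)q;
}

static float dequantize_int8(int8_t q, float scale, int32_t zero_point)
{
    return scale * (float)(q - zero_point);
}

Storing weights and activations as int8 cuts memory by a factor of four compared to float32 and lets Cortex-M class cores exploit efficient integer arithmetic, which is where the hardware efficiency mentioned above comes from.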

We provide example code for on-device training on ultra-low-power 32-bit ARM Cortex microcontrollers. It learns the last layer of an int8-quantized DS-CNN (depthwise-separable CNN) and re-quantizes the trained last layer back into int8 format.

The code is available in our GitLab repository.
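To give a rough idea of what such last-layer training involves, the sketch below trains a float copy of the final fully connected layer with plain SGD on a softmax cross-entropy loss, using feature vectors produced by the frozen int8 backbone (dequantized to float), and then re-quantizes the trained weights with a symmetric per-tensor scale. The dimensions, the train_step/requantize_weights helpers, and the quantization choices are illustrative assumptions and do not necessarily match the repository's implementation.

#include <stdint.h>
#include <math.h>

#define NUM_FEATURES  64   /* size of the frozen feature vector (assumed) */
#define NUM_CLASSES   12   /* number of output classes (assumed) */

/* Last-layer weights and bias, kept in float during training. */
static float W[NUM_CLASSES][NUM_FEATURES];
static float b[NUM_CLASSES];

/* One SGD step with softmax cross-entropy on a single example.
 * `x` is the dequantized feature vector from the frozen int8 backbone. */
static void train_step(const float x[NUM_FEATURES], int label, float lr)
{
    float logits[NUM_CLASSES], p[NUM_CLASSES];
    float max = -INFINITY, sum = 0.0f;

    for (int c = 0; c < NUM_CLASSES; c++) {
        logits[c] = b[c];
        for (int i = 0; i < NUM_FEATURES; i++)
            logits[c] += W[c][i] * x[i];
        if (logits[c] > max) max = logits[c];
    }
    for (int c = 0; c < NUM_CLASSES; c++) {   /* numerically stable softmax */
        p[c] = expf(logits[c] - max);
        sum += p[c];
    }
    for (int c = 0; c < NUM_CLASSES; c++) {
        /* dL/dlogit_c = softmax(c) - onehot(label) */
        float grad = p[c] / sum - (c == label ? 1.0f : 0.0f);
        b[c] -= lr * grad;
        for (int i = 0; i < NUM_FEATURES; i++)
            W[c][i] -= lr * grad * x[i];      /* dL/dW[c][i] = grad * x[i] */
    }
}

/* After training, re-quantize the weights to int8 with a
 * symmetric per-tensor scale (one possible choice among several). */
static void requantize_weights(int8_t Wq[NUM_CLASSES][NUM_FEATURES], float *scale)
{
    float max_abs = 1e-8f;
    for (int c = 0; c < NUM_CLASSES; c++)
        for (int i = 0; i < NUM_FEATURES; i++) {
            float a = fabsf(W[c][i]);
            if (a > max_abs) max_abs = a;
        }
    *scale = max_abs / 127.0f;
    for (int c = 0; c < NUM_CLASSES; c++)
        for (int i = 0; i < NUM_FEATURES; i++) {
            int32_t q = (int32_t)lroundf(W[c][i] / *scale);
            if (q < -127) q = -127;
            if (q >  127) q =  127;
            Wq[c][i] = (int8_t)q;
        }
}

Because only the last layer is trained, no backpropagation through the quantized backbone is required, which keeps both the memory footprint and the compute cost of training within reach of a Cortex-M class device.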

