Transfer Learning for Convolutional Neural Networks in Tiny Deep Learning Environments

Published: 01 Jan 2022 · Last Modified: 18 Feb 2025 · PCI 2022 · CC BY-SA 4.0
Abstract: Tiny Machine Learning (TinyML) and Transfer Learning (TL) are two widespread methods for successfully deploying ML models on resource-constrained devices. TinyML provides compact models that can run in resource-constrained environments, while TL improves model performance by exploiting pre-existing knowledge. In this work we propose a simple but efficient TL method, applied to three types of Convolutional Neural Networks (CNNs), that retrains more than the last fully connected layer of a CNN on the target device — specifically, one or more of the last convolutional layers as well. Our results show that our proposed method (FxC1) achieves an increase in accuracy and in convergence speed, at the cost of a slightly higher energy consumption overhead, compared to two baseline techniques: one that retrains only the last fully connected layer, and one that retrains the whole network.
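The retraining scheme described in the abstract — freezing the backbone and updating only the last convolutional layer(s) together with the final fully connected layer — can be sketched as follows. This is a minimal illustration in PyTorch; the architecture, layer names, and the `prepare_for_transfer` helper are assumptions for demonstration, not the authors' actual FxC1 implementation.

```python
# Illustrative sketch (not the paper's actual code): freeze a small CNN,
# then re-enable gradients only for the last conv layer and the FC head,
# mimicking the "retrain last conv + FC" transfer-learning strategy.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallCNN(nn.Module):
    """A toy CNN standing in for the pre-trained backbone."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.conv3 = nn.Conv2d(32, 64, 3, padding=1)  # last conv layer
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(64, num_classes)          # last FC layer

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.relu(self.conv3(x))
        x = self.pool(x).flatten(1)
        return self.fc(x)


def prepare_for_transfer(model: nn.Module, retrain=("conv3", "fc")):
    """Freeze every parameter, then re-enable gradients only for the
    submodules selected for on-device retraining (hypothetical helper)."""
    for p in model.parameters():
        p.requires_grad = False
    for name in retrain:
        for p in getattr(model, name).parameters():
            p.requires_grad = True
    return model


model = prepare_for_transfer(SmallCNN())
# Only conv3 and fc parameters are now trainable; an optimizer built from
# filter(lambda p: p.requires_grad, model.parameters()) updates just those.
```

Retraining the last convolutional layer in addition to the classifier head is what distinguishes this setup from the common baseline of fine-tuning only the final fully connected layer.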
