PieBridge: Fast and Parameter-Efficient On-Device Training via Proxy Networks

Published: 01 Jan 2024, Last Modified: 19 Feb 2025, SenSys 2024, CC BY-SA 4.0
Abstract: On-device training of Neural Networks (NNs) has been a crucial catalyst towards privacy-preserving and personalized mobile intelligence. Recently, a novel training paradigm, namely Parameter-Efficient Training (PET), has been attracting attention in both the machine learning and systems communities. In our preliminary measurements, we find PET well-suited for on-device scenarios; yet, its parameter efficiency does not translate equally into time efficiency on resource-constrained devices, as the training time is dominated by the frozen layers. To this end, this work presents PieBridge, an on-device training framework with both time and parameter efficiency. Its key idea is to dynamically approximate the frozen layers with cheaper proxies (subnets) in a data-aware manner during PET. To achieve effective and efficient approximate training, we introduce (1) a pre-training-assisted on-cloud subnet generation method and (2) an edge-friendly, on-device data-aware subnet routing method. The subnet generation method performs fine-grained pruning and latent space alignment to generate a series of high-quality proxy subnets with varying speed-accuracy trade-offs for the deployment-ready NN. The subnet routing method perceives data diversity from two unique perspectives (referred to as importance and difficulty). The routing strategy is produced by fusing offline learning with online estimation, making it accurate, end-to-end, and cost-effective on devices. Through extensive experiments, we show that PieBridge achieves up to 2.5x training speedup over state-of-the-art PET methods, and up to 6.6x speedup over traditional full-model training and other on-device training frameworks, without compromising parameter efficiency or accuracy.
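To make the idea concrete, below is a minimal PyTorch-style sketch of approximate PET training as described in the abstract. The class and argument names (ApproxPETBackbone, proxy_variants, route) are illustrative assumptions, not PieBridge's actual API: each frozen block can be swapped per batch for a cheaper pruned proxy subnet, while only the small adapters and head are trained.

```python
# Minimal sketch, assuming a PyTorch-style setup. All names here are
# illustrative placeholders, not PieBridge's actual interface.
import torch
import torch.nn as nn

class ApproxPETBackbone(nn.Module):
    """PET backbone whose frozen blocks can be swapped for cheaper pruned proxies."""

    def __init__(self, frozen_blocks, proxy_variants, adapters, head):
        super().__init__()
        # Original frozen blocks: no gradients, yet they dominate training time.
        self.frozen_blocks = nn.ModuleList(frozen_blocks).requires_grad_(False)
        # Per-block lists of pruned proxy subnets with different speed/accuracy trade-offs.
        self.proxy_variants = nn.ModuleList(
            nn.ModuleList(v) for v in proxy_variants
        ).requires_grad_(False)
        # Small trainable adapters and head: the parameter-efficient part.
        self.adapters = nn.ModuleList(adapters)
        self.head = head

    def forward(self, x, route):
        # `route[i]` selects which proxy to run for block i on this batch
        # (-1 keeps the original frozen block); a data-aware router would
        # choose it from per-batch importance/difficulty estimates.
        for i, (block, proxies, adapter) in enumerate(
            zip(self.frozen_blocks, self.proxy_variants, self.adapters)
        ):
            choice = route[i]
            x = block(x) if choice < 0 else proxies[choice](x)
            x = x + adapter(x)  # residual adapter: the only trainable computation
        return self.head(x)

# Toy usage: two frozen linear blocks, each with one pruned proxy, plus tiny adapters.
blocks = [nn.Linear(64, 64), nn.Linear(64, 64)]
proxies = [[nn.Linear(64, 64)], [nn.Linear(64, 64)]]
adapters = [nn.Sequential(nn.Linear(64, 8), nn.Linear(8, 64)) for _ in range(2)]
model = ApproxPETBackbone(blocks, proxies, adapters, head=nn.Linear(64, 10))
out = model(torch.randn(4, 64), route=[0, -1])  # block 0 via proxy, block 1 unchanged
```

In the paper's setting, the route would come from the data-aware router (the fusion of offline learning and online estimation) rather than a hand-picked list as in this toy example.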
