Keywords: On-device learning, Activation compression, Low-rank approximation, Continual learning
Abstract: On-device learning is essential for personalization, privacy, and long-term adaptation in resource-constrained environments.
Achieving this requires efficient learning in two settings: fine-tuning existing models and continually acquiring new tasks without catastrophic forgetting.
Yet both settings are constrained by the high memory cost of storing activations during backpropagation.
Existing activation compression methods reduce this cost but rely on repeated low-rank decompositions, which introduces computational overhead, and they have not been explored for continual learning.
We propose LANCE (Low-rank Activation Compression), a framework that performs a one-shot higher-order SVD to obtain a reusable low-rank subspace for activation projection.
This eliminates repeated decompositions, reducing both memory and computation.
Moreover, the fixed low-rank subspaces enable on-device continual learning by allocating tasks to orthogonal subspaces without storing large task-specific matrices.
Experiments show that LANCE reduces activation storage by up to 250$\times$ while maintaining accuracy comparable to full backpropagation on CIFAR-10/100, Oxford-IIIT Pets, Flowers102, and CUB-200.
On continual learning benchmarks (Split CIFAR-100, Split MiniImageNet, 5-Datasets), it achieves performance competitive with orthogonal gradient projection methods at a fraction of the memory cost.
These results position LANCE as a practical and scalable solution for efficient fine-tuning and continual learning on edge devices.
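To make the activation-compression idea concrete, below is a minimal PyTorch sketch, not the paper's implementation: it assumes a flattened 2-D activation and uses a plain one-shot SVD in place of the higher-order SVD that LANCE applies to activation tensors. The names `one_shot_basis` and `LowRankLinearFn` are illustrative, not from the paper.

```python
import torch

def one_shot_basis(calib_acts, rank):
    """Compute a reusable rank-r basis from calibration activations (one-shot).

    calib_acts: [N, D] matrix of flattened activations; the top right-singular
    vectors give the principal input directions that are kept.
    """
    _, _, vh = torch.linalg.svd(calib_acts, full_matrices=False)
    return vh[:rank].t().contiguous()            # [D, r], orthonormal columns


class LowRankLinearFn(torch.autograd.Function):
    """Linear layer that stores its input in a fixed low-rank subspace.

    Instead of saving the full input x ([B, D]) for backpropagation, only the
    projection c = x @ P ([B, r]) is saved; the weight gradient is computed
    from the reconstruction x_hat = c @ P.T.
    """

    @staticmethod
    def forward(ctx, x, weight, bias, basis):
        coeffs = x @ basis                       # [B, r] compressed activation
        ctx.save_for_backward(coeffs, weight, basis)
        return x @ weight.t() + bias             # forward pass uses full x

    @staticmethod
    def backward(ctx, grad_out):
        coeffs, weight, basis = ctx.saved_tensors
        x_hat = coeffs @ basis.t()               # approximate reconstruction
        grad_x = grad_out @ weight               # exact input gradient
        grad_w = grad_out.t() @ x_hat            # approximate weight gradient
        grad_b = grad_out.sum(dim=0)
        return grad_x, grad_w, grad_b, None


# Toy usage: basis computed once, then reused for every training step.
D, r, B, out = 256, 16, 32, 128
P = one_shot_basis(torch.randn(512, D), r)
w = torch.randn(out, D, requires_grad=True)
b = torch.zeros(out, requires_grad=True)
x = torch.randn(B, D)
LowRankLinearFn.apply(x, w, b, P).sum().backward()   # grads from r-dim storage
```

The memory saving comes from `save_for_backward` holding an `[B, r]` tensor rather than the full `[B, D]` activation; the forward output itself is unchanged.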
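For the continual-learning claim, a similarly hedged sketch of how one shared orthonormal basis could be partitioned into orthogonal per-task blocks; the helper names and the contiguous block-allocation rule are assumptions for illustration, not necessarily LANCE's exact scheme.

```python
import torch

def task_basis(basis, task_id, rank_per_task):
    """Slice a per-task block of columns out of one shared orthonormal basis.

    Because the columns of `basis` are mutually orthogonal, restricting each
    task's updates to its own block keeps different tasks in orthogonal
    subspaces; only column offsets (not task-specific matrices) are stored.
    """
    start = task_id * rank_per_task
    return basis[:, start:start + rank_per_task]      # [D, r_task]


def project_update(grad_w, task_P):
    """Restrict a weight gradient to the current task's input subspace."""
    # grad_w: [out, D]; keep only components lying in span(task_P)
    return (grad_w @ task_P) @ task_P.t()
```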
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 21896