Structure-Activation Synergy: A Dual Efficiency Framework for Parameter-Memory Optimized Transfer Learning
Abstract: While parameter-efficient transfer learning (PETL) successfully reduces the number of trainable parameters needed to adapt large pre-trained models, conventional methods are far less effective at decreasing activation memory consumption, a critical bottleneck for deployment on resource-constrained devices. We present Structure-Activation Synergy (S2A), a framework that jointly optimizes parameters and memory through two synergistic mechanisms: (1) structural activation modules (bias/prompt/side adaptations) that minimize both parametric complexity and intermediate feature storage, and (2) derivative-aware 4-bit quantization of non-parametric operators that preserves model fidelity through gradient-informed precision allocation. Extensive evaluations across multiple architectures (ViT, Swin, ResNet) and datasets (ImageNet-1K, CIFAR, DomainNet) demonstrate S2A's efficiency: it reduces GPU memory consumption by 75\% (a $4.2\times$ average reduction) while maintaining 98.7\% of full fine-tuning accuracy with only 0.9\% of parameters tunable. This hardware-aware paradigm establishes a new state of the art in efficient model adaptation, offering practical deployment advantages through simultaneous parameter and memory optimization without compromising model capability.
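To make the second mechanism concrete, the sketch below shows one way activations of a non-parametric operator can be stored at 4-bit precision for the backward pass. This is a minimal illustration, not the authors' implementation: the operator choice (GELU), the per-tensor uniform quantizer, and the class name are assumptions, and the paper's derivative-aware (gradient-informed) precision allocation is replaced here by a plain min-max quantizer for brevity.

```python
# Minimal sketch: store the input of a non-parametric op (tanh-approximate GELU)
# as 4-bit codes so the backward pass needs far less activation memory.
# Uses a simple per-tensor uniform quantizer; names here are illustrative.
import math
import torch
import torch.nn.functional as F


class GELU4BitActivation(torch.autograd.Function):
    """GELU that saves its input as 16-level (4-bit) codes instead of full precision."""

    @staticmethod
    def forward(ctx, x):
        # Per-tensor uniform quantization to 16 levels; codes could be packed
        # two-per-byte for true 4-bit storage.
        x_min, x_max = x.min(), x.max()
        scale = (x_max - x_min).clamp(min=1e-8) / 15.0
        codes = torch.round((x - x_min) / scale).to(torch.uint8)  # values in [0, 15]
        ctx.save_for_backward(codes, x_min, scale)
        return F.gelu(x, approximate="tanh")

    @staticmethod
    def backward(ctx, grad_out):
        codes, x_min, scale = ctx.saved_tensors
        # Dequantize the stored input and recompute the local GELU derivative.
        x_hat = codes.to(grad_out.dtype) * scale + x_min
        c = math.sqrt(2.0 / math.pi)
        inner = c * (x_hat + 0.044715 * x_hat ** 3)
        t = torch.tanh(inner)
        d_gelu = 0.5 * (1.0 + t) + 0.5 * x_hat * (1.0 - t ** 2) * c * (1.0 + 3 * 0.044715 * x_hat ** 2)
        return grad_out * d_gelu


# Usage: y = GELU4BitActivation.apply(x) in place of F.gelu(x) inside a frozen backbone.
```

The point of the sketch is the memory trade-off: only low-bit codes and two scalars are cached between forward and backward, at the cost of computing the gradient from a dequantized approximation of the input.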