Keywords: Parameter Generation
Abstract: Fine-tuning is the dominant strategy for adapting pre-trained models. However, it requires bulky gradient computation and model updates, which prevent real-time personalization. Even efficient variants such as LoRA incur non-negligible latency and computational overhead. We explore a radically different approach: instead of training, we generate model parameters conditioned on data. Inspired by parameter generation via diffusion, we introduce a data-conditioned parameter generator that instantly produces personalized weights. In few-shot settings, our method outperforms LoRA in both performance and efficiency, completing adaptation in under one second. This demonstrates a feasible path toward practical, real-world, real-time model personalization.
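The abstract's core idea, replacing gradient-based adaptation with a single forward pass of a parameter generator, can be illustrated with a minimal sketch. Everything below is hypothetical and not from the paper: the dimensions, the linear generator (standing in for the diffusion-based generator the abstract mentions), and the LoRA-style low-rank parameterization are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a base linear layer W (d_out x d_in),
# conditioned on a pooled embedding of the few-shot examples.
d_in, d_out, d_cond, rank = 16, 8, 4, 2

# Frozen weights of the pre-trained layer.
W_base = rng.standard_normal((d_out, d_in)) * 0.1

# Generator parameters. In the paper these would be learned (e.g. by a
# diffusion model over parameters); here a single linear map stands in.
G_A = rng.standard_normal((d_cond, d_out * rank)) * 0.1
G_B = rng.standard_normal((d_cond, rank * d_in)) * 0.1

def generate_params(cond):
    """Map a data-conditioning vector to low-rank weight factors A, B."""
    A = (cond @ G_A).reshape(d_out, rank)
    B = (cond @ G_B).reshape(rank, d_in)
    return A, B

def personalized_forward(x, cond):
    """Forward pass through W_base plus the generated low-rank delta.

    No gradients or weight updates are needed at adaptation time:
    personalization is a single forward pass of the generator.
    """
    A, B = generate_params(cond)
    return x @ (W_base + A @ B).T

# Usage: pool the few-shot examples into a conditioning vector
# (a stand-in for a learned data encoder), then run inference.
cond = rng.standard_normal(d_cond)
x = rng.standard_normal((3, d_in))
y = personalized_forward(x, cond)
print(y.shape)  # (3, 8)
```

The contrast with LoRA is that the low-rank factors A and B are *produced* by the generator from the conditioning data rather than optimized with gradient descent, which is what makes sub-second adaptation plausible.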
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 24406