Per-Axis Weight Deltas for Frequent Model Updates

Published: 23 Sept 2025, Last Modified: 11 Nov 2025 · CCFM Oral · CC BY 4.0
Keywords: Fine-Tuning, Adaptation, Serving of LLMs, Model Compression
TL;DR: We compress fine-tuned LLM checkpoints into 1-bit weight deltas with per-axis scaling, reducing storage and cold-start latency for multi-variant serving.
Abstract: Serving many task-specialized LLM variants is often limited by the large size of fine-tuned checkpoints and the resulting cold-start latency. Since fine-tuned weights differ from their base model by relatively small structured residuals, a natural approach is to represent them as compressed deltas. We propose a simple 1-bit delta scheme that stores only the sign of the weight difference together with lightweight per-axis (row/column) FP16 scaling factors, learned from a small calibration set. This design preserves the compactness of 1-bit deltas while more accurately capturing variation across weight dimensions, leading to improved reconstruction quality over scalar alternatives. From a systems perspective, a streamlined loader that transfers packed deltas in a single operation per module reduces cold-start latency and storage overhead, with artifacts several times smaller than a full FP16 checkpoint. The method is drop-in, requires minimal calibration data, and maintains inference efficiency by avoiding dense reconstruction. Our experimental setup and source code are available at https://anonymous.4open.science/r/Per-Axis-Weight-Deltas-for-Frequent-Model-Updates-0F1C/.
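To make the scheme concrete, here is a minimal NumPy sketch of the 1-bit delta codec with per-axis FP16 scales. As a simplifying assumption it uses the closed-form scale (the mean absolute delta along each row or column, which minimizes the squared reconstruction error for a fixed sign pattern) rather than the calibration-learned scales described in the abstract; all function names are illustrative.

```python
import numpy as np

def compress_delta(w_ft: np.ndarray, w_base: np.ndarray, axis: int = 0):
    """Encode a fine-tuned 2D weight as 1-bit signs plus per-axis FP16 scales.

    axis=0 stores one scale per row, axis=1 one scale per column.
    """
    delta = w_ft - w_base
    # Pack sign bits 8 per byte: roughly 16x smaller than an FP16 delta.
    bits = np.packbits(delta >= 0)
    # Mean |delta| along the other axis minimizes ||delta - s*sign(delta)||^2
    # for a fixed sign pattern (stand-in for the learned, calibrated scales).
    scales = np.abs(delta).mean(axis=1 - axis, keepdims=True).astype(np.float16)
    return bits, scales

def apply_delta(w_base: np.ndarray, bits: np.ndarray, scales: np.ndarray):
    """Reconstruct the fine-tuned weight as w_base + scale * sign(delta)."""
    signs = np.unpackbits(bits, count=w_base.size).reshape(w_base.shape)
    signs = signs.astype(w_base.dtype) * 2 - 1  # {0, 1} -> {-1, +1}
    return w_base + scales.astype(w_base.dtype) * signs
```

A round trip on synthetic weights checks the codec end to end:

```python
rng = np.random.default_rng(0)
w_base = rng.standard_normal((1024, 1024)).astype(np.float32)
w_ft = w_base + 0.01 * rng.standard_normal((1024, 1024)).astype(np.float32)
bits, scales = compress_delta(w_ft, w_base, axis=0)
w_hat = apply_delta(w_base, bits, scales)
rel_err = np.linalg.norm(w_hat - w_ft) / np.linalg.norm(w_ft - w_base)
```

The loader side can be sketched similarly: concatenating a module's packed signs and scales into one contiguous buffer lets the host-to-device move happen as a single copy. The layout below is hypothetical (`load_module_delta` and the `meta` offset table are not from the paper), shown only to illustrate the one-transfer-per-module idea:

```python
import torch

def load_module_delta(packed: bytes, meta: dict, device: str = "cuda"):
    """Move one module's packed delta host -> device in a single transfer.

    `packed` holds the concatenated sign bits and FP16 scales for every
    tensor in the module; `meta` maps tensor name -> (byte offset, byte
    length, shape, torch dtype). Both structures are assumptions here.
    """
    buf = torch.frombuffer(bytearray(packed), dtype=torch.uint8)
    buf = buf.pin_memory().to(device, non_blocking=True)  # one H2D copy
    views = {}
    for name, (off, nbytes, shape, dtype) in meta.items():
        # Zero-copy reinterpretation of slices of the single device buffer.
        views[name] = buf[off:off + nbytes].view(dtype).view(shape)
    return views
```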
Serve As Reviewer: ~Radostin_Cholakov1
Submission Number: 40