Keywords: PathXfer, Path Compression, Few-Shot Learning
TL;DR: PathXfer is a few-shot framework that compresses multi-step sampling paths into few steps while preserving generative fidelity across flow- and diffusion-based models.
Abstract: Traditional approaches to accelerate sampling in generative models rely on distillation, which requires large datasets and costly training. We instead view the quality gap between multi-step and few-step sampling as a transferable property, and introduce PathXfer, a few-shot framework that transfers multi-step fidelity to few-step sampling. PathXfer employs LoRA-based lightweight adaptation together with a Path Compression Loss, enabling effective fidelity preservation using only 16 samples, without retraining the entire model. Experiments show that PathXfer compresses sampling from 20 to 2 steps on Flux, a flow-based generative model, with only minor perceptual degradation, and also yields consistent improvements on diffusion models such as SDXL, demonstrating that the approach generalizes across paradigms. These results highlight few-shot fidelity transfer as an efficient and practical complement to distillation for accelerating generative sampling.
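The core idea in the abstract, matching a few-step sampling endpoint to a multi-step one, can be sketched with a toy Euler integrator. This is a minimal illustration only: the velocity field, function names, and loss form are hypothetical stand-ins, not the paper's actual PathXfer objective or LoRA machinery.

```python
import numpy as np

def velocity(x, t):
    # Hypothetical stand-in for a model's learned velocity field.
    return -x + np.sin(np.pi * t)

def sample(x0, n_steps, v):
    # Simple Euler integration of the sampling path from t=0 to t=1.
    x, dt = x0.copy(), 1.0 / n_steps
    for i in range(n_steps):
        x = x + dt * v(x, i * dt)
    return x

rng = np.random.default_rng(0)
x0 = rng.standard_normal(16)        # a 16-sample batch, echoing the abstract

teacher = sample(x0, 20, velocity)  # multi-step (20-step) endpoint
student = sample(x0, 2, velocity)   # few-step (2-step) endpoint

# A path-compression-style loss: the gap between the few-step and
# multi-step endpoints. In PathXfer this gap would drive lightweight
# LoRA updates; here we only measure it.
loss = float(np.mean((student - teacher) ** 2))
print(loss)
```

The discretization mismatch between 2 and 20 Euler steps produces a nonzero gap; minimizing such a gap with a small adapter is the kind of fidelity transfer the abstract describes.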
Supplementary Material: zip
Primary Area: generative models
Submission Number: 12818