Keywords: generative models, diffusion models, stochastic interpolants, kernel methods, RKHS, training-free, model ensembling, domain adaptation, pretrained models
TL;DR: Training-free generative modeling by building kernels from pretrained diffusion models, enabling model combination and domain adaptation through linear solves instead of neural network training.
Abstract: Generative diffusion models, including stochastic interpolants and score-based approaches, require learning time-dependent drift or score functions through expensive neural network training. Here we avoid these computations by representing the drift in a reproducing kernel Hilbert space, reducing the learning problem to solving linear systems. The key challenge becomes selecting kernels with sufficient expressiveness for the drift learning task. We address this by constructing kernels from pretrained drift or score functions, leveraging the fact that our linear systems depend only on gradients of kernel features---not the features themselves. Since pretrained drifts provide these gradients directly, we can build expressive kernels without access to the underlying feature representations. This enables seamless combination of multiple pretrained models at inference time and cross-domain enhancement through the same framework. Experiments demonstrate competitive sample quality with significantly reduced computation, consistent ensemble improvements, and successful cross-domain enhancement---even cheap, low-quality models can match expensive high-quality models when combined through our framework.
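The abstract's central mechanism is that the drift lives in an RKHS, so fitting it amounts to a linear solve rather than neural network training. The snippet below is a minimal, hypothetical sketch of that idea under simplifying assumptions: it fits a drift at a single time slice by kernel ridge regression with a generic RBF kernel, whereas the paper's contribution is to replace such a hand-chosen kernel with one constructed from pretrained drift or score functions. Function names, the regularization parameter, and the RBF choice are illustrative, not the authors' implementation.

```python
# Hypothetical sketch: RKHS drift fitting reduced to a linear system.
# X: samples x_t along the interpolant at a fixed time t, shape (n, d).
# V: corresponding drift/velocity targets, shape (n, d).
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    # Gram matrix k(x, y) = exp(-||x - y||^2 / (2 * lengthscale^2)).
    # Stand-in for the paper's kernels built from pretrained models.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * lengthscale ** 2))

def fit_drift(X, V, reg=1e-3, lengthscale=1.0):
    # Solve the linear system (K + reg * I) alpha = V for the coefficients
    # of the expansion b(x) = sum_j alpha_j k(x, x_j); no gradient descent.
    K = rbf_kernel(X, X, lengthscale)
    alpha = np.linalg.solve(K + reg * np.eye(len(X)), V)

    def drift(Xq):
        # Evaluate the fitted drift at query points Xq, shape (m, d).
        return rbf_kernel(Xq, X, lengthscale) @ alpha

    return drift
```

In this sketch, the only "training" cost is forming the Gram matrix and one linear solve; repeating the fit across time slices gives a time-dependent drift estimate in the same spirit as the abstract's claim.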
Primary Area: generative models
Submission Number: 16859