Keywords: Inverse Problems, Flow Matching, Posterior Sampling, ODE-based Inference, Training-Free Methods
TL;DR: We propose LFlow, a training-free framework for solving linear inverse problems using pretrained latent flow priors with theoretically grounded posterior guidance, achieving superior reconstruction quality over latent diffusion baselines.
Abstract: Recent advances in *inverse problem* solving have increasingly adopted flow *priors* over diffusion models due to their ability to construct straight probability paths from noise to data, thereby enhancing efficiency in both training and inference. However, current flow-based inverse solvers face two primary limitations: (i) they operate directly in pixel space, which demands heavy computational resources for training and restricts scalability to high-resolution images, and (ii) they employ guidance strategies with *prior*-agnostic posterior covariances, which can weaken alignment with the generative trajectory and degrade posterior coverage. In this paper, we propose **LFlow** (**L**atent Refinement via **Flow**s), a *training-free* framework for solving linear inverse problems via pretrained latent flow priors. LFlow leverages the efficiency of flow matching to perform ODE sampling in latent space along an optimal path. This latent formulation further allows us to introduce a theoretically grounded posterior covariance, derived from the optimal vector field, enabling effective flow guidance. Experimental results demonstrate that our proposed method outperforms state-of-the-art latent diffusion solvers in reconstruction quality across most tasks.
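To make the abstract's mechanism concrete, here is a minimal, self-contained sketch of guided ODE sampling for a linear inverse problem. It is *not* the LFlow implementation: the real method uses a pretrained latent flow model and a covariance derived from that model's vector field, whereas this toy uses a 2D Gaussian prior (so the optimal straight-path vector field and the denoised estimate `x_hat` are analytic) and a hypothetical time-dependent variance proxy `r2` in the guidance term. All names (`mu`, `A`, `sigma_y`, `sample`, `guidance`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([3.0, -2.0])          # toy Gaussian data prior N(mu, I)
A = np.array([[1.0, 0.5]])          # linear forward operator (1x2): y = A x + noise
sigma_y = 0.1                       # observation noise std

def vector_field(z, t):
    """Analytic optimal velocity for the straight path z_t = (1-t)*eps + t*x."""
    s2 = (1 - t) ** 2 + t ** 2      # marginal variance of z_t
    return mu + (2 * t - 1) * (z - t * mu) / s2

def x_hat(z, t):
    """Denoised estimate E[x | z_t] under the toy Gaussian prior."""
    s2 = (1 - t) ** 2 + t ** 2
    return mu + t * (z - t * mu) / s2

def sample(y=None, guidance=1.0, n=2000, steps=100):
    """Euler-integrate the flow ODE from noise (t=0) to data (t=1).

    With y given, adds a likelihood-gradient guidance term through x_hat;
    r2 is a hypothetical posterior-variance proxy used only in this sketch.
    """
    z = rng.standard_normal((n, 2))
    dt = 1.0 / steps
    for k in range(steps):
        t = k * dt
        v = vector_field(z, t)
        if y is not None:
            s2 = (1 - t) ** 2 + t ** 2
            r2 = (1 - t) ** 2 / s2                 # shrinks as t -> 1
            resid = y - x_hat(z, t) @ A.T          # (n, 1) data mismatch
            g = (resid / (sigma_y ** 2 + r2)) @ A  # gradient wrt x_hat, (n, 2)
            v = v + guidance * (t / s2) * g        # chain rule factor dx_hat/dz
        z = z + dt * v
    return z

# Demo: unguided sampling recovers the prior; guided sampling fits the data.
x_true = np.array([2.5, -1.5])
y = A @ x_true + sigma_y * rng.standard_normal(1)
z_prior = sample()          # samples approximately N(mu, I)
z_post = sample(y=y)        # approximate posterior samples given y
```

The sketch illustrates the abstract's two ingredients in miniature: ODE sampling along the straight flow-matching path, and a guidance term whose strength is modulated by a time-dependent covariance estimate rather than a fixed, prior-agnostic one.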
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 16671