Image-to-Image Bayesian Flow Networks With Structurally Informative Priors

Published: 2025, Last Modified: 30 Jan 2026. IEEE Trans. Image Process. 2025. License: CC BY-SA 4.0
Abstract: Generative models, exemplified by diffusion models, have recently shown great potential in image generation. They typically use a reverse iterative process to map noise to data. However, in many real-world applications such as image restoration and translation, the model input is drawn from a distribution that is not random noise, making it difficult to adapt these models directly to such tasks. In this paper, we introduce Image-to-Image Bayesian Flow Networks (I2I-BFNs), a novel framework for general-purpose image-to-image (I2I) translation that operates in the parameter space of distributions. The method maintains Gaussian distributions over pixel intensities and refines their parameters through closed-form Bayesian inference, steered by the network's predictions of the target image. An essential aspect of our approach is the use of the conditional image as a structurally informative prior: the translation process is initialized from a deterministic, clean image, which reduces variance and yields interpretable generation. We further introduce a skip-sampling technique that improves the efficiency of I2I-BFNs, enabling rapid translation across diverse image restoration and general I2I tasks. Experimental evaluations demonstrate the model's competitive performance in various settings, underscoring its efficacy and adaptability. This work offers new insights and opportunities for the large-scale development of efficient conditional generation systems.
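The closed-form Gaussian Bayesian update that the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact algorithm: the function names (`bayesian_update`, `translate`, `predict_fn`), the initial precision, and the accuracy schedule `alphas` are all hypothetical stand-ins; only the general shape of the update (precision-weighted averaging of the current mean with a noisy observation of the network's prediction, starting from the conditional image as the prior mean) follows the abstract.

```python
import numpy as np

def bayesian_update(mu, rho, y, alpha):
    """Closed-form Gaussian Bayesian update of per-pixel parameters.

    mu, rho : current mean and precision of the Gaussian over each pixel
    y       : noisy observation of the network's predicted target image
    alpha   : accuracy (precision) of that observation
    """
    rho_new = rho + alpha
    mu_new = (rho * mu + alpha * y) / rho_new  # precision-weighted average
    return mu_new, rho_new

def translate(cond_image, predict_fn, alphas, rng=None):
    """Hypothetical translation loop: the conditional image serves as the
    prior mean, so sampling starts from a deterministic, clean image.
    A skip-sampling variant would simply use a shorter `alphas` schedule."""
    rng = rng or np.random.default_rng(0)
    mu = cond_image.copy()
    rho = np.ones_like(mu)              # assumed initial precision
    for alpha in alphas:
        x_hat = predict_fn(mu, rho)     # network's estimate of the target
        y = x_hat + rng.normal(scale=alpha ** -0.5, size=mu.shape)
        mu, rho = bayesian_update(mu, rho, y, alpha)
    return mu
```

With a perfect predictor that always returns the true target, repeated updates drive the mean toward that target while the precision accumulates, which is the behavior the closed-form update is meant to exhibit.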