U-Turn Diffusion

TMLR Paper1514 Authors

26 Aug 2023 (modified: 04 Dec 2023) · Rejected by TMLR
Abstract: We present a comprehensive examination of score-based diffusion models for generating synthetic images. These models hinge on a dynamic auxiliary-time mechanism driven by stochastic differential equations, wherein the score function is learned from input images. Our investigation reveals a criterion for evaluating the efficiency of score-based diffusion models: the power of the generative process depends on its ability to de-construct fast correlations during the reverse/de-noising phase. To improve the quality of the produced synthetic images, we introduce an approach coined "U-Turn Diffusion". The technique starts with the standard forward diffusion process, albeit with a reduced duration compared to conventional settings. We then execute the standard reverse dynamics, initialized with the final configuration of the forward process. This U-Turn Diffusion procedure, combining forward, U-turn, and reverse processes, creates a synthetic image approximating an independent and identically distributed (i.i.d.) sample from the probability distribution implicitly described by the input samples. To identify the relevant time scales we employ several analytical tools: auto-correlation analysis, analysis of a weighted norm of the score function, and the Kolmogorov-Smirnov Gaussianity test. Guided by these tools, we establish that the Kernel Inception Distance, a metric comparing the quality of synthetic samples with real data samples, reveals the optimal U-turn time.
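The U-turn procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a variance-preserving Ornstein-Uhlenbeck forward SDE integrated by Euler-Maruyama, and, to stay self-contained, substitutes the exact analytic score of a standard Gaussian data distribution for a learned score network. The names `u_turn_diffusion`, `u_turn_time`, and `n_steps` are illustrative choices, not quantities from the paper.

```python
import numpy as np

def u_turn_diffusion(x0, u_turn_time=1.0, n_steps=200, rng=None):
    """Sketch of U-turn diffusion with a VP (Ornstein-Uhlenbeck) SDE.

    Forward:  dx = -x dt + sqrt(2) dW, run only until u_turn_time
              (a shorter horizon than the conventional full noising).
    Reverse:  dx = [-x - 2 * score(x, t)] dt + sqrt(2) dW-bar,
              integrated backward in time, initialized at the
              forward endpoint (the "U-turn").
    The data distribution is assumed standard Gaussian here, so the
    exact score is available analytically; a real model would use a
    learned score network instead.
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = u_turn_time / n_steps
    x = np.array(x0, dtype=float)

    # Forward (noising) phase, truncated at the U-turn time.
    for _ in range(n_steps):
        x = x - x * dt + np.sqrt(2 * dt) * rng.standard_normal(x.shape)

    # For N(0, I) data the OU marginal stays N(0, I) at every t,
    # so score(x, t) = grad log p_t(x) = -x (illustrative stand-in).
    def score(x, t):
        return -x

    # Reverse (de-noising) phase, started from the forward endpoint.
    t = u_turn_time
    for _ in range(n_steps):
        drift = -x - 2.0 * score(x, t)  # reverse-time drift f - g^2 * score
        x = x - drift * dt + np.sqrt(2 * dt) * rng.standard_normal(x.shape)
        t -= dt
    return x
```

Because the stand-in score is exact, the round trip preserves the standard-Gaussian statistics of the input batch, which gives a quick sanity check that the forward and reverse drifts are consistent.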
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Valentin_De_Bortoli1
Submission Number: 1514