Keywords: Text-to-3D generation; Diffusion model; Linearized Lookahead
Abstract: Text-to-3D generation based on score distillation of pre-trained 2D diffusion models has gained increasing interest, with variational score distillation (VSD) as a remarkable example.
VSD shows that vanilla score distillation can be improved by introducing an extra score-based model, which characterizes the distribution of images rendered from the 3D model, to correct the distillation gradient.
Despite its theoretical foundations, VSD often suffers in practice from slow and sometimes ill-posed convergence.
In this paper, we perform an in-depth investigation of the interplay between the introduced score model and the 3D model, and find that we can simply adjust their optimization order to improve the generation quality.
By doing so, the score model looks ahead to the current 3D state and hence yields more reasonable corrections.
Nevertheless, naive lookahead VSD may suffer from unstable training in practice due to potential over-fitting of the score model.
To address this, we propose to use a linearized variant of the model for score distillation, giving rise to the Linearized Lookahead Variational Score Distillation ($L^2$-VSD).
$L^2$-VSD can be realized efficiently with the forward-mode automatic differentiation functionality of existing deep learning libraries.
Extensive experiments validate the efficacy of $L^2$-VSD, revealing its clear superiority over prior score distillation-based methods.
We also show that our method can be seamlessly incorporated into any other VSD-based text-to-3D framework.
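The abstract's forward-mode autodiff claim can be illustrated with a toy sketch. The idea: rather than evaluating the score model at the looked-ahead parameters $\phi' = \phi + \Delta\phi$, use the first-order linearization $\epsilon_\phi(x) + J_\phi(x)\,\Delta\phi$, where the Jacobian-vector product $J_\phi(x)\,\Delta\phi$ is exactly what forward-mode autodiff (e.g. `torch.func.jvp` or `jax.jvp`) computes in a single pass. All names below (`score_model`, `linearized_lookahead`) are hypothetical and not the authors' implementation; forward mode is emulated here with pure-Python dual numbers so the example is self-contained.

```python
class Dual:
    """Dual number a + b*eps with eps^2 = 0: carries a value and a
    directional derivative through ordinary arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def score_model(phi, x):
    # Toy stand-in for the score network: eps_phi(x) = phi0*x + phi1*x^2.
    return phi[0] * x + phi[1] * (x * x)

def linearized_lookahead(phi, dphi, x):
    # Seed each parameter with its tangent dphi; one forward evaluation
    # yields both eps_phi(x) (the .val part) and the JVP
    # J_phi(x) @ dphi (the .dot part).
    duals = [Dual(p, d) for p, d in zip(phi, dphi)]
    out = score_model(duals, x)
    return out.val + out.dot  # eps_phi(x) + J_phi(x) @ dphi

phi, dphi, x = [0.5, -0.2], [0.1, 0.05], 2.0
lin = linearized_lookahead(phi, dphi, x)
exact = score_model([p + d for p, d in zip(phi, dphi)], x)
# This toy model is linear in its parameters, so the linearized lookahead
# coincides with the exact looked-ahead evaluation (both equal 0.6 here);
# for a real network the linearization is a first-order approximation.
```

In a deep learning library the two `Dual` passes collapse into one `jvp` call over the network's parameters, which is why the paper can realize the linearized lookahead without materializing the updated score model.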
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9391