Keywords: Gaussianity, Generative Models, Guided Generation
TL;DR: We propose a constrained optimization framework that preserves Gaussianity during latent optimization for reward-guided generation.
Abstract: We propose a constrained optimization framework that preserves white Gaussian noise characteristics during latent optimization for reward-guided generation. At its core is a novel constraint formulation that allows efficient projection while tightly characterizing white Gaussian noise. In deep generative models, supplying white Gaussian noise as input is essential for stable and realistic generation, but preserving its characteristics during optimization remains challenging. This challenge is amplified in reward-guided generation, where gradient-based updates can exploit the reward and produce unrealistic or low-quality outputs. Prior methods address this by introducing regularization terms that encourage certain white Gaussian noise properties, particularly in the spectral domain. However, regularization offers only soft penalties and cannot guarantee that the latent vector retains the white Gaussian noise characteristics throughout optimization. To overcome this, we propose a constrained optimization approach that directly projects the latent vector onto a feasible set. Leveraging a bijective mapping to a compact spectral domain, we define constraints that tightly characterize white Gaussian noise and induce a feasible set with a closed-form projection, enabling efficient updates through projected gradient ascent. In experiments on reward-guided text-to-image generation, our approach outperforms regularization-based baselines across four reward functions in terms of reward, sample quality, and maximization speed.
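The projected-gradient-ascent loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' method: the feasible set here is the sphere of radius sqrt(d) (the typical norm of a d-dimensional standard Gaussian), used as a simple stand-in for the paper's spectral-domain constraint set, and `reward_grad` is a hypothetical callable returning the reward gradient.

```python
import numpy as np

def project_to_sphere(z):
    # Closed-form projection onto the sphere of radius sqrt(d), the typical
    # norm of a d-dimensional standard Gaussian latent. This is an illustrative
    # proxy; the paper's actual constraints live in a compact spectral domain.
    d = z.size
    return z * (np.sqrt(d) / np.linalg.norm(z))

def projected_gradient_ascent(z0, reward_grad, lr=0.1, steps=100):
    # Alternate a gradient-ascent step on the reward with a projection back
    # onto the feasible set, so the latent stays Gaussian-like throughout.
    z = project_to_sphere(z0)
    for _ in range(steps):
        z = z + lr * reward_grad(z)  # ascent step on the reward
        z = project_to_sphere(z)     # projection keeps the iterate feasible
    return z
```

The key contrast with regularization-based baselines is that the projection is a hard constraint: every iterate lies exactly on the feasible set, rather than being softly penalized for drifting away from it.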
Primary Area: generative models
Submission Number: 4330