Nabla-R2D3: Effective and Efficient 3D Diffusion Alignment with 2D Rewards

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: 3D Generation, Diffusion Model, Alignment, Reward Finetuning, Reinforcement Learning
TL;DR: We extend a recently proposed gradient-informed, RL-based finetuning method to the task of reward finetuning/alignment for 3D-native diffusion models.
Abstract: Generating high-quality, photorealistic 3D assets remains a longstanding challenge in 3D vision and computer graphics. Although state-of-the-art generative models, such as diffusion models, have made significant progress in 3D generation, they often fall short of human-designed content due to a limited ability to follow instructions, align with human preferences, or produce realistic textures, geometries, and physical attributes. In this paper, we introduce Nabla-R2D3, a highly effective and sample-efficient reinforcement learning alignment framework for 3D-native diffusion models using 2D rewards. Built upon the recently proposed Nabla-GFlowNet method for reward finetuning, Nabla-R2D3 enables effective adaptation of 3D diffusion models through pure 2D reward feedback. Extensive experiments show that, unlike naive finetuning baselines, which either fail to converge or suffer from overfitting, Nabla-R2D3 consistently achieves higher rewards and reduced prior forgetting within a few finetuning steps.
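To make the core idea of "aligning a 3D generator with a 2D reward" concrete, the following is a minimal toy sketch, not the authors' code: it generates a 3D asset, renders it differentiably to 2D, scores the render, and backpropagates the reward gradient into the 3D generator's weights. All names here (Toy3DGenerator, toy_render, toy_reward) are hypothetical stand-ins for a 3D-native diffusion model, a differentiable renderer, and a pretrained 2D reward model, and the sketch omits the Nabla-GFlowNet objective and prior-preservation terms described in the paper.

```python
# Toy sketch: reward finetuning of a 3D generator via a 2D reward signal.
import torch
import torch.nn as nn


class Toy3DGenerator(nn.Module):
    """Stand-in for a 3D-native diffusion model: maps noise to a voxel grid."""

    def __init__(self, res: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, res ** 3)
        )
        self.res = res

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        occ = torch.sigmoid(self.net(z))  # occupancy in [0, 1]
        return occ.view(-1, self.res, self.res, self.res)


def toy_render(voxels: torch.Tensor) -> torch.Tensor:
    """Crude differentiable 'renderer': orthographic occupancy projection."""
    return voxels.mean(dim=-1)  # collapse the depth axis -> (B, res, res) image


def toy_reward(images: torch.Tensor) -> torch.Tensor:
    """Placeholder 2D reward that favors high-contrast renders; a real setup
    would use a pretrained image reward model here."""
    return images.var(dim=(-2, -1))


gen = Toy3DGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-4)

for step in range(100):
    z = torch.randn(8, 64)
    voxels = gen(z)                    # sample a batch of 3D assets
    images = toy_render(voxels)        # differentiable 2D views
    loss = -toy_reward(images).mean()  # ascend the 2D reward
    opt.zero_grad()
    loss.backward()                    # 2D reward gradient flows into 3D weights
    opt.step()
```

The point of the sketch is the gradient path: because rendering is differentiable, a purely 2D reward can update the parameters of the 3D generator without any 3D-level supervision.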
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 7871