Keywords: Neural Rendering; 3D Generation; Relighting
Abstract: Generating relightable 3D assets from a single image is fundamentally ill-posed: geometry, material, and lighting are deeply entangled, making both principle-driven decomposition and end-to-end neural generation brittle or inconsistent. We propose RelitTrellis, a homogenize-then-synthesize framework built on a Lighting-Homogenized Structured 3D Latent (LH-SLAT). LH-SLAT attenuates shadows and unstable highlights while preserving geometry-consistent diffuse cues, providing a well-conditioned substrate for relighting. From a casually lit input, RelitTrellis first derives LH-SLAT and then synthesizes 3D Gaussian parameters conditioned on target illumination, efficiently capturing higher-order light–material interactions such as soft shadows and indirect reflections. Experiments on the Digital Twin Category, Aria Digital Twin, and Objaverse benchmarks show that RelitTrellis achieves state-of-the-art quality, strong cross-object and cross-illumination generalization, consistent multi-view rendering, and real-time feed-forward inference without per-object optimization.
Primary Area: generative models
Submission Number: 19007