Keywords: Gaussian Splatting, Neural Radiance Field, Tensor Decomposition
TL;DR: We present RingLight-GS, a compact view-dependent rendering framework that factorizes appearance into base color and directional residual, using Neural Tensor Ring decomposition to model high-frequency lighting with low storage.
Abstract: 3D Gaussian Splatting (3DGS) achieves impressive real-time novel view synthesis by directly rendering Gaussian primitives. However, it incurs substantial storage demands and struggles to model high-frequency, view-dependent appearance effects under complex illumination. We introduce RingLight-GS, a compact framework that effectively models scene color in 3DGS, delivering high-quality rendering under complex lighting while greatly reducing storage costs. By disentangling static albedo from dynamic lighting, the scene color is separated into a view-independent base color and a view-dependent residual color, with the base color learned per Gaussian in the same manner as the 3DGS opacity. Specifically, the residual color is derived from view-dependent appearance features via a neural tensor ring regression model conditioned on spatial position and viewing direction. Extensive experiments on synthetic and real-world datasets demonstrate that RingLight-GS consistently outperforms both NeRF-based and 3DGS-based baselines, delivering sharper highlights, better material consistency, and lower perceptual error with minimal memory overhead.
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 11038
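The abstract attributes the storage savings to a tensor ring decomposition. The paper's exact parameterization is not given here, but the following minimal NumPy sketch illustrates the general idea the method builds on: a large tensor is represented by small three-way cores whose ranks close cyclically, and each entry is recovered as the trace of a product of core slices. All shapes, ranks, and names below are illustrative, not taken from RingLight-GS.

```python
import numpy as np

def tr_reconstruct(cores):
    """Reconstruct a full tensor from tensor-ring cores.

    Each core G_k has shape (r_k, n_k, r_{k+1}), with ranks closing
    cyclically so that r_{d+1} == r_1. Entry (i_1, ..., i_d) equals
    trace(G_1[:, i_1, :] @ G_2[:, i_2, :] @ ... @ G_d[:, i_d, :]).
    """
    shape = tuple(g.shape[1] for g in cores)
    full = np.empty(shape)
    for idx in np.ndindex(*shape):
        mat = np.eye(cores[0].shape[0])
        for g, i in zip(cores, idx):
            mat = mat @ g[:, i, :]
        full[idx] = np.trace(mat)
    return full

# Toy example: an 8x8x8 tensor with all tensor-ring ranks equal to 2.
rng = np.random.default_rng(0)
cores = [rng.standard_normal((2, 8, 2)) for _ in range(3)]
T = tr_reconstruct(cores)

print(T.shape)                                    # (8, 8, 8)
# Storage: sum of core sizes vs. the full tensor it represents.
print(sum(g.size for g in cores), "vs", T.size)   # 96 vs 512
```

The compression ratio grows with tensor size: the cores scale linearly in each mode dimension while the full tensor scales multiplicatively, which is consistent with the low-storage claim in the abstract.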