NOVA-GS: Noise-Aware View-Consistent Gaussian Splatting for Low-Light Novel View Synthesis

Published: 03 May 2026, Last Modified: 03 May 2026
Venue: CVPR 2026 Workshop 3D4S (Oral)
License: CC BY 4.0
Keywords: Low-light 3D Reconstruction, 3D Gaussian Splatting, Novel View Synthesis, Noise-Aware Modeling, Self-Supervised Denoising, Multi-View Consistency, Spherical Harmonic Loss
TL;DR: We propose NOVA-GS, a unified low-light 3D Gaussian Splatting framework that jointly models image refinement, noise-aware denoising, and reconstruction within a single optimization process, without ground-truth supervision.
Abstract: Reconstructing 3D scenes under real-world low-light conditions remains highly challenging due to severe sensor noise, low signal-to-noise ratios, and color degradation, which corrupt photometric supervision and destabilize geometry estimation. Existing approaches suffer from two key limitations: unreliable initialization, as traditional Structure-from-Motion (SfM) fails on degraded inputs, and inconsistent geometry arising from per-view 2D enhancement methods that ignore cross-view coherence. Consequently, current methods either rely on well-lit reference data for initialization, which is impractical in real-world low-light scenarios, or produce unstable multi-view geometry in extreme conditions. To address these challenges, we propose NOVA-GS, a unified noise-aware framework for low-light 3D Gaussian Splatting that subsumes enhancement, denoising, and geometry optimization within a single process. We first leverage VGGT-based feed-forward estimation to obtain robust camera poses and geometry directly from degraded inputs, obviating the need for SfM. Building on this initialization, our framework consists of three coupled modules: a structure-aware enhancement module for exposure correction, a self-supervised denoising module with targeted blind-spot masking that generates clean pseudo-supervision, and a consistency-driven 3D Gaussian Splatting optimization that enforces cross-view geometric constraints. We further introduce a noise-guided spherical harmonic regularization to suppress view-dependent artifacts in noisy regions. By jointly modeling initialization and optimization, our method mitigates both initialization failure and cross-view inconsistency, preventing noise from propagating into geometry and appearance. Extensive experiments across diverse real-world low-light datasets demonstrate improved geometric fidelity, color consistency, and robustness without requiring paired supervision or well-lit references.
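To illustrate the blind-spot masking idea behind the self-supervised denoising module: the paper does not specify its exact masking scheme, so the following is a minimal Noise2Void-style sketch (an assumption, not NOVA-GS's implementation). A random fraction of pixels is replaced by values from nearby pixels; a denoiser trained to reconstruct the original values at only those locations never sees the noisy pixel it must predict, so it cannot learn the identity mapping.

```python
import numpy as np

def blind_spot_mask(img, mask_frac=0.02, rng=None):
    """Replace a random fraction of pixels with values drawn from
    random neighbours, returning the masked image and the boolean
    mask of replaced locations. A denoiser is then supervised only
    at masked positions (Noise2Void-style blind-spot training)."""
    rng = np.random.default_rng(rng)
    h, w = img.shape[:2]
    n = max(1, int(mask_frac * h * w))
    ys = rng.integers(0, h, n)
    xs = rng.integers(0, w, n)
    # Random neighbour offsets in [-2, 2], avoiding the (0, 0) offset
    # so the replacement value never comes from the pixel itself.
    dy = rng.integers(-2, 3, n)
    dx = rng.integers(-2, 3, n)
    zero = (dy == 0) & (dx == 0)
    dy[zero] = 1
    ny = np.clip(ys + dy, 0, h - 1)
    nx = np.clip(xs + dx, 0, w - 1)
    masked = img.copy()
    masked[ys, xs] = img[ny, nx]
    mask = np.zeros((h, w), dtype=bool)
    mask[ys, xs] = True
    return masked, mask

img = np.random.default_rng(0).random((64, 64)).astype(np.float32)
masked, mask = blind_spot_mask(img, rng=1)
# Training loss would be restricted to masked pixels, e.g.:
# loss = ((net(masked) - img)[mask] ** 2).mean()
```

Only the masked coordinates contribute to the loss; all unmasked pixels pass through unchanged, which is what makes the supervision "blind" to the pixel being denoised.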
Supplementary Material: pdf
Submission Number: 18