Keywords: Neural Rendering, 3D Reconstruction, Novel View Synthesis, Geometric Constraints, High Dynamic Range (HDR), Tone Mapping
TL;DR: Unified HDR-aware Gaussian splatting with geometric, semantic, and tone-mapped constraints for robust high-fidelity novel view synthesis.
Abstract: Recent advances in neural rendering have markedly improved 3D reconstruction and novel view synthesis, yet existing methods still degrade under complex illumination, weak or low-texture regions, and cross-view inconsistencies introduced by camera ISP pipelines. We propose a unified scene representation framework with three components: it densifies geometric supervision via depth-guided virtual view generation and multi-view consistency priors, improving fidelity in weak-texture and noisy areas; it enforces view-independent radiance consistency through bilateral filtering that removes ISP enhancement residuals, decoupling in-camera processing from radiance field optimization; and it performs semantics-guided deferred 3D Gaussian field reconstruction, fusing pretrained high-level semantic features with material parameters to handle challenging materials and lighting. The framework further models scene radiance explicitly with a learnable asymmetric tone-mapping grid to infer pixel colors more accurately and preserve HDR detail, and employs a coarse-to-fine optimization schedule that improves stability and convergence. Experiments on indoor and outdoor datasets show consistent quantitative and qualitative gains in reconstruction fidelity and novel view synthesis, with robustness under sparse inputs, weak textures, complex illumination, and HDR conditions, underscoring the benefits of integrating geometric, photometric, and semantic priors in real-world deployments.
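The learnable tone-mapping grid described in the abstract can be illustrated with a minimal sketch. The paper's exact asymmetric parameterization is not given here, so this assumes a simple per-curve monotonic lookup table whose knot increments are kept positive via softplus and queried by linear interpolation; the class name `ToneMapGrid` and the knot count are illustrative, not from the paper:

```python
import numpy as np

def softplus(x):
    # Smooth positive reparameterization, keeps each curve increment > 0.
    return np.log1p(np.exp(x))

class ToneMapGrid:
    """Minimal sketch of a learnable monotonic tone-mapping curve.

    A 1D lookup table is built by accumulating softplus-positive
    increments, which enforces strict monotonicity; HDR radiance is
    mapped to LDR color by linear interpolation over the knots.
    All names and the knot count are assumptions for illustration.
    """

    def __init__(self, n_knots=16, seed=0):
        rng = np.random.default_rng(seed)
        # Raw (learnable) parameters; in a real system these would be
        # optimized jointly with the radiance field.
        self.raw = rng.normal(0.0, 0.1, size=n_knots)

    def curve(self):
        steps = softplus(self.raw)
        y = np.concatenate([[0.0], np.cumsum(steps)])
        return y / y[-1]  # normalize output range to [0, 1]

    def __call__(self, radiance):
        # Map HDR radiance (assumed pre-scaled to [0, 1]) to LDR color.
        x = np.clip(radiance, 0.0, 1.0)
        knots_x = np.linspace(0.0, 1.0, len(self.raw) + 1)
        return np.interp(x, knots_x, self.curve())

tm = ToneMapGrid()
ldr = tm(np.array([0.0, 0.25, 0.5, 1.0]))
```

An asymmetric variant, as the abstract suggests, could use separate curves per color channel or distinct parameter sets above and below an exposure pivot; the monotonicity constraint would apply to each curve independently.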
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 10598