PRTGS: Precomputed Radiance Transfer of Gaussian Splats for Real-Time High-Quality Relighting

Published: 20 Jul 2024; Last Modified: 21 Jul 2024. MM2024 Poster. License: CC BY 4.0
Abstract: We propose Precomputed Radiance Transfer of Gaussian Splats (PRTGS), a real-time, high-quality relighting method for Gaussian splats in low-frequency lighting environments that captures soft shadows and interreflections by precomputing the radiance transfer of 3D Gaussian splats. Existing studies have demonstrated that 3D Gaussian splatting (3DGS) outperforms neural fields in efficiency for dynamic lighting scenarios. However, current relighting methods based on 3DGS still struggle to compute high-quality shadows and indirect illumination in real time under dynamic lighting, leading to unrealistic rendering results. We solve this problem by precomputing the expensive transport simulations required for complex transfer functions such as shadowing; the resulting transfer functions are represented as dense sets of vectors or matrices for every Gaussian splat. We introduce distinct precomputation methods tailored to the training and rendering stages, along with dedicated ray tracing and indirect-lighting precomputation techniques for 3D Gaussian splats, to accelerate training and compute accurate indirect lighting with respect to the environment light. Experimental analyses demonstrate that our approach achieves state-of-the-art visual quality while maintaining competitive training times and, importantly, allows high-quality real-time (30+ fps) relighting of dynamic lighting and relatively complex scenes at 1080p resolution.
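To illustrate the core idea of precomputed radiance transfer in low-frequency lighting, the sketch below shows how per-splat transfer vectors reduce relighting to a dot product against the environment light's spherical harmonic (SH) coefficients. This is a hedged, minimal illustration of classic PRT applied per Gaussian splat, not the paper's implementation; all array names, shapes, and the random values standing in for the precomputed transport simulation are assumptions.

```python
import numpy as np

# Minimal PRT relighting sketch (illustrative, not the paper's code).
# Each Gaussian splat stores a precomputed transfer vector of SH
# coefficients that bakes in visibility (soft shadows) and
# interreflections. Relighting under a new environment map then reduces
# to a per-channel dot product with the light's SH coefficients, which
# is cheap enough to run per frame in real time.

N_SPLATS = 4   # toy number of splats (hypothetical)
N_SH = 9       # 3rd-order SH: 9 basis coefficients (low-frequency)

rng = np.random.default_rng(0)

# Precomputed offline: per-splat transfer vectors, one per RGB channel.
# Shape (N_SPLATS, 3, N_SH); random values stand in for the expensive
# transport simulation described in the abstract.
transfer = rng.uniform(0.0, 1.0, size=(N_SPLATS, 3, N_SH))

# At render time: project the current environment light into SH once per
# frame. Shape (3, N_SH), one coefficient vector per RGB channel.
light_sh = rng.uniform(0.0, 1.0, size=(3, N_SH))

# Relit outgoing radiance per splat: sum over SH basis, per channel.
radiance = np.einsum('ncs,cs->nc', transfer, light_sh)  # (N_SPLATS, 3)

print(radiance.shape)
```

Because the transfer vectors are fixed after precomputation, only `light_sh` changes when the light moves, which is why this formulation supports dynamic lighting at interactive rates.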
Primary Subject Area: [Content] Vision and Language
Secondary Subject Area: [Content] Vision and Language
Relevance To Conference: 3D Gaussian Splatting (3DGS) has garnered significant attention from the community as a promising approach for various tasks in 3D scene reconstruction and has replaced Neural Radiance Fields in many scenarios, such as [1] and [2]. 3DGS makes it possible for individuals to reconstruct their surrounding environment in minutes using contemporary devices such as smartphones and computers. Furthermore, individuals can modify their reconstructed world according to their unique preferences. This technology can greatly facilitate the creation of virtual and augmented reality experiences, which makes it particularly attractive for multimedia applications and could spur innovation within the multimedia industry. The foundation for achieving this is a real-time, high-quality method for inverse rendering, relighting, and scene editing. In this paper, we propose such a method, giving people the means to create worlds based on the existing world. [1] Zeng J, Bao C, Chen R, et al. Mirror-NeRF: Learning Neural Radiance Fields for Mirrors with Whitted-Style Ray Tracing[C]//Proceedings of the 31st ACM International Conference on Multimedia. 2023: 4606-4615. [2] Meng J, Li H, Wu Y, et al. Mirror-3DGS: Incorporating Mirror Reflections into 3D Gaussian Splatting[J]. arXiv preprint arXiv:2404.01168, 2024.
Supplementary Material: zip
Submission Number: 801