Abstract: This paper proposes a novel method that extends 3D Gaussian Splatting (3DGS) to the audio domain, enabling novel-view acoustic synthesis using audio data alone. While recent advances in 3DGS have significantly improved novel-view synthesis in the visual domain, its application to audio has been overlooked, despite the critical role of spatial audio in immersive AR/VR experiences. Our method addresses this gap by constructing an audio point cloud from audio recorded at source viewpoints and rendering spatial audio at arbitrary viewpoints. Experimental results show that our method outperforms existing approaches that rely on audio-visual information, demonstrating the feasibility of extending 3DGS to audio.
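The abstract does not specify implementation details, but the core idea of rendering audio at arbitrary viewpoints from a point-cloud-like representation can be illustrated with a minimal sketch. Below, each audio "Gaussian" carries a 3D position, an isotropic scale, and a per-frequency magnitude spectrum; a novel-view spectrum is produced by blending all spectra with Gaussian falloff weights at the listener position. All names, shapes, and the blending rule are illustrative assumptions, not the paper's method.

```python
# Conceptual sketch only: audio Gaussians blended at a novel listener pose.
# The falloff-weighted blending here is an assumed stand-in for whatever
# splatting formulation the paper actually uses.
import numpy as np

rng = np.random.default_rng(0)
num_gaussians, num_freq_bins = 64, 257  # e.g. bins of a 512-point STFT

positions = rng.uniform(-5.0, 5.0, size=(num_gaussians, 3))   # 3D centers
scales = rng.uniform(0.5, 2.0, size=num_gaussians)            # isotropic std dev
spectra = rng.uniform(0.0, 1.0, size=(num_gaussians, num_freq_bins))

def render_spectrum(listener_pos: np.ndarray) -> np.ndarray:
    """Blend per-Gaussian magnitude spectra at an arbitrary viewpoint."""
    d2 = np.sum((positions - listener_pos) ** 2, axis=1)  # squared distances
    weights = np.exp(-0.5 * d2 / scales**2)               # Gaussian falloff
    weights /= weights.sum() + 1e-8                       # normalize weights
    return weights @ spectra                              # (num_freq_bins,)

novel_view = np.array([1.0, 0.0, 2.0])
print(render_spectrum(novel_view).shape)  # -> (257,)
```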