Keywords: 3D Reconstruction, Gaussian Splatting, Sensor Simulation
TL;DR: AstroSplat enables reusable 3D Gaussian assets for autonomous vehicle simulation by optimizing decoders per Gaussian rather than per scene, allowing high-fidelity transfer of learned representations across scenarios.
Abstract: A key component for enabling autonomous vehicles (AVs) at scale is realistic camera and lidar data simulation for exhaustive validation and testing. To this end, 3D Gaussian splatting (3DGS) has gained popularity for simulating camera data due to its high fidelity and rendering speed. A recent work, SplatAD, is the first 3DGS-based method that renders lidar data in addition to camera data. To capture view-dependent effects, SplatAD uses decoders for camera and lidar renderings that are optimized per scene. However, scene-specific decoders limit the reusability of the learned Gaussians as assets across scenes, since the learned feature representations are tied to each scene. Such reusability is crucial for generating rare-event scenarios at scale for AV stack evaluation and synthetic data creation. Addressing this key limitation, we propose AstroSplat, a method oriented toward asset transfer across scenes with memory-efficient learned representations. Instead of optimizing the decoders per scene, AstroSplat optimizes them per Gaussian, enabling high-fidelity transfer of assets across scenes. Empirical results across a suite of benchmark datasets and tasks demonstrate that AstroSplat is competitive with prior methods in reconstruction quality, for both camera and lidar renderings. On the asset transfer task, AstroSplat outperforms SplatAD by $10^4\times$ on image generation quality metrics.
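To make the architectural distinction concrete, below is a minimal sketch contrasting a shared per-scene decoder (as in SplatAD) with per-Gaussian decoders (the idea the abstract attributes to AstroSplat). All names, dimensions, and the view-direction encoding are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: per-scene vs. per-Gaussian view-dependent decoding.
# Sizes and encodings below are assumed for illustration only.

N = 100_000   # number of Gaussians
F = 16        # per-Gaussian feature dimension
D = 8         # dimension of an encoded view direction
C = 3         # output channels (RGB)

# Per-scene decoder: one shared MLP maps (feature, view encoding) -> color.
# Its weights are tied to the scene, so moving Gaussians to a new scene
# breaks the learned feature/decoder pairing.
scene_decoder = nn.Sequential(nn.Linear(F + D, 32), nn.ReLU(), nn.Linear(32, C))

features = torch.randn(N, F)   # learned per-Gaussian features
view_enc = torch.randn(N, D)   # encoded view direction per Gaussian
rgb_scene = scene_decoder(torch.cat([features, view_enc], dim=-1))

# Per-Gaussian decoders: each Gaussian stores its own tiny linear decoder
# (weights W_i, bias b_i) applied to the view encoding. The decoder travels
# with the Gaussian, so the asset stays self-contained across scenes.
W = torch.randn(N, C, D) * 0.01   # per-Gaussian decoder weights
b = torch.randn(N, C) * 0.01      # per-Gaussian decoder biases
rgb_per_gaussian = torch.einsum('ncd,nd->nc', W, view_enc) + b

print(rgb_scene.shape, rgb_per_gaussian.shape)  # (N, 3) each
```

Under this reading, transferring an asset amounts to copying the Gaussians together with their decoder parameters, with no dependency on a scene-level network.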
Primary Area: applications to robotics, autonomy, planning
Submission Number: 21031