Keywords: 3D Reconstruction, CAD models, Sensor Simulation, Self-Driving
TL;DR: We propose a new method to reconstruct objects from sensor observations; the resulting assets are high-fidelity, part-aware, geometry-aligned, and compatible with graphics engines, enabling realistic and controllable simulation efficiently.
Abstract: Realistic simulation is key to enabling safe and scalable development of self-driving vehicles. A core component is simulating the sensors so that the entire autonomy system can be tested in simulation. Sensor simulation involves modeling traffic participants, such as vehicles, with high-quality appearance and articulated geometry, and rendering them in real-time. The self-driving industry has employed artists to build these assets. However, this is expensive, slow, and may not reflect reality. Instead, reconstructing assets automatically from sensor data collected in the wild would provide a better path to generating a diverse and large set of assets with good real-world coverage. However, current reconstruction approaches struggle on in-the-wild sensor data due to its sparsity and noise. To tackle these issues, we present CADSim, which combines part-aware object-class priors via a small set of CAD models with differentiable rendering to automatically reconstruct vehicle geometry, including articulated wheels, with high-quality appearance. Our experiments show that our approach recovers more accurate shapes from sparse data than existing approaches. Importantly, it also trains and renders efficiently. We demonstrate our reconstructed vehicles in a wide range of applications, including accurate testing of autonomy perception systems.
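The abstract describes fitting CAD-derived, part-aware shape priors to sparse, noisy sensor data through differentiable optimization. The toy sketch below is our own illustration of that general idea, not the paper's implementation: it deforms a template point set (standing in for a CAD prior) with a few low-dimensional parameters and fits it to a sparse observation by minimizing a Chamfer-style loss with gradient descent. The `chamfer` helper, the synthetic data, and the per-axis `scale` parameterization are all assumptions made for illustration.

```python
import torch

def chamfer(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between (N, 3) and (M, 3) point sets."""
    d = torch.cdist(a, b)                              # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# Hypothetical data: a unit-sphere "template" and a sparse, noisy, scaled observation.
torch.manual_seed(0)
template = torch.randn(2000, 3)
template = template / template.norm(dim=1, keepdim=True)
observed = 1.5 * template[::20] + 0.01 * torch.randn(100, 3)

# Optimize a per-axis scale: a toy analogue of low-dimensional, part-aware shape parameters.
scale = torch.ones(3, requires_grad=True)
opt = torch.optim.Adam([scale], lr=0.05)

for step in range(200):
    opt.zero_grad()
    loss = chamfer(template * scale, observed)
    loss.backward()
    opt.step()

print("recovered scale:", scale.detach())              # approximately [1.5, 1.5, 1.5]
```

In the same spirit, the paper's approach optimizes template-based vehicle geometry and appearance against real sensor observations; the sketch only conveys the template-fitting-by-gradient-descent flavour, not the rendering or articulation components.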
Student First Author: yes
Supplementary Material: zip
Website: http://www.cs.toronto.edu/~wangjk/publications/cadsim.html