Abstract: Production-level Autonomous Driving Systems (ADSs), such as Google Waymo and Baidu Apollo, typically rely on the multi-sensor fusion (MSF) strategy to perceive their surroundings. This strategy increases perception robustness by combining the respective strengths of cameras and LiDAR, and it directly affects the safety-critical driving decisions of autonomous vehicles (AVs). However, in real-world autonomous driving scenarios, both cameras and LiDAR are prone to various faults that can significantly impact the decision-making and subsequent behaviors of ADSs. It is therefore important to thoroughly test the robustness of MSF during development. Existing testing methods focus only on identifying corner cases that MSF fails to detect; how sensor faults affect the system-level behaviors of ADSs remains largely uninvestigated. To address this gap, we present FADE, the first testing methodology to comprehensively assess the fault tolerance of MSF perception-based ADSs. We systematically build fault models for both cameras and LiDAR in AVs and inject these faults into MSF-based ADSs to test their behaviors in various testing scenarios. To effectively and efficiently explore the parameter spaces of the sensor fault models, we design a feedback-guided differential fuzzer that uncovers safety violations of ADSs caused by the injected faults. We evaluate FADE on Baidu Apollo, a representative and practical industrial ADS. The evaluation results demonstrate the practical value of FADE and reveal several useful findings. We further conduct physical experiments using a Baidu Apollo 6.0 EDU AV to validate these findings in real-world settings.