Keywords: 3D Object Detection, Autonomous Driving
Abstract: 3D object detection for autonomous driving relies on multiple sensors, but running them in parallel increases power consumption, limiting operational time and overall system performance. Moreover, not all scenarios require every sensor: an excessive number of sensors not only constrains inference speed but can also degrade accuracy by introducing noise. To achieve Pareto optimality between accuracy and efficiency, we introduce $\texttt{AdaSensor}$, an $\textbf{Ada}$ptive $\textbf{Sensor}$ selection framework for power-efficient 3D object detection. $\texttt{AdaSensor}$ first introduces a Mixture of Sensors (MoS) module that employs a lightweight sensor router to select only the necessary sensors during inference. This selective activation reduces the computational load by processing fewer inputs and lowers power consumption by deactivating unused sensors. However, a naive MoS suffers from inference instability caused by the latency overhead of frequent sensor switching. To mitigate this, $\texttt{AdaSensor}$ incorporates a novel non-congested switching policy that judiciously limits the switching frequency, enhancing system stability and efficiency while extending sensor lifetime. To demonstrate the effectiveness and efficiency of our method, we evaluate $\texttt{AdaSensor}$ on the NVIDIA Jetson Orin, a widely used autonomous driving computing platform. On the nuScenes dataset, our method reduces system power consumption by 11.7\% and inference latency by 11.3\%. The code will be made publicly available.
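To make the two mechanisms in the abstract concrete, below is a minimal PyTorch sketch of (a) a lightweight sensor router that activates only a top-k subset of sensors per frame and (b) a hold-based policy that caps switching frequency. All names and hyperparameters here (`SensorRouter`, `min_hold`, the top-k gating) are illustrative assumptions; the authors' actual implementation is not yet public.

```python
# Illustrative sketch of a Mixture-of-Sensors router with a switching cap.
# This is an assumed design, NOT the paper's released code.
import torch
import torch.nn as nn


class SensorRouter(nn.Module):
    """Lightweight gate that scores sensors and activates the top-k."""

    def __init__(self, feat_dim: int, num_sensors: int, k: int = 2):
        super().__init__()
        self.gate = nn.Linear(feat_dim, num_sensors)  # tiny gating head
        self.k = k

    def forward(self, scene_feat: torch.Tensor) -> torch.Tensor:
        scores = self.gate(scene_feat)                    # (B, num_sensors)
        topk = scores.topk(self.k, dim=-1).indices
        mask = torch.zeros_like(scores).scatter_(-1, topk, 1.0)
        return mask                                       # 1.0 = sensor active


def non_congested_switch(prev_mask, new_mask, frames_since_switch, min_hold=5):
    """Keep the current sensor set until it has been held for `min_hold`
    frames, limiting switching frequency (one plausible policy)."""
    if frames_since_switch < min_hold:
        return prev_mask, frames_since_switch + 1
    if not torch.equal(prev_mask, new_mask):
        return new_mask, 0  # switch sensors and reset the hold counter
    return prev_mask, frames_since_switch + 1
```

In this reading, the router adds only a single linear layer of overhead, and the hold counter trades a little routing responsiveness for fewer activation/deactivation cycles, which is consistent with the abstract's claims about stability and sensor lifetime.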
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 10007