Adapt3D: Lightweight and Adaptive 3D Detection Framework for Mobile GPUs

Published: 01 Jan 2025 · Last Modified: 16 May 2025 · COMSNETS 2025 · CC BY-SA 4.0
Abstract: Real-time environmental perception is crucial for the safe and effective operation of autonomous systems. However, accurate 3D object detection from LiDAR point clouds is computationally intensive, straining embedded GPUs with limited processing capability. In this paper, we present Adapt3D, an adaptive 3D object detection system that balances accuracy and inference latency by building on state-of-the-art detectors such as DSVT [CVPR'23] and CenterPoint [CVPR'21]. Adapt3D features a Multi-Branch Framework with over 40 execution paths, selectable via tuning knobs to match dynamic real-time scenarios and latency requirements. This adaptability enables Adapt3D to consistently achieve a Pareto-optimal balance of accuracy and efficiency, which is vital for embedded GPU-based 3D perception. Our evaluations on NVIDIA Jetson Orin and Xavier platforms show that Adapt3D not only meets diverse inference-latency Service-Level Objectives on three public datasets (50 to 350 ms on Waymo, 100 to 250 ms on nuScenes, and 35 to 75 ms on KITTI) but also outperforms established 3D detection baselines such as DSVT and CenterPoint.
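The core idea of selecting an execution path to meet a latency Service-Level Objective can be sketched as follows. This is a minimal illustration, not the authors' implementation: the branch names, latency, and accuracy figures are hypothetical profiling results, and the selection rule (most accurate branch whose profiled latency fits the SLO) is one plausible reading of "Pareto-optimal balance".

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BranchProfile:
    name: str          # hypothetical execution-path identifier
    latency_ms: float  # profiled inference latency on the target GPU (assumed)
    accuracy: float    # profiled detection accuracy, e.g. mAP (assumed)

def select_branch(profiles, slo_ms):
    """Pick the most accurate branch whose profiled latency meets the SLO.

    If no branch fits the SLO, fall back to the fastest branch so the
    system still produces detections, just late.
    """
    feasible = [p for p in profiles if p.latency_ms <= slo_ms]
    if not feasible:
        return min(profiles, key=lambda p: p.latency_ms)
    return max(feasible, key=lambda p: p.accuracy)

# Hypothetical offline-profiled branches of a multi-branch detector.
profiles = [
    BranchProfile("pillar-small", 40.0, 0.58),
    BranchProfile("voxel-medium", 120.0, 0.66),
    BranchProfile("dsvt-large", 300.0, 0.72),
]

print(select_branch(profiles, slo_ms=150.0).name)  # → voxel-medium
```

In practice such a lookup would run per frame, with the SLO (and possibly the profiles themselves) updated as scene complexity or platform load changes, which is where the paper's tuning knobs come in.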