SOGDet: Semantic-Occupancy Guided Multi-view 3D Object Detection

Published: 05 Jan 2024 · Last Modified: 12 Nov 2025 · OpenReview Archive Direct Upload · License: CC BY 4.0
Abstract: In the field of autonomous driving, accurate and comprehensive perception of the 3D environment is crucial. Bird's Eye View (BEV) based methods have emerged as a promising solution for 3D object detection using multi-view images as input. However, existing 3D object detection methods often ignore the physical context in the environment, such as sidewalks and vegetation, resulting in sub-optimal performance. In this paper, we propose a novel approach called SOGDet (Semantic-Occupancy Guided Multi-view 3D Object Detection), which leverages a 3D semantic-occupancy branch to improve the accuracy of 3D object detection. In particular, the physical context modeled by semantic occupancy helps the detector perceive scenes in a more holistic view. SOGDet is flexible to use and can be seamlessly integrated with most existing BEV-based methods. To evaluate its effectiveness, we apply this approach to several state-of-the-art baselines and conduct extensive experiments on the nuScenes dataset. Our results show that SOGDet consistently enhances the performance of three baseline methods in terms of nuScenes Detection Score (NDS) and mean Average Precision (mAP). This indicates that combining 3D object detection with 3D semantic occupancy leads to a more comprehensive perception of the 3D environment, thereby helping to build more robust autonomous driving systems. The code is available at: https://github.com/zhouqiu/SOGDet.
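The abstract describes the architecture only at a high level: a semantic-occupancy branch attached to a shared BEV representation, guiding the detection head. As a rough illustration of that dual-branch idea, the following is a minimal PyTorch sketch; all module names, channel sizes, and the concatenation-based fusion here are assumptions for illustration, not the authors' actual implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn

class SOGDetSketch(nn.Module):
    """Hypothetical dual-branch head: 3D object detection guided by a
    3D semantic-occupancy branch on a shared BEV feature map.
    All shapes and modules are illustrative assumptions only."""

    def __init__(self, bev_channels: int = 256,
                 num_det_outputs: int = 10,   # e.g. nuScenes detection classes
                 num_occ_classes: int = 17,   # e.g. semantic-occupancy labels
                 occ_height: int = 16):       # voxels along the z axis
        super().__init__()
        # Shared BEV encoder; stands in for any BEV-based baseline backbone.
        self.bev_encoder = nn.Sequential(
            nn.Conv2d(bev_channels, bev_channels, 3, padding=1),
            nn.BatchNorm2d(bev_channels),
            nn.ReLU(inplace=True),
        )
        # Occupancy branch: predicts a semantic class per (x, y, z) voxel
        # by folding the height dimension into the channel axis.
        self.occ_head = nn.Conv2d(bev_channels, num_occ_classes * occ_height, 1)
        # Detection branch consumes BEV features fused with occupancy logits,
        # so physical context (sidewalk, vegetation, ...) guides detection.
        self.fuse = nn.Conv2d(bev_channels + num_occ_classes * occ_height,
                              bev_channels, 1)
        self.det_head = nn.Conv2d(bev_channels, num_det_outputs, 1)
        self.num_occ_classes, self.occ_height = num_occ_classes, occ_height

    def forward(self, bev_feat: torch.Tensor):
        # bev_feat: (B, C, H, W), produced by a multi-view image-to-BEV transform.
        feat = self.bev_encoder(bev_feat)
        occ_logits = self.occ_head(feat)               # (B, cls * Z, H, W)
        det_logits = self.det_head(self.fuse(
            torch.cat([feat, occ_logits], dim=1)))     # (B, det, H, W)
        b, _, h, w = occ_logits.shape
        occ_logits = occ_logits.view(b, self.num_occ_classes,
                                     self.occ_height, h, w)
        return det_logits, occ_logits

# Both heads would be trained jointly; occupancy supervision supplies the
# "physical context" signal alongside the detection loss.
model = SOGDetSketch()
det, occ = model(torch.randn(2, 256, 128, 128))
print(det.shape, occ.shape)  # (2, 10, 128, 128) and (2, 17, 16, 128, 128)
```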