Refinement Bird's Eye View Feature for 3D Lane Detection with Dual-Branch View Transformation Module

Published: 01 Jan 2024 · Last Modified: 08 Apr 2025 · ICASSP 2024 · CC BY-SA 4.0
Abstract: Detecting 3D lane lines from images is a fundamental, ill-posed problem in autonomous driving. Existing methods are limited in scene robustness and computational efficiency. This paper introduces a 3D lane detection method that addresses these challenges. Our approach is built on a simple yet efficient view transformation module and layer-by-layer refined bird's-eye-view (BEV) features. First, we introduce a dual-branch view transformation module that combines deformable convolutions with view relation modules to convert front-view features into BEV features, improving robustness across diverse scenes. Second, we propose an auxiliary inverse-view-transformation training head that provides additional supervision. Finally, we progressively refine the BEV features, exploiting features from different levels. Experiments on two datasets show that our approach outperforms existing methods, achieving a considerable increase in F1-score.
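The sketch below illustrates how a dual-branch view transformation of the kind described in the abstract could be wired up in PyTorch: one branch uses torchvision's `DeformConv2d` to sample front-view features at learned offsets, the other uses an MLP-style view relation mapping from flattened front-view locations to flattened BEV locations, and the two outputs are fused into a single BEV feature map. All class names, layer sizes, the BEV grid resolution, the adaptive-pooling projection, and the concatenation-plus-1x1-conv fusion are assumptions made for illustration, not the paper's actual implementation.

```python
# Minimal sketch of a dual-branch front-view -> BEV transformation.
# Shapes, layer choices, and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class ViewRelationBranch(nn.Module):
    """MLP-style branch: learns a dense mapping from flattened front-view
    spatial locations to flattened BEV locations (view-relation idea)."""
    def __init__(self, fv_hw, bev_hw, channels):
        super().__init__()
        self.bev_hw = bev_hw
        self.relation = nn.Sequential(
            nn.Linear(fv_hw[0] * fv_hw[1], bev_hw[0] * bev_hw[1]),
            nn.ReLU(inplace=True),
        )
        self.refine = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):                        # x: (B, C, H_fv, W_fv)
        b, c, _, _ = x.shape
        bev = self.relation(x.flatten(2))        # (B, C, H_bev * W_bev)
        bev = bev.view(b, c, *self.bev_hw)
        return self.refine(bev)


class DeformableBranch(nn.Module):
    """Deformable-convolution branch: samples front-view features at learned
    offsets, then projects them onto the BEV grid (here via adaptive pooling
    as a stand-in projection)."""
    def __init__(self, channels, bev_hw, k=3):
        super().__init__()
        self.offset = nn.Conv2d(channels, 2 * k * k, k, padding=k // 2)
        self.deform = DeformConv2d(channels, channels, k, padding=k // 2)
        self.to_bev = nn.AdaptiveAvgPool2d(bev_hw)

    def forward(self, x):
        x = self.deform(x, self.offset(x))
        return self.to_bev(x)


class DualBranchViewTransform(nn.Module):
    """Fuses both branches into a single BEV feature map."""
    def __init__(self, channels, fv_hw, bev_hw):
        super().__init__()
        self.relation_branch = ViewRelationBranch(fv_hw, bev_hw, channels)
        self.deform_branch = DeformableBranch(channels, bev_hw)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, fv_feat):
        bev = torch.cat([self.relation_branch(fv_feat),
                         self.deform_branch(fv_feat)], dim=1)
        return self.fuse(bev)


if __name__ == "__main__":
    fv = torch.randn(2, 64, 24, 40)                   # front-view features
    module = DualBranchViewTransform(64, (24, 40), (32, 32))
    print(module(fv).shape)                           # torch.Size([2, 64, 32, 32])
```

The resulting BEV map could then feed a lane-detection head; an auxiliary head that maps BEV features back to the front view, and a coarse-to-fine refinement over multi-level features, would be added on top in the same spirit as the abstract describes, but are omitted here.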