Abstract: High-definition (HD) mapping is essential for autonomous driving and localization services, providing detailed lane-level road graphs for various applications. Current methodologies primarily segment the geometric structure of lane lines from remote sensing images and extract vectorized road graphs using heuristic methods. However, these approaches fail to adequately account for lane instance information and topological structures. Furthermore, the semiautomated process imposes constraints on the spatial scalability of HD maps. To overcome these limitations, we propose AutoLRG, an automated two-stage method for lane-level road graph construction. For lane geometry prediction, we design a lane segmentation network based on directional supervision and multimodal fusion, incorporating an angle-direction loss and a cross-attention-based fusion module to enhance lane perception and connectivity. For lane instance modeling, we develop a Transformer-based lane decoder that leverages an object detection architecture to extract vectorized lane instances and road vertices in an end-to-end manner. For lane topology construction, we introduce a “road segment-intersection” decoupled model that establishes the connectivity relationships of intersection nodes based on traffic regulations, forming a lane-level directed topological road graph. Ablation studies on two benchmark datasets (UrbanLaneGraph and OpenSatMap) validate the effectiveness of the method, and comparative experiments demonstrate that our approach outperforms existing methods in lane segmentation, instance modeling, and topology construction. Code is available at https://github.com/EchoQiHeng/AutoLRG.
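The abstract does not give the exact formulation of the angle-direction loss used for directional supervision. Below is a minimal, hypothetical sketch assuming it penalizes the angular deviation (via cosine similarity) between predicted and ground-truth per-pixel lane direction vectors over foreground lane pixels; the function name, tensor layout, and masking scheme are illustrative assumptions, and the paper's actual loss may differ.

```python
import torch
import torch.nn.functional as F

def angle_direction_loss(pred_dir, gt_dir, fg_mask, eps=1e-6):
    """Hypothetical angle-direction loss (not the paper's exact formulation).

    pred_dir, gt_dir: (2, H, W) per-pixel lane direction vectors.
    fg_mask: (H, W) binary mask of lane (foreground) pixels.
    """
    # Normalize both direction fields to unit length along the vector channel.
    pred_unit = F.normalize(pred_dir, dim=0, eps=eps)
    gt_unit = F.normalize(gt_dir, dim=0, eps=eps)

    # Cosine similarity per pixel; 1 - cos maps aligned directions to 0
    # and opposite directions to 2, penalizing angular error.
    cos_sim = (pred_unit * gt_unit).sum(dim=0)
    loss_map = 1.0 - cos_sim

    # Average the penalty over foreground lane pixels only.
    fg = fg_mask.float()
    return (loss_map * fg).sum() / fg.sum().clamp(min=1.0)
```

A loss of this form would typically be added to a standard segmentation loss (e.g., cross-entropy) so that the network learns both lane occupancy and local lane orientation, which is what the abstract describes as improving lane perception and connectivity.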
External IDs: dblp:journals/tgrs/QiSYGDT25