Abstract: Lane detection plays a critical role in autonomous driving, requiring low-latency response and robustness to varying lighting conditions to support downstream algorithms effectively. Event cameras, with their low latency and high dynamic range, are well-suited for such tasks. However, current event-based lane detection algorithms fail to fully leverage the unique characteristics of event data. Because event-based lane features are sparse and structurally simple, lacking texture cues, they are easily confused with similar patterns such as street edges or closely spaced lane markings. This poses significant challenges for event-based lane detection. To address these issues, we propose a novel framework built on a momentum distillation paradigm (ELMoD-Net), which uses pseudo targets to guide model training. This approach enables the learning of fine-grained features, improving the differentiation of similar patterns. Additionally, we introduce an Event-Adaptive feature Localization (E-AL) module within the decoder to extract sufficient information from sparse events. Experimental results on the DET dataset demonstrate the efficacy of the proposed method, achieving significant F1 score improvements of 5.65$\%$ in multi-class and 3.69$\%$ in binary-class lane detection tasks.
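The momentum distillation paradigm mentioned above typically maintains an exponential-moving-average (EMA) teacher whose predictions serve as pseudo targets that are blended with the hard labels. The abstract does not give implementation details, so the following is only an illustrative sketch under common assumptions (the function names, the momentum coefficient, and the blending weight `alpha` are hypothetical, not from the paper):

```python
# Illustrative sketch of momentum distillation (assumed mechanics, not the paper's exact method).

def ema_update(teacher, student, momentum=0.995):
    """EMA update: the teacher's weights drift slowly toward the student's."""
    return [momentum * t + (1.0 - momentum) * s for t, s in zip(teacher, student)]

def mixed_target(hard_label, pseudo_prob, alpha=0.4):
    """Blend the one-hot ground-truth label with the teacher's pseudo target."""
    return [(1.0 - alpha) * h + alpha * p for h, p in zip(hard_label, pseudo_prob)]

# Toy weight vectors stand in for full network parameters.
teacher_w = [0.0, 0.0]
student_w = [1.0, -1.0]
teacher_w = ema_update(teacher_w, student_w)   # teacher moves 0.5% toward student
target = mixed_target([1.0, 0.0], [0.7, 0.3])  # softened training target
```

The soft pseudo targets penalize the model less for confusing visually similar classes, which is the intuition behind using distillation to separate lanes from look-alike patterns such as street edges.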
External IDs:dblp:journals/ral/ZhangZLQWLZ25