RoSA Dataset: Road Construction Zone Segmentation for Autonomous Driving

Published: 11 Aug 2024, Last Modified: 20 Sept 2024 · ECCV 2024 W-CODA Workshop Full Paper Track · CC BY 4.0
Keywords: Autonomous Driving, Roadwork Zone, Vision-Language
TL;DR: Our study proposes a novel method for segmenting construction zones in video footage to improve distant detection and safe vehicle rerouting on highways, addressing the limitations of current object detection approaches.
Subject: Corner case mining and generation for autonomous driving
Confirmation: I have read and agree with the submission policies of ECCV 2024 and the W-CODA Workshop on behalf of myself and my co-authors.
Abstract: Current research on road construction environment perception focuses primarily on detecting objects and signs that indicate roadwork. However, this approach requires an additional cognitive step to recognize the full extent of a construction area, complicating immediate recognition, especially on highways. Identifying the start of a construction zone from a distance is crucial for safe and flexible vehicle rerouting. Existing object detection methods struggle to identify these zones from afar because the marker cones (traffic cones, known in Korea as "lava cones") are small and often spaced widely apart, which can lead to navigational issues when vehicles traverse the gaps between them. To address these limitations, we propose a novel method that segments construction areas in video footage as a whole, enabling the detection of continuous zones from a distance and allowing vehicles to adjust their driving paths safely and efficiently. Our study involves the collection of images from Korean road environments over three years, and we intend to release a subset of these images with corresponding labeling data to contribute to the field.
Supplementary Material: zip
Submission Number: 16