Abstract: Ensuring traffic safety in autonomous vehicles requires accurate weather and severity recognition, especially when operating within a defined operational design domain (ODD). However, most existing studies and datasets focus on single-label weather recognition and largely overlook severity classification, which limits the adaptability of autonomous systems. To address this, we create a comprehensive multi-label weather dataset with light, moderate, and heavy severity levels based on quantitative visibility thresholds aligned with ODDs. We construct this dataset by prompting multiple vision-language models (VLMs) to apply these criteria, then validate annotation quality against a human-labeled gold standard. On this new dataset, we introduce the Multi-Weather and Severity Classifier (MWSC), a novel multi-modal framework that simultaneously performs weather recognition and severity classification. MWSC employs cross-attention mechanisms to extract global and local image features and integrates them with text captions generated from a standardized, closed-vocabulary template for weather and severity, creating robust cross-modal feature associations for classifier training. Our experiments establish a new benchmark for environment perception in autonomous driving, on which MWSC significantly outperforms widely used baseline models across key metrics, including mAP, accuracy, and F1-score, advancing vision-based techniques for safer and more reliable autonomous navigation in complex weather scenarios. The full dataset and code, including licenses of all sources, the human-annotated gold set, image lists and splits, prompts, and trained weights for reproducibility, are publicly available at https://github.com/Yonsei-STL/MWSC.git.
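The abstract describes the cross-attention fusion only at a high level; the sketch below illustrates one common way such a fusion could look: image tokens (a global token plus local patch tokens) attend to encoded caption tokens, and the fused representation feeds a multi-label head. All module names, dimensions, and the pooling and loss choices are our assumptions for illustration, not details taken from the paper or its repository.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Minimal sketch of image-text cross-attention fusion for multi-label
    weather/severity prediction. Hypothetical dimensions and layer choices;
    not the MWSC implementation."""
    def __init__(self, dim=512, num_heads=8, num_classes=12):
        super().__init__()
        # Image tokens act as queries; caption tokens act as keys/values.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)  # per-class logits

    def forward(self, img_tokens, txt_tokens):
        # img_tokens: (B, N_img, dim) -- e.g., 1 global + N local patch tokens
        # txt_tokens: (B, N_txt, dim) -- encoded closed-vocabulary caption
        fused, _ = self.cross_attn(img_tokens, txt_tokens, txt_tokens)
        fused = self.norm(img_tokens + fused)  # residual connection
        pooled = fused.mean(dim=1)             # pool over image tokens
        return self.head(pooled)               # multi-label logits

# Hypothetical usage with random tensors standing in for encoder outputs:
model = CrossAttentionFusion()
img = torch.randn(2, 50, 512)   # 1 global + 49 local tokens per image
txt = torch.randn(2, 16, 512)   # 16 caption tokens
logits = model(img, txt)        # (2, 12) per-class scores
```

With multi-label targets, such logits would typically be trained with a binary cross-entropy loss so that co-occurring conditions (e.g., rain together with fog, each at its own severity) can be predicted jointly rather than forced into a single class.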
DOI: 10.1109/access.2025.3645363