DiL: An Explainable and Practical Metric for Abnormal Uncertainty in Object Detection

Published: 2025 · Last Modified: 02 Aug 2025 · WACV 2025 · CC BY-SA 4.0
Abstract: Although object detection models are widely used, their predictive performance has been shown to deteriorate when faced with abnormal scenes. Such abnormalities can occur naturally (e.g., partially occluded or out-of-distribution objects) or deliberately (e.g., an adversarial attack). Existing uncertainty quantification methods, such as object detection evaluation metrics and label-uncertainty quantification techniques, do not consider an abnormality's effect on the model's internal decision-making process. Furthermore, practical methods that do consider these effects (such as abnormality detection and mitigation) are each designed to handle only one type of abnormality. We present distinctive localization (DiL), an unsupervised, practical, and explainable metric that quantitatively interprets any type of abnormality and can be leveraged for preventive purposes. By utilizing XAI techniques (saliency maps), DiL maps the objectness of a given scene and captures the model's internal uncertainty regarding the identified (and missed) objects. DiL was evaluated across nine use cases, including partially occluded and out-of-distribution objects, as well as adversarial patches, in both the physical and digital spaces, on benchmark datasets and on our newly created E-PO dataset (generated with DALL-E 2). Our results show that DiL: i) successfully interprets and quantifies an abnormality's effect on the model's decision-making process, regardless of the abnormality type; and ii) can be leveraged to detect and mitigate this effect.
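The abstract does not give DiL's exact formulation, but the underlying idea of comparing a saliency-derived objectness map against the detector's predicted boxes can be illustrated. Below is a minimal, hypothetical sketch, assuming a saliency map has already been produced by some XAI technique applied to the detector's outputs; the function name `dil_style_score` and the outside-box mass formulation are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def dil_style_score(saliency: np.ndarray, boxes: list[tuple[int, int, int, int]]) -> float:
    """Toy DiL-style score (illustrative, not the paper's definition):
    the fraction of objectness (saliency) mass that falls OUTSIDE all
    predicted boxes. A high value suggests the model attends to regions
    it did not localize, which may indicate an abnormality.

    saliency: non-negative (H, W) map from an XAI technique.
    boxes: predicted boxes as (x1, y1, x2, y2) pixel coordinates.
    """
    inside = np.zeros_like(saliency, dtype=bool)
    for x1, y1, x2, y2 in boxes:
        inside[y1:y2, x1:x2] = True
    total = saliency.sum()
    if total == 0:
        return 0.0
    return float(saliency[~inside].sum() / total)

# Usage with synthetic data: a uniform random saliency map and one box
# covering a quarter of the image yields a score of roughly 0.75.
rng = np.random.default_rng(0)
sal = rng.random((100, 100))
print(dil_style_score(sal, [(10, 10, 60, 60)]))
```

In this toy formulation, a clean scene whose saliency concentrates inside detected boxes scores near zero, while an occluded, out-of-distribution, or adversarially patched scene that draws the model's attention elsewhere scores higher, matching the abnormality-agnostic behavior the abstract describes.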