Humans disagree with the IoU for measuring object detector localization error

CoRR 2022 (modified: 02 Nov 2022)
Abstract: The localization quality of automatic object detectors is typically evaluated by the Intersection over Union (IoU) score. In this work, we show that humans have a different view on localization quality. To evaluate this, we conduct a survey with more than 70 participants. The results show that for localization errors with the exact same IoU score, humans do not necessarily consider these errors equal, and express a preference for one over the other. Our work is the first to evaluate IoU with humans, and it makes clear that relying on IoU scores alone to evaluate localization errors might not be sufficient.
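To make the abstract's claim concrete, the following sketch computes IoU for axis-aligned boxes and shows two visually different localization errors (a horizontal shift vs. a vertical shift) that receive the exact same IoU score. The boxes and coordinates are illustrative assumptions, not data from the paper's survey.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus overlap
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

gt = (0, 0, 10, 10)        # hypothetical ground-truth box
shift_x = (2, 0, 12, 10)   # detection shifted 2 px to the right
shift_y = (0, 2, 10, 12)   # detection shifted 2 px downward

# Both errors score 80 / 120 ≈ 0.667 under IoU, even though they are
# visually distinct mistakes that a human might rank differently.
print(iou(gt, shift_x), iou(gt, shift_y))
```

Because IoU depends only on overlap area, it collapses many distinct error shapes (shifts in different directions, scale changes, aspect-ratio changes) onto the same score, which is exactly the degeneracy the survey probes.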
