Clinical Uncertainty Impacts Machine Learning Evaluations
Keywords: evaluation, annotation, uncertainty, metrics
TL;DR: We show how annotation uncertainty impacts evaluation and outline recommendations for improving it.
Track: Findings
Abstract: Clinical dataset labels are rarely certain: annotators disagree, and confidence is not uniform across cases.
Typical aggregation procedures, such as majority voting, obscure this variability.
In simple experiments on medical imaging benchmarks, accounting for the confidence in binary labels significantly changes model rankings.
We therefore argue that machine-learning evaluations should explicitly account for annotation uncertainty using probabilistic metrics that directly operate on distributions.
These metrics can be applied independently of the annotations' generating process, whether modeled by simple counting, subjective confidence ratings, or probabilistic response models.
They are also computationally lightweight, as closed-form expressions have linear-time implementations once examples are sorted by model score.
We thus urge the community to release raw annotations alongside datasets and to adopt uncertainty-aware evaluation, so that performance estimates better reflect the uncertainty inherent in clinical data.
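As an illustration of the kind of probabilistic metric the abstract refers to, below is a minimal sketch (not taken from the paper) of an expected AUC under label uncertainty: each example carries a model score and a positive-label probability (e.g. the fraction of annotators who marked it positive), and the pairwise AUC is computed with soft positive/negative masses in a single pass after sorting by score. The function name expected_auc and the toy data are illustrative assumptions, not the authors' code.

```python
# Sketch of an uncertainty-aware metric: an "expected AUC" where each example i
# has a model score s[i] and a probability p[i] that its true label is positive.
# Every example contributes fractional positive mass p[i] and fractional negative
# mass 1 - p[i]; the metric is the probability-weighted analogue of the usual
# pairwise (Mann-Whitney) AUC, computed in one pass after sorting by score.

from itertools import groupby

def expected_auc(scores, pos_probs):
    """AUC with soft labels: example i is a positive with weight p[i] and a
    negative with weight 1 - p[i]. Score ties count as half-correct."""
    assert len(scores) == len(pos_probs)
    order = sorted(range(len(scores)), key=lambda i: scores[i])

    total_pos = sum(pos_probs)
    total_neg = sum(1.0 - p for p in pos_probs)
    # Exclude the "pair" of an example with itself from the normaliser.
    self_pairs = sum(p * (1.0 - p) for p in pos_probs)
    denom = total_pos * total_neg - self_pairs
    if denom <= 0.0:
        raise ValueError("no effective positive/negative mass to compare")

    num = 0.0
    neg_below = 0.0  # negative mass at strictly lower scores
    for _, group in groupby(order, key=lambda i: scores[i]):
        idx = list(group)
        pos_g = sum(pos_probs[i] for i in idx)
        neg_g = sum(1.0 - pos_probs[i] for i in idx)
        self_g = sum(pos_probs[i] * (1.0 - pos_probs[i]) for i in idx)
        # Positive mass in this group is ranked above all strictly lower negatives.
        num += pos_g * neg_below
        # Tied pairs within the group (i != j) count as half-correct.
        num += 0.5 * (pos_g * neg_g - self_g)
        neg_below += neg_g

    return num / denom

# Hard labels (p in {0, 1}) recover the standard pairwise AUC; soft labels shift it.
scores = [0.1, 0.4, 0.35, 0.8]
hard   = [0.0, 0.0, 1.0, 1.0]
soft   = [0.1, 0.4, 0.7, 0.9]        # e.g. 1/4/7/9 of 10 annotators said "positive"
print(expected_auc(scores, hard))    # 0.75, same as the standard AUC on hard labels
print(expected_auc(scores, soft))    # ~0.81, the estimate moves under soft labels
```

Sorting dominates the cost; the subsequent pass is linear, which matches the abstract's claim that such closed-form metrics remain computationally lightweight.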
General Area: Applications and Practice
Specific Subject Areas: Evaluation Methods & Validity
Data And Code Availability: Yes
Ethics Board Approval: No
Entered Conflicts: I confirm the above
Anonymity: I confirm the above
Submission Number: 89