Semantic F1 Scores: Fair Evaluation Under Fuzzy Class Boundaries

ICLR 2026 Conference Submission 14344 Authors

18 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: evaluation, semantic, f1 score, subjective, fuzzy
TL;DR: We propose Semantic F1 Scores, a family of metrics that generalizes the F1 score to subjective or fuzzy single-label and multi-label classification by quantifying semantic relatedness between predicted and gold label sets.
Abstract: We propose Semantic F1 Scores, novel evaluation metrics for subjective or fuzzy multi-label classification that quantify semantic relatedness between predicted and gold labels. Unlike conventional F1 metrics, which treat semantically related predictions as complete failures, Semantic F1 incorporates a label similarity matrix to compute soft precision-like and recall-like scores, from which the Semantic F1 scores are derived. Unlike existing similarity-based metrics, our two-step precision-recall formulation enables the comparison of label sets of arbitrary sizes without discarding labels or forcing matches between dissimilar labels. By granting partial credit for semantically related but nonidentical labels, Semantic F1 better reflects the realities of domains marked by human disagreement or fuzzy category boundaries. In this way, it provides fairer evaluations: it recognizes that categories overlap, that annotators disagree, and that downstream decisions based on similar predictions lead to similar outcomes. Through theoretical justification and extensive empirical validation on synthetic and real data, we show that Semantic F1 offers greater interpretability and ecological validity. Because it requires only a domain-appropriate similarity matrix rather than a rigid ontology, and because it is robust to misspecification of that matrix, it is applicable across tasks and modalities.
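The abstract does not give the exact formulation, but a minimal sketch consistent with its description (soft precision and recall computed from a label similarity matrix, then combined harmonically) might look like the following. The function name, the max-similarity matching rule, and the handling of empty label sets are all assumptions for illustration, not the paper's definitive method:

```python
import numpy as np

def semantic_f1(pred_labels, gold_labels, sim, label_index):
    """Sketch of a soft precision/recall F1 over a label similarity matrix.

    pred_labels, gold_labels: iterables of label names (any sizes).
    sim: (L, L) symmetric numpy array with sim[i, j] in [0, 1].
    label_index: dict mapping label name -> row/column index in `sim`.
    """
    pred_labels, gold_labels = list(pred_labels), list(gold_labels)
    if not pred_labels or not gold_labels:
        # Assumed convention: two empty sets agree perfectly; otherwise 0.
        return 1.0 if not pred_labels and not gold_labels else 0.0

    p = [label_index[l] for l in pred_labels]
    g = [label_index[l] for l in gold_labels]
    S = sim[np.ix_(p, g)]  # pairwise similarities, shape (|P|, |G|)

    # Step 1: soft precision -- each prediction is credited with its best
    # match among the gold labels, so no label is discarded and no pair of
    # dissimilar labels is forced into a one-to-one match.
    soft_precision = S.max(axis=1).mean()

    # Step 2: soft recall -- each gold label is credited with its best
    # match among the predictions.
    soft_recall = S.max(axis=0).mean()

    if soft_precision + soft_recall == 0:
        return 0.0
    # Harmonic mean of the two soft scores, mirroring conventional F1.
    return 2 * soft_precision * soft_recall / (soft_precision + soft_recall)
```

Note that with an identity similarity matrix and duplicate-free label sets, this sketch reduces to the ordinary set-based F1 (a prediction scores 1 only if it appears among the gold labels), consistent with the TL;DR's framing of Semantic F1 as a generalization of the conventional F1 score.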
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 14344