The Semantic Shift Benchmark

Published: 12 Jul 2022, Last Modified: 05 May 2023. Shift Happens 2022 Poster.
Keywords: open-set recognition, out-of-distribution detection
TL;DR: We curate class splits from ImageNet-21K which are stratified by semantic similarity to ImageNet-1K, to enable principled analysis of semantic distribution shift
Abstract: Most benchmarks for detecting semantic distribution shift do not consider how the semantics of the training set are defined. In other words, it is often unclear whether the ‘unseen’ images contain semantically different objects from the same distribution (e.g. ‘birds’ for a model trained on ‘cats’ and ‘dogs’) or come from a different distribution entirely (e.g. Gaussian noise for a model trained on ‘cats’ and ‘dogs’). In this work, we propose ‘open-set’ class splits, drawn from ImageNet-21K, for models trained on ImageNet-1K. Critically, we structure the open-set classes based on semantic similarity to the closed-set using the WordNet hierarchy — we create ‘Easy’ and ‘Hard’ open-set splits to allow more principled analysis of the semantic shift phenomenon. Together with similar challenges based on FGVC datasets, these evaluations comprise the ‘Semantic Shift Benchmark’.
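The split construction described above can be illustrated with a small sketch. This is not the authors' code: it uses a hypothetical toy hierarchy in place of WordNet, and ranks candidate open-set classes by their path distance in the hierarchy to a closed-set class — nearby classes form the ‘Hard’ split, distant ones the ‘Easy’ split.

```python
# Toy stand-in for the WordNet hierarchy (child -> parent). All class
# names here are illustrative, not the actual benchmark splits.
TOY_HIERARCHY = {
    "tabby_cat": "cat", "cat": "feline", "feline": "carnivore",
    "wolf": "canine", "canine": "carnivore", "carnivore": "animal",
    "sparrow": "bird", "bird": "animal", "animal": "entity",
    "truck": "vehicle", "vehicle": "entity",
}

def ancestors(node):
    """Return the path from `node` up to the root, inclusive."""
    path = [node]
    while node in TOY_HIERARCHY:
        node = TOY_HIERARCHY[node]
        path.append(node)
    return path

def path_distance(a, b):
    """Number of edges from `a` to `b` via their lowest common ancestor."""
    pa, pb = ancestors(a), ancestors(b)
    common = set(pa) & set(pb)
    lca = min(common, key=pa.index)  # shallowest shared ancestor from `a`
    return pa.index(lca) + pb.index(lca)

# Rank candidate open-set classes by distance to a closed-set class:
# the closest candidate is 'Hard', the farthest is 'Easy'.
closed_set_class = "tabby_cat"
candidates = ["wolf", "sparrow", "truck"]
ranked = sorted(candidates, key=lambda c: path_distance(closed_set_class, c))
hard, easy = ranked[0], ranked[-1]
```

In this toy example, ‘wolf’ (a fellow carnivore) lands in the Hard split while ‘truck’ lands in the Easy split; the real benchmark applies the same stratification idea over the full WordNet hierarchy relating ImageNet-1K and ImageNet-21K classes.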
Submission Type: Full submission (technical report + code/data)
Supplement: zip
Co Submission: No, I am not submitting to the dataset and benchmark track and will complete my submission by June 3.