Semantic Supervision: Enabling Generalization over Output Spaces

13 Mar 2022 (modified: 20 Oct 2024) · LNLS
TL;DR: We propose a unified paradigm for supervised classification called semantic supervision (SemSup), which enables generalization to unseen outputs (e.g., classes, super-classes, and tasks).
Abstract: In this paper, we propose semantic supervision (SemSup), a unified paradigm for training classifiers that generalize over output spaces. In contrast to standard classification, which treats classes as discrete symbols, SemSup represents them as dense vector features obtained from descriptions of classes (e.g., "The cat is a small carnivorous mammal"). This allows the output space to be unbounded (in the space of descriptions) and enables models to generalize over both unseen inputs and unseen outputs. Specifically, SemSup enables four types of generalization: to (1) unseen class descriptions, (2) unseen classes, (3) unseen super-classes, and (4) unseen tasks. Through experiments on four classification datasets, two input modalities (text and images), and two output description modalities (text and JSON), we show that our SemSup models significantly outperform baselines. For instance, our model outperforms baselines by $40\%$ and $15\%$ precision points on unseen descriptions and classes, respectively, on a news categorization dataset (RCV1). SemSup can serve as a pathway for scaling neural models to large, unbounded output spaces, enabling better generalization and model reuse for unseen tasks and domains.
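As a rough illustration of the paradigm described in the abstract, the following PyTorch sketch pairs an input encoder with a class-description encoder and scores each class by the dot product between the two embeddings. This is a minimal sketch under assumed design choices, not the paper's released implementation; `SemSupClassifier` and both encoder arguments are hypothetical stand-ins.

```python
# Minimal sketch of a bi-encoder reading of SemSup: classes are scored via
# dense embeddings of their descriptions rather than fixed class indices.
# All names here are illustrative, not from the paper's code.
import torch
import torch.nn as nn

class SemSupClassifier(nn.Module):
    def __init__(self, input_encoder: nn.Module, description_encoder: nn.Module):
        super().__init__()
        self.input_encoder = input_encoder                # e.g., an image or text encoder
        self.description_encoder = description_encoder    # e.g., a sentence encoder

    def forward(self, inputs, class_descriptions):
        # Encode inputs (text or images) into dense vectors: (batch, dim)
        x = self.input_encoder(inputs)
        # Encode class descriptions into dense vectors: (num_classes, dim)
        c = self.description_encoder(class_descriptions)
        # Compatibility scores for every (input, class) pair: (batch, num_classes).
        # Unseen classes can be handled at inference time simply by
        # encoding their descriptions — no new output head is needed.
        return x @ c.t()

# Toy usage with linear "encoders" over pre-computed feature vectors (hypothetical).
model = SemSupClassifier(nn.Linear(512, 128), nn.Linear(768, 128))
inputs = torch.randn(4, 512)           # 4 input feature vectors
descriptions = torch.randn(10, 768)    # 10 class-description feature vectors
logits = model(inputs, descriptions)   # shape: (4, 10)
```

Because the output space is defined by descriptions rather than a fixed label index, swapping in new descriptions at test time is what allows the four generalization settings (unseen descriptions, classes, super-classes, and tasks) to share one model.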
Track: Non-Archival (will not appear in proceedings)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/semantic-supervision-enabling-generalization/code)