Zero-Shot Learning Through Cross-Modal Transfer

ICLR 2013 conference submission
Decision: Oral (ICLR 2013 workshop)
Abstract: This work introduces a model that can recognize objects in images even when no training data is available for those objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework, distributional information in language can be seen as spanning a semantic basis for understanding what objects look like. Most previous zero-shot learning models can only differentiate between unseen classes. In contrast, our model can both obtain state-of-the-art performance on classes that have thousands of training images and achieve reasonable performance on unseen classes. This is achieved by first using outlier detection in the semantic space and then applying two separate recognition models. Furthermore, our model does not require any manually defined semantic features for either words or images.
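The abstract describes a two-stage decision procedure: map an image into the word-vector semantic space, use outlier detection to decide whether it belongs to a seen or an unseen class, and then classify it with the recognition model for that case. The snippet below is a minimal sketch of that pipeline, not the authors' implementation: the linear mapping `W`, the distance threshold standing in for the paper's probabilistic outlier detection, the nearest-neighbor classifiers, and all class names and vector dimensions are illustrative assumptions.

```python
import numpy as np

# Hypothetical word vectors for class labels, e.g. learned from an
# unsupervised text corpus. Dimensions and values are illustrative.
word_vectors = {
    "cat":   np.array([0.9, 0.1, 0.0]),
    "dog":   np.array([0.8, 0.3, 0.1]),
    "truck": np.array([0.0, 0.9, 0.8]),  # unseen class: no training images
}
seen_classes = ["cat", "dog"]
unseen_classes = ["truck"]

def map_to_semantic_space(image_features, W):
    """Project image features into the word-vector space.
    The paper learns this mapping; a fixed linear map stands in here."""
    return W @ image_features

def is_outlier(z, seen_vectors, threshold=0.5):
    """Novelty detection: if the mapped point is far from every seen-class
    vector, treat the image as coming from an unseen class. (A distance
    threshold is a simplification of the paper's outlier detection.)"""
    dists = [np.linalg.norm(z - v) for v in seen_vectors]
    return min(dists) > threshold

def classify(image_features, W):
    z = map_to_semantic_space(image_features, W)
    if is_outlier(z, [word_vectors[c] for c in seen_classes]):
        # Unseen case: pick the nearest unseen-class word vector.
        candidates = unseen_classes
    else:
        # Seen case: the paper would use a classifier trained on images;
        # nearest neighbor over seen-class vectors is a stand-in.
        candidates = seen_classes
    return min(candidates, key=lambda c: np.linalg.norm(z - word_vectors[c]))

# Usage with hypothetical data: a 4-dim image feature and a 3x4 mapping.
W = 0.1 * np.random.randn(3, 4)
x = np.random.randn(4)
print(classify(x, W))
```

Splitting the decision this way lets the model keep full accuracy on seen classes (a standard classifier handles them) while still assigning unseen inputs to the semantically nearest label in the word-vector space.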