TRAIN ONCE, TEST ANYWHERE: ZERO-SHOT LEARNING FOR TEXT CLASSIFICATION
Pushpankar Kumar Pushp, Muktabh Mayank Srivastava
Jan 25, 2018 (modified: Jan 25, 2018) · ICLR 2018 Workshop Submission
Abstract: Zero-shot learners are models capable of predicting unseen classes. In this work, we propose a zero-shot learning approach for text categorization. Our method involves training a model on a large corpus of sentences to learn the relationship between a sentence and its tags. Learning this relationship makes the model generalize to unseen sentences, tags, and even new datasets, provided they can be put into the same embedding space. The model learns to predict whether a given sentence is related to a tag or not, unlike other classifiers that learn to classify the sentence as one of the possible classes. We propose three different neural networks for the task and report their accuracy both on the test set of the dataset used to train them and on two other standard datasets for which no retraining was done. We show that our models generalize well to new unseen classes in both cases. Although the models do not achieve the accuracy of state-of-the-art supervised models, this is evidently a step toward general intelligence in natural language processing.
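The interface described in the abstract can be sketched as follows: instead of a fixed softmax over known classes, the model scores (sentence, tag) pairs for relatedness, so any tag that can be embedded can be ranked at inference time. The sketch below is a minimal illustration, not the paper's method: it substitutes cosine similarity over averaged word embeddings for the three trained neural networks, and the tiny vocabulary and embedding table are hypothetical placeholders for pretrained vectors.

```python
import numpy as np

# Hypothetical toy embedding table; a real system would use pretrained
# word vectors (e.g. word2vec/GloVe) shared by sentences and tags.
rng = np.random.default_rng(0)
VOCAB = ["goal", "match", "player", "election", "vote", "senate", "sports", "politics"]
EMB = {w: rng.normal(size=8) for w in VOCAB}

def embed(text):
    """Average the embeddings of known words in the text."""
    vecs = [EMB[w] for w in text.lower().split() if w in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(8)

def relatedness(sentence, tag):
    """Score whether a sentence is related to a tag.
    Cosine similarity stands in for the paper's trained networks;
    the point is the binary (sentence, tag) -> score interface."""
    s, t = embed(sentence), embed(tag)
    denom = np.linalg.norm(s) * np.linalg.norm(t)
    return float(s @ t / denom) if denom else 0.0

def zero_shot_classify(sentence, candidate_tags):
    """Pick the highest-scoring tag; the candidate set may contain
    classes never seen during training, since tags are just embeddings."""
    return max(candidate_tags, key=lambda tag: relatedness(sentence, tag))
```

Because classification reduces to ranking relatedness scores, swapping in a new label set (or a new dataset) requires no retraining, which is the "train once, test anywhere" property the title refers to.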
TL;DR: We introduce a Zero-Shot Learning framework for text classification.
Keywords: Zero-Shot Learning, Text Classification, Natural Language Processing, Deep Learning