Towards Effective and Efficient Zero-shot Learning by Fine-tuning with Task Descriptions

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Withdrawn Submission
Abstract: While current machine learning models have achieved great success with labeled data, many real-world applications involve classes with little or no training data, which motivates the study of zero-shot learning. The typical approach in zero-shot learning is to embed seen and unseen classes into a shared space using class meta-data and to construct classifiers on top of that space. Yet previous methods either require significant manual labor to obtain useful meta-data, or use automatically collected meta-data at the cost of performance. To achieve satisfactory performance under practical meta-data efficiency constraints, we propose \textbf{N\textsuperscript{3}} (\textbf{N}eural \textbf{N}etworks from \textbf{N}atural Language), a meta-model that maps natural language class descriptions to corresponding neural network classifiers. N\textsuperscript{3} leverages readily available online documents combined with pretrained language representations such as BERT to obtain expressive class embeddings. In addition, N\textsuperscript{3} generates parameter adaptations for pretrained neural networks from these class embeddings, effectively ``finetuning'' the network to classify unseen classes. Our experiments show that N\textsuperscript{3} outperforms previous methods across 8 benchmark evaluations, and ablation studies quantify the contribution of each model component. To offer insight into how N\textsuperscript{3} ``finetunes'' the pretrained network, we also perform a range of qualitative and quantitative analyses. Our code will be released after the review period.
Keywords: zero-shot learning, meta-learning, convolutional neural networks, dynamic parameter generation
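
The abstract describes the pipeline only at a high level, so the following is a minimal PyTorch sketch of that described flow, not the authors' implementation: a pretrained BERT encodes class descriptions into class embeddings, and a small generator network maps each embedding to classifier parameters applied on top of a frozen pretrained backbone. The module names (DescriptionEncoder, ParameterGenerator, zero_shot_logits), the [CLS]-token pooling, the feature dimensions, and the choice to generate only a linear classification head are all assumptions.

```python
# Minimal sketch of the N^3 idea as described in the abstract; all names,
# dimensions, and the linear-head adaptation scheme are assumptions.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class DescriptionEncoder(nn.Module):
    """Encodes natural language class descriptions into fixed vectors
    with a pretrained BERT (the abstract's 'class embeddings')."""

    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.tokenizer = BertTokenizer.from_pretrained(model_name)
        self.bert = BertModel.from_pretrained(model_name)

    def forward(self, descriptions):
        inputs = self.tokenizer(descriptions, return_tensors="pt",
                                padding=True, truncation=True)
        outputs = self.bert(**inputs)
        # Pool with the [CLS] token as a description summary (assumption).
        return outputs.last_hidden_state[:, 0]  # (num_classes, 768)


class ParameterGenerator(nn.Module):
    """Maps a class embedding to a classifier weight vector, i.e. a
    'parameter adaptation' for the pretrained backbone's final layer."""

    def __init__(self, embed_dim=768, feat_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, class_embeddings):
        return self.net(class_embeddings)  # (num_classes, feat_dim)


def zero_shot_logits(features, descriptions, encoder, generator):
    """features: (batch, feat_dim) from a frozen pretrained backbone.
    Returns logits over the (possibly unseen) classes described in text."""
    class_weights = generator(encoder(descriptions))
    return features @ class_weights.t()  # (batch, num_classes)
```

Because the classifier weights are produced from text rather than learned per class, new (unseen) classes can be added at inference time simply by supplying their descriptions; only the encoder and generator need to be trained on seen classes.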