Abstract: Standard deep learning systems require thousands or millions of examples to learn a concept, and cannot easily integrate new concepts. By contrast, humans have a remarkable ability to do one-shot or few-shot learning. For instance, from just hearing a word used in a sentence, humans can infer a great deal about it by leveraging what the syntax and semantics of the surrounding words tell us. Here, we draw inspiration from this to highlight a simple technique by which deep recurrent networks can similarly exploit their prior knowledge to learn a useful representation for a new word from little data. This could make natural language processing systems much more flexible, by allowing them to learn continually from the new words they encounter.
TL;DR: We highlight a technique by which natural language processing systems can learn a new word from context, allowing them to be much more flexible.
Keywords: One-shot learning, embeddings, word embeddings, natural language processing, NLP
Data: [Penn Treebank](https://paperswithcode.com/dataset/penn-treebank)
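
The abstract does not spell out the mechanics, but a natural reading of "exploit their prior knowledge to learn a representation for a new word" is: freeze a pre-trained recurrent language model and fit only the new word's embedding by gradient descent on a context sentence containing it. The PyTorch sketch below illustrates that idea under stated assumptions; the model, vocabulary, sentence, and hyperparameters are all illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy vocabulary; "wug" is the new word the model has never seen.
vocab = ["<unk>", "the", "sat", "on", "a", "mat"]
stoi = {w: i for i, w in enumerate(vocab)}
NEW = len(vocab)  # index reserved for the new word
EMB, HID = 16, 32

class LSTMLM(nn.Module):
    """Small LSTM language model standing in for a pre-trained network."""
    def __init__(self, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, EMB)
        self.lstm = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, vocab_size)

    def forward(self, embedded):
        hidden, _ = self.lstm(embedded)
        return self.out(hidden)

model = LSTMLM(len(vocab))
# (In practice the model would first be trained on a corpus such as
# Penn Treebank; here it is randomly initialized for brevity.)
for p in model.parameters():
    p.requires_grad_(False)  # freeze all prior knowledge

# The only trainable parameter: the new word's embedding, initialized
# at the centroid of the known embeddings.
new_emb = model.embed.weight.mean(0).clone().requires_grad_(True)

# A single context sentence containing the new word.
sentence = ["the", "wug", "sat", "on", "a", "mat"]
ids = torch.tensor([[stoi.get(w, NEW) for w in sentence]])

def embed_with_new(ids):
    # Look up known words normally; splice new_emb in at the new word.
    base = model.embed(ids.clamp(max=len(vocab) - 1))
    return torch.where((ids == NEW).unsqueeze(-1),
                       new_emb.expand_as(base), base)

opt = torch.optim.Adam([new_emb], lr=0.1)
inputs, targets = ids[:, :-1], ids[:, 1:]
for step in range(50):
    opt.zero_grad()
    logits = model(embed_with_new(inputs))
    tgt = targets.reshape(-1).clone()
    tgt[tgt == NEW] = -100  # frozen softmax has no row for the new word
    loss = F.cross_entropy(logits.reshape(-1, len(vocab)), tgt,
                           ignore_index=-100)
    loss.backward()
    opt.step()

print("learned embedding for 'wug':", new_emb.detach())
```

Freezing the network is what lets prior knowledge do the work: the only way to reduce the language-modelling loss is to place the new word somewhere in embedding space that is consistent with how the surrounding words are used.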