Abstract: Deep learning (DL) has in recent years been widely used in natural
language processing (NLP) applications due to its superior
performance. However, while natural languages are rich in
grammatical structure, DL has not been able to explicitly
represent and enforce such structures. This paper proposes a new
architecture to bridge this gap by exploiting tensor product
representations (TPR), a structured neural-symbolic framework
developed in cognitive science over the past 20 years, with the
aim of integrating DL with explicit language structures and rules.
We call it the Tensor Product Generation Network
(TPGN), and apply it to image captioning. The key
ideas of TPGN are: 1) unsupervised learning of
role-unbinding vectors of words via a TPR-based deep neural
network, and 2) integration of TPR with typical DL architectures
including Long Short-Term Memory (LSTM) models. The novelty of our
approach lies in its ability to generate a sentence and extract the
partial grammatical structure of that sentence using role-unbinding
vectors, which are obtained in an unsupervised manner. Experimental
results demonstrate the effectiveness of the proposed approach.
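
As background on the mechanism the abstract alludes to: in a TPR, a sentence is encoded as S = Σᵢ fᵢ rᵢᵀ, binding each word's filler vector fᵢ to a role vector rᵢ, and a word is recovered by multiplying S with that role's unbinding (dual) vector. The sketch below is a minimal NumPy illustration of this binding/unbinding step only, not the paper's network: TPGN learns the unbinding vectors with an LSTM-based architecture, whereas here the roles are fixed orthonormal vectors chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_f, d_r, n = 4, 3, 3  # filler dim, role dim, number of words in the "sentence"

# Filler vectors (one per word) and orthonormal role vectors (one per position).
fillers = rng.standard_normal((n, d_f))
roles = np.linalg.qr(rng.standard_normal((d_r, d_r)))[0][:n]  # orthonormal rows

# Binding: the sentence representation is the sum of filler-role outer products.
S = sum(np.outer(f, r) for f, r in zip(fillers, roles))  # shape (d_f, d_r)

# Unbinding: with orthonormal roles, the unbinding vector of position i is r_i
# itself, and S @ r_i recovers the i-th filler exactly.
for i in range(n):
    assert np.allclose(S @ roles[i], fillers[i])
print("all fillers recovered via role unbinding")
```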
TL;DR: This paper develops a tensor product representation approach for deep-learning-based natural language processing applications.
Keywords: Deep learning, tensor product representation, LSTM, image captioning
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/a-neural-symbolic-approach-to-natural/code)
Withdrawal: Confirmed