The Learnability of Model-Theoretic Interpretation Functions in Artificial Neural Networks
Keywords: Systematicity, Compositionality, Out-of-training-sample Generalization, Artificial Neural Networks, Formal Semantics, Model-Theoretic Interpretation, Entity Type Information, Truth Value Type Information
TL;DR: We study the learnability of model-theoretic interpretation functions and find that attention architectures outperform simple RNNs only if the semantic representation used in training includes entity information in addition to truth conditions.
Submission Number: 31