Keywords: symbolic reasoning, syntactic processing, production systems, compositionality, in-context learning, large language models, transformers, neural architecture
TL;DR: We construct a 100% mechanistically explainable transformer that perfectly performs an in-context learning task requiring it to infer, and then reason over, latent syntactic structure.
Abstract: We construct a 100% mechanistically explainable transformer that perfectly performs an in-context learning task requiring it to infer, and then reason over, latent syntactic structure. The transformer implements a program written in a symbolic, Turing-complete language drawn from a family of leading models of the human cognitive architecture.
Submission Number: 47