Keywords: Prototype Transformer (ProtoT); prototype-based language models; interpretable reasoning; nameable concept discovery; targeted model editing; linear-time sequence modelling; transformer alternatives; robustness to input perturbations; causal effects; autoregressive LMs; language models; fine-tuning; downstream performance
TL;DR: We introduce ProtoT, a linear-compute prototype-based alternative to transformer LMs that forms nameable concepts via two-way sequence-prototype communication, enabling interpretability, targeted edits, and competitive performance and robustness.
Abstract: While state-of-the-art language models (LMs) surpass the vast majority of humans in certain domains, their reasoning remains largely opaque, undermining trust in their outputs. Even when autoregressive LMs produce explicit reasoning, the process that actually drives their predictions stays hidden, which introduces risks such as deception and hallucination. In this work, we introduce the Prototype Transformer (ProtoT) -- an autoregressive LM architecture based on prototypes (parameter vectors), proposed as an alternative to standard self-attention-based transformers. ProtoT operates through two-way communication between the input sequence and the prototypes, and we show that this leads the prototypes to automatically capture nameable concepts (e.g. "woman") during training. The prototypes offer a means to interpret the model's reasoning and to execute targeted edits of its behavior. Furthermore, by design, the prototypes create communication channels that aggregate contextual information at different time scales, further aiding interpretability.
In terms of computational cost, ProtoT scales linearly with sequence length, in contrast to the quadratic scaling of state-of-the-art self-attention transformers. Compared to baselines, ProtoT scales well with model and data size and achieves good performance on downstream benchmarks (GLUE). ProtoT exhibits robustness to input perturbations on par with or better than some baselines, while differing from them by providing interpretable pathways that show how robustness and sensitivity arise. Approaching the performance of state-of-the-art architectures, ProtoT paves the way towards well-performing autoregressive LMs that are interpretable by design.
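To make the two-way sequence-prototype communication and its linear cost concrete, below is a minimal, hypothetical sketch (not the authors' actual ProtoT layer): a fixed set of learned prototype vectors first attends over the token sequence, then the tokens attend back to the updated prototypes. All module names and hyperparameters here are illustrative assumptions; causal masking, needed for a true autoregressive LM, is omitted for brevity. Because the number of prototypes K is a constant, both attention steps cost O(K*T) in sequence length T rather than the O(T^2) of token-to-token self-attention.

```python
# Hypothetical illustration of two-way sequence-prototype communication.
import torch
import torch.nn as nn


class TwoWayPrototypeBlock(nn.Module):
    def __init__(self, d_model: int = 256, num_prototypes: int = 32, num_heads: int = 4):
        super().__init__()
        # Learned prototype vectors (illustrative initialization).
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, d_model) * 0.02)
        # "Read": prototypes gather context from the sequence.
        self.read = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        # "Write": tokens retrieve information from the updated prototypes.
        self.write = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.norm_protos = nn.LayerNorm(d_model)
        self.norm_tokens = nn.LayerNorm(d_model)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, d_model). Causal masking omitted in this sketch.
        batch = tokens.shape[0]
        protos = self.prototypes.unsqueeze(0).expand(batch, -1, -1)
        # Read step: each prototype attends over all tokens -> O(K * T).
        protos, _ = self.read(protos, tokens, tokens)
        protos = self.norm_protos(protos)
        # Write step: each token attends over the K prototypes -> O(T * K).
        update, _ = self.write(tokens, protos, protos)
        return self.norm_tokens(tokens + update)


if __name__ == "__main__":
    block = TwoWayPrototypeBlock()
    x = torch.randn(2, 128, 256)   # (batch, seq_len, d_model)
    print(block(x).shape)          # torch.Size([2, 128, 256])
```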
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 25080