Weight-based Decomposition: A Case for Bilinear MLPs

Published: 24 Jun 2024, Last Modified: 31 Jul 2024
ICML 2024 MI Workshop Poster
License: CC BY 4.0
Keywords: bilinear, eigenvector, eigendecomposition, feature extraction, mnist
TL;DR: We leverage the close-to-linear structure of bilinear MLPs to reverse-engineer shallow MNIST and Tiny Stories models using only their weights
Abstract: Gated Linear Units (GLUs) have become a common building block in modern foundation models. Bilinear layers drop the non-linearity in the "gate" but still have comparable performance to other GLUs. An attractive quality of bilinear layers is that they can be fully expressed in terms of a third-order tensor and linear operations. Leveraging this, we develop a method to decompose the bilinear tensor into a set of sparsely interacting eigenvectors that show promising interpretability properties in preliminary experiments for shallow image classifiers (MNIST) and small language models (Tiny Stories). Since the decomposition is fully equivalent to the model's original computations, bilinear layers may be an interpretability-friendly architecture that helps connect features to the model weights. Application of our method may not be limited to pre-trained bilinear models since we find that language models such as TinyLlama-1.1B can be fine-tuned into bilinear variants.
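To make the abstract's central idea concrete, here is a minimal sketch of the kind of weight-based decomposition it describes, assuming a bilinear layer of the form out = (Wx) ⊙ (Vx), so that out_k = Σ_ij B[k,i,j] x_i x_j with B[k,i,j] = W[k,i] V[k,j]. The shapes, random weights, and the `interaction_matrix` helper are illustrative assumptions, not the paper's code; the sketch only shows how a symmetric input-input matrix for a chosen output direction can be eigendecomposed using standard linear algebra.

```python
import numpy as np

# Hypothetical bilinear layer: out = (W @ x) * (V @ x), i.e.
# out[k] = sum_ij B[k, i, j] * x[i] * x[j] with B[k, i, j] = W[k, i] * V[k, j].
d_in, d_hidden = 784, 512          # e.g. MNIST-sized inputs (illustrative)
rng = np.random.default_rng(0)
W = rng.normal(size=(d_hidden, d_in)) / np.sqrt(d_in)
V = rng.normal(size=(d_hidden, d_in)) / np.sqrt(d_in)

def interaction_matrix(u):
    """Symmetric input-input interaction matrix for an output direction u.

    Contracting the third-order tensor B with u gives Q = sum_k u[k] * B[k];
    only the symmetric part of Q contributes to the quadratic form x^T Q x.
    """
    Q = np.einsum("k,ki,kj->ij", u, W, V)
    return 0.5 * (Q + Q.T)

# Eigendecomposition: the eigenvectors act as non-interacting features,
# since x^T Q x = sum_m lambda_m * (v_m . x)^2.
u = rng.normal(size=d_hidden)            # e.g. one readout/class direction
eigvals, eigvecs = np.linalg.eigh(interaction_matrix(u))
top = np.argsort(-np.abs(eigvals))[:5]   # highest-magnitude eigenvectors
print(eigvals[top])
```

In an MNIST-style classifier one would take `u` to be a class's readout direction and visualize the top eigenvectors as images; because the quadratic form is exactly the layer's computation, the decomposition introduces no approximation.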
Submission Number: 96