FlexModel: A Framework for Interpretability of Distributed Large Language Models

Published: 23 Oct 2023, Last Modified: 28 Nov 2023 · SoLaR Spotlight
Keywords: Foundation Models, Large Language Models, Interpretability, Safety, Distributed Systems
Abstract: With the rise of Large Language Models (LLMs) characterized by billions of parameters, the hardware prerequisites for their training and deployment have seen a corresponding increase. Although existing tools facilitate model parallelization and distributed training, deeper model interactions, crucial for interpretability and responsible AI techniques, demand thorough knowledge of distributed computing. This complexity often hampers researchers with machine learning expertise but limited distributed computing experience. Addressing this challenge, we present FlexModel, a software package crafted to offer a streamlined interface for engaging with large models across multi-GPU and multi-node configurations. FlexModel is compatible with existing technological frameworks and encapsulates PyTorch models. Its HookFunctions facilitate simple interaction with distributed model internals, bridging the gap between the distributed and single-device model-handling paradigms. Our work's primary contribution, FlexModel, democratizes model interactions, and we validate it in two large-scale experimental contexts: Transformer induction head isolation and a TunedLens implementation. FlexModel enhances accessibility and promotes more inclusive research in the domain of large-scale neural networks. The package is available at https://github.com/VectorInstitute/flex_model.
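For context on the "single-device paradigm" the abstract refers to, the sketch below shows the standard PyTorch forward-hook pattern for retrieving an intermediate activation. This is plain PyTorch, not FlexModel's own API (consult the repository for the actual HookFunction interface); all module and variable names here are illustrative. FlexModel's contribution is preserving this one-line ergonomics when activations are sharded across multiple GPUs or nodes.

```python
# Minimal sketch of single-device activation retrieval with a PyTorch
# forward hook. FlexModel's HookFunctions generalize this pattern to
# distributed models; names below are illustrative, not FlexModel's API.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

activations = {}  # retrieved tensors land here, keyed by module name


def save_activation(name):
    # Build a hook that stores the module's output for later inspection.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook


# On a single device, attaching a hook is one line; in a distributed
# setting the activation may be sharded across workers, which is the
# gap FlexModel is designed to bridge.
model[0].register_forward_hook(save_activation("linear_0"))

model(torch.randn(2, 8))
print(activations["linear_0"].shape)  # torch.Size([2, 16])
```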
Submission Number: 77