Words in Motion: Extracting Interpretable Control Vectors for Motion Transformers

Published: 10 Oct 2024, Last Modified: 03 Dec 2024 · IAI Workshop @ NeurIPS 2024 · CC BY 4.0
Keywords: control vectors, neural collapse, linear probing, activation steering, representation engineering, alignment
TL;DR: We introduce a method to quantify human-interpretable motion features, assess neural collapse in hidden states, and use the resulting latent-space regularities as control vectors that steer motion forecasts at inference.
Abstract: Transformer-based models generate hidden states that are difficult to interpret. In this work, we aim to interpret these hidden states and control them at inference, with a focus on motion forecasting. We leverage the phenomenon of neural collapse and use linear probes to measure interpretable features in hidden states. Our experiments reveal meaningful directions and distances between hidden states of opposing features, which we use to fit control vectors for activation steering. Consequently, our method enables controlling transformer-based motion forecasting models with interpretable features, providing a unique interface to interact with and understand these models.
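The abstract describes a two-step recipe: fit a direction from hidden states of opposing feature values, then add that direction back into the model's activations at inference. Below is a minimal sketch of this idea in PyTorch. The difference-of-means construction and the forward-hook intervention are common choices for activation steering, not necessarily the exact procedure used in the paper; the layer, the opposing feature groups (e.g., "fast" vs. "slow" motion), and all function names are hypothetical placeholders.

```python
import torch

def fit_control_vector(hidden_a: torch.Tensor, hidden_b: torch.Tensor) -> torch.Tensor:
    """Difference-of-means direction between hidden states of opposing features.

    hidden_a, hidden_b: (num_samples, hidden_dim) activations collected from
    the same layer for inputs exhibiting opposite values of one feature.
    """
    direction = hidden_a.mean(dim=0) - hidden_b.mean(dim=0)
    return direction / direction.norm()  # unit-norm steering direction

def add_steering_hook(layer: torch.nn.Module, direction: torch.Tensor, alpha: float):
    """Register a forward hook that shifts the layer's output along `direction`.

    alpha scales the intervention; its sign selects which of the two opposing
    features is amplified. Returns the hook handle so it can be removed later.
    """
    def hook(_module, _inputs, output):
        # Some transformer layers return tuples; steer the hidden-state tensor.
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + alpha * direction.to(hidden.device, hidden.dtype)
        return (steered, *output[1:]) if isinstance(output, tuple) else steered

    return layer.register_forward_hook(hook)
```

In use, one would collect hidden states from forward passes over two contrasting input sets, call fit_control_vector, attach the hook to the chosen layer, and run inference; removing the hook (handle.remove()) restores the unmodified model.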
Track: Main track
Submitted Paper: No
Published Paper: No
Submission Number: 39