Continuous Language Model Interpolation for Dynamic and Controllable Text Generation

Published: 10 Oct 2024, Last Modified: 19 Nov 2024
AFM 2024 Poster
License: CC BY 4.0
Keywords: weight interpolation, controllable text generation, adaptive controllable text generation
TL;DR: We show that a fixed set of anchor models can be used to dynamically adjust to fine-grained user preferences across multiple style attributes at once.
Abstract: As large language models (LLMs) have gained popularity for a variety of use cases, making them adaptable and controllable has become increasingly important, especially for user-facing applications. While the existing literature on LLM adaptation primarily focuses on methods that optimize over a fixed set of attribute classes, here we focus on the challenging continuous case, where the model must dynamically adapt to diverse, and often changing, user preferences within predefined attribute ranges. For this, we leverage adaptation methods based on linear weight interpolation, casting them as continuous multi-domain interpolators that produce models with specific prescribed generation characteristics on the fly. Specifically, we use low-rank updates to fine-tune a base model to various domains, yielding a set of anchor models with distinct generation profiles. Then, we use the weight updates of these anchor models to parametrize the entire (infinite) class of models contained within their convex hull. We empirically show that varying the interpolation weights yields predictable and consistent changes in the model outputs with respect to all of the controlled attributes, and we find little entanglement between most attributes. Our results suggest that linearly interpolating between the weights of fine-tuned models facilitates predictable, fine-grained control of model outputs with respect to multiple stylistic characteristics simultaneously.
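The core mechanism admits a short sketch. The Python/PyTorch snippet below is our own illustration, not code from the paper; the names interpolate_anchors, anchor_deltas, and alphas are hypothetical. It forms a convex combination of per-anchor weight updates and adds the result to the base model's weights:

```python
import torch

def interpolate_anchors(base_state, anchor_deltas, alphas):
    """Merge a base model with a convex combination of anchor weight updates.

    base_state:    dict of parameter name -> tensor (pretrained weights)
    anchor_deltas: list of dicts, each mapping parameter name -> the weight
                   update of one fine-tuned anchor (e.g., a merged
                   low-rank update B @ A)
    alphas:        one interpolation weight per anchor; restricting them to
                   the simplex keeps the result inside the convex hull
    """
    assert len(anchor_deltas) == len(alphas)
    assert all(a >= 0 for a in alphas) and abs(sum(alphas) - 1.0) < 1e-6

    merged = {}
    for name, weight in base_state.items():
        delta = torch.zeros_like(weight)
        for alpha, deltas in zip(alphas, anchor_deltas):
            if name in deltas:  # only adapted layers carry an update
                delta += alpha * deltas[name]
        merged[name] = weight + delta
    return merged


# Toy usage: one parameter, two anchors pulling in opposite directions.
base = {"w": torch.zeros(2, 2)}
anchors = [{"w": torch.ones(2, 2)}, {"w": -torch.ones(2, 2)}]
print(interpolate_anchors(base, anchors, [0.75, 0.25])["w"])  # 0.5 everywhere
```

Sliding the alphas continuously traces out every model in the anchors' convex hull, which is how a fixed set of anchors can serve arbitrary intermediate attribute levels without further fine-tuning.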
Submission Number: 146