Keywords: Realignment, Reasoning, Dialogue, Hybrid model
TL;DR: We propose a flexible realignment framework that enables efficient and controllable realignment of LLMs during both training and inference, addressing challenges in reasoning efficiency and in balancing personalized responses.
Abstract: Realignment becomes necessary when a language model (LM) fails to meet expected performance. We propose a flexible realignment framework that supports quantitative control of the alignment degree during both training and inference. The framework incorporates **Training-time Realignment (TrRa)**, which efficiently realigns the reference model through a controllable fusion of logits from the reference model and an already aligned model. For example, TrRa reduces token usage on DeepSeek-R1-Distill-Qwen-1.5B by **54.63%** without any performance degradation, outperforming DeepScaleR-1.5B's **33.86%** reduction.
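The controllable logit fusion at the core of TrRa can be pictured as a weighted combination of the two models' output distributions. The sketch below shows one plausible form, a linear interpolation with a coefficient `lam`; the model paths and the interpolation itself are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: controllable fusion of logits from a reference model and an
# already aligned model, as a realignment target for TrRa.
# The aligned-model path is a hypothetical placeholder; the linear mixing rule
# is an assumption for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ref_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # reference (slow-thinking) model
aligned_name = "path/to/already-aligned-model"          # hypothetical aligned model

tokenizer = AutoTokenizer.from_pretrained(ref_name)
ref_model = AutoModelForCausalLM.from_pretrained(ref_name, torch_dtype=torch.bfloat16)
aligned_model = AutoModelForCausalLM.from_pretrained(aligned_name, torch_dtype=torch.bfloat16)

@torch.no_grad()
def fused_logits(input_ids: torch.Tensor, lam: float) -> torch.Tensor:
    """Interpolate the two models' logits; lam in [0, 1] controls the alignment degree."""
    ref_logits = ref_model(input_ids).logits
    aligned_logits = aligned_model(input_ids).logits
    return lam * aligned_logits + (1.0 - lam) * ref_logits

prompt_ids = tokenizer("Solve 12 * 17 step by step.", return_tensors="pt").input_ids
mixed = fused_logits(prompt_ids, lam=0.5)  # halfway between the two behaviors
```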
To complement TrRa during inference, we introduce a layer adapter that enables **smooth Inference-time Realignment (InRa)**. The adapter is initialized as an identity transformation at the bottom layer and inserted before the original layers. During inference, the input embeddings are processed in parallel by the adapter and the original bottom layer, passed through the remaining layers, and then controllably interpolated at the logit level. With InRa, we upgraded DeepSeek-R1-Distill-Qwen-7B from a slow-thinking-only model to one that supports both fast and slow thinking, allowing flexible alignment control even **during inference**. When encouraged to reason more deeply, it even surpasses its original performance.
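To make the InRa wiring concrete, the sketch below runs an identity-initialized adapter path and the original path through a shared stack of remaining layers and interpolates at the logit level. The toy decoder, the zero-initialized adapter, and the exact path layout are assumptions for illustration, not the released implementation.

```python
# Hedged sketch of inference-time realignment (InRa) on a toy decoder-only LM.
import torch
import torch.nn as nn

class ToyDecoder(nn.Module):
    """Toy stand-in for a decoder-only LM: embeddings, a layer stack, and an LM head."""
    def __init__(self, vocab=1000, hidden=64, n_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
            for _ in range(n_layers)
        )
        self.lm_head = nn.Linear(hidden, vocab)

class IdentityInitAdapter(nn.Module):
    """Adapter whose zero-initialized projection makes it an identity map at initialization."""
    def __init__(self, hidden):
        super().__init__()
        self.proj = nn.Linear(hidden, hidden)
        nn.init.zeros_(self.proj.weight)
        nn.init.zeros_(self.proj.bias)

    def forward(self, x):
        return x + self.proj(x)

@torch.no_grad()
def inra_logits(model: ToyDecoder, adapter: IdentityInitAdapter,
                input_ids: torch.Tensor, lam: float) -> torch.Tensor:
    """Run the adapter path and the original path in parallel, then interpolate logits."""
    emb = model.embed(input_ids)
    h_orig = model.layers[0](emb)   # original path: bottom layer first
    h_adapt = adapter(emb)          # adapter path: adapter in place of the bottom layer
    for layer in model.layers[1:]:  # both paths share the remaining layers
        h_orig = layer(h_orig)
        h_adapt = layer(h_adapt)
    logits_orig = model.lm_head(h_orig)
    logits_adapt = model.lm_head(h_adapt)
    return lam * logits_adapt + (1.0 - lam) * logits_orig

# Usage: lam sweeps smoothly between the original behavior and the adapter's behavior.
model, adapter = ToyDecoder(), IdentityInitAdapter(hidden=64)
ids = torch.randint(0, 1000, (1, 16))
mixed = inra_logits(model, adapter, ids, lam=0.5)
```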
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 19974