Context-Aware Alignment: Adapting Large Language Models to Individual Historical Data

18 Sept 2025 (modified: 12 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Individual Alignment, Large Language Model
Abstract: Aligning large language models with human preferences is essential for ensuring their effectiveness, utility, and safety in real-world applications. While much of the current research focuses on aligning LLMs with generalized human values such as fairness, transparency, and ethical behavior, limited attention has been given to aligning LLMs with the preferences and characteristics of individual users. In this paper, we propose a novel approach that leverages individual historical context to achieve personalized alignment, adapting LLMs to align with the unique traits and preferences of specific users. Our method focuses on extracting persona-related representations—abstract features encapsulating conversational style, tone, and preferences—from past user interactions. These representations guide the model in generating responses tailored to the user's individual characteristics. Experimental results demonstrate that our approach significantly outperforms existing baselines, improving the model's ability to reflect individual personas while maintaining contextual appropriateness. This research opens new possibilities for more personalized, context-aware, and user-centric applications of LLMs.
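The abstract sketches the approach at a high level: persona-related representations are extracted from a user's past interactions and then used to steer generation. As a rough illustration only, the snippet below shows one plausible way such a pipeline could look, assuming the persona representation is a mean-pooled sentence embedding of past user messages and that conditioning is done by retrieving the most persona-similar exemplars into the prompt; the encoder checkpoint and helper functions are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch (assumptions, not the paper's implementation):
# 1) embed past user messages, 2) aggregate into a persona vector,
# 3) condition generation by prepending persona-similar exemplars.
import torch
from transformers import AutoTokenizer, AutoModel

ENCODER = "sentence-transformers/all-MiniLM-L6-v2"  # assumed encoder checkpoint
tokenizer = AutoTokenizer.from_pretrained(ENCODER)
encoder = AutoModel.from_pretrained(ENCODER)

def embed(texts: list[str]) -> torch.Tensor:
    """Mean-pooled token embeddings, one vector per text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state         # (n, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()     # (n, seq, 1)
    return (hidden * mask).sum(1) / mask.sum(1)              # (n, dim)

def build_personalized_prompt(history: list[str], query: str, k: int = 3) -> str:
    """Retrieve the k past messages closest to the aggregate persona vector
    and place them in the prompt as style/tone exemplars."""
    vecs = embed(history)
    persona = vecs.mean(dim=0, keepdim=True)                 # persona representation
    sims = torch.nn.functional.cosine_similarity(vecs, persona)
    top = sims.topk(min(k, len(history))).indices.tolist()
    exemplars = "\n".join(f"- {history[i]}" for i in top)
    return (
        "Respond in the same style and tone as these past messages from the user:\n"
        f"{exemplars}\n\nUser: {query}\nAssistant:"
    )
```

In this hypothetical setup, the returned string would be fed to any instruction-tuned LLM; the paper's actual method operates on abstract persona representations rather than this simple retrieval-into-prompt shortcut.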
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 11064