Open Character Training: Shaping the Persona of AI Assistants Through Constitutional AI

20 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: llm, persona, character training, large language models, alignment, value alignment, ai safety, ai ethics, constitutional ai, open-source
TL;DR: We introduce the first open-source implementation of Character Training, shaping the values, beliefs, and ethics of the assistant persona in a more effective and controlled manner than alternatives like prompting or activation steering.
Abstract: The character of the "AI assistant" persona generated by modern chatbot large language models influences both surface-level behavior and apparent values, beliefs, and ethics. These all affect interaction quality, perceived intelligence, and alignment with both developer and user intentions. The shaping of this persona, known as character training, is a critical component of industry post-training, yet remains effectively unstudied in the academic literature. We introduce the first open implementation of character training, leveraging Constitutional AI and a new data pipeline that uses synthetic introspective data to shape the assistant persona in a more effective and controlled manner than alternatives such as constraining system prompts or activation steering. Specifically, we fine-tune three popular open-weights models using 11 example personas, such as humorous, deeply caring, or even malevolent. To track the effects of our approach, we introduce a method that analyzes revealed preferences, uncovering clear and holistic changes in character. We find these changes are more robust to adversarial prompting than the two alternatives above, while also leading to more coherent and realistic generations. We also demonstrate that this fine-tuning has little to no effect on general capabilities as measured by common benchmarks. We describe and open-source our full post-training method, the implementation of which can be found at https://anonymous.4open.science/r/OpenCharacterTraining.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 22267