Abstract: Aligning large language models (LLMs) with human preferences is a central challenge for building reliable AI systems.
Most existing alignment approaches rely on static signals, such as predefined principles or offline human annotations, to guide model behavior toward a fixed approximation of human preferences.
However, LLMs can exhibit distributional drift during training, and static alignment mechanisms lack the capacity to adaptively correct misaligned behaviors as they emerge.
To address this limitation, we develop a two-stage framework that enables dynamic and continuous alignment.
In the first stage, a constitution is continually revised based on observed model behaviors, and models are trained to comply with these evolving principles.
In the second stage, this learned constitution is used to guide reinforcement learning, encouraging the model to align with the updated normative signals.
We refer to this framework as COCOA: Co-evolution of Constitutions and AI Models.
We show that COCOA enables a 7B model to substantially improve safety, raising its StrongReject score from 0.741 to 0.935 and its Safe-RLHF accuracy from 77.76\% to 90.64\% without any human annotations, and reaching performance close to that of much larger state-of-the-art models.
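To illustrate the two-stage framework described above, the following is a minimal, hedged sketch of a COCOA-style co-evolution loop. All names here (revise_constitution, train_to_comply, judge, rl_align) are hypothetical placeholders passed in by the caller, not the authors' implementation; the sketch only mirrors the structure stated in the abstract: revise the constitution from observed behaviors and train for compliance (Stage 1), then use the learned constitution to guide reinforcement learning (Stage 2).

```python
from typing import Callable, List

def cocoa(model, constitution: List[str], prompts: List[str],
          revise_constitution: Callable, train_to_comply: Callable,
          judge: Callable, rl_align: Callable, rounds: int = 3):
    """Co-evolve a constitution and a model over several alignment rounds.

    All callables are hypothetical stand-ins for the paper's components.
    """
    for _ in range(rounds):
        # Stage 1: observe current model behavior, revise the constitution
        # accordingly, and train the model to comply with the new principles.
        behaviors = [model.generate(p) for p in prompts]
        constitution = revise_constitution(constitution, behaviors)
        model = train_to_comply(model, constitution, prompts)

        # Stage 2: score responses against the learned constitution and use
        # that signal as the reward for reinforcement learning.
        def reward_fn(prompt, response):
            return judge(constitution, prompt, response)
        model = rl_align(model, reward_fn, prompts)
    return model, constitution
```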
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: safety and alignment
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 4148