Aligning Compound AI Systems via System-level DPO

Published: 25 Feb 2025 (last modified: 25 Feb 2025) · MARW at AAAI 2025 · CC BY 4.0
Keywords: alignment, compound AI system, preference learning, DPO, multi-agent
Abstract: Compound AI systems, comprising multiple interacting components such as LLM agents and external tools, achieve state-of-the-art results across diverse tasks. It is therefore crucial to align the components within a system so that it produces consistent results that match human expectations. However, conventional alignment methods, such as Direct Preference Optimization (DPO), are not directly applicable to compound AI systems, for two main reasons: interactions between components are often non-differentiable, making end-to-end gradient optimization infeasible, and system-level preferences cannot be directly translated into component-level preferences, further complicating alignment. We address these issues by formulating compound AI systems as Directed Acyclic Graphs (DAGs) that capture the connections between agents and the data generation process. We propose system-level DPO (SysDPO), which jointly aligns compound systems by adapting DPO to operate on these DAGs. We study the joint alignment of an LLM and a diffusion model to demonstrate the effectiveness of our approach. Our exploration provides insights into the alignment of compound AI systems and lays a foundation for future advancements.
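To make the idea concrete, here is a minimal sketch of what such a system-level objective could look like, assuming the standard DPO loss and a factorization of the system's likelihood along the DAG; the notation (component set $V$, intermediate outputs $z_v$, parent set $\mathrm{pa}(v)$, inverse temperature $\beta$) is illustrative and not taken from the paper:

$$\log \pi_\theta(y \mid x) \;=\; \sum_{v \in V} \log \pi_\theta^{(v)}\!\big(z_v \mid z_{\mathrm{pa}(v)},\, x\big)$$

$$\mathcal{L}_{\mathrm{SysDPO}} \;=\; -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\!\left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} \;-\; \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]$$

Here $z_v$ is the output of component $v$ given its parents' outputs, $y_w$ and $y_l$ are the system-level preferred and dispreferred outputs, and $\pi_{\mathrm{ref}}$ is the reference (pre-alignment) system. In this sketch, the system-level preference decomposes into a sum of per-component log-likelihood ratios, so each differentiable component receives a gradient with respect to its own parameters even when the wiring between components is not itself differentiable.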
Submission Number: 14