Keywords: Control, Alignment
TL;DR: Alignment research should integrate formal optimal control theory to improve the reliability of advanced AI systems
Abstract: This position paper argues that formal optimal control theory should be central to AI alignment research, offering a perspective distinct from prevailing AI safety and security approaches. While recent work in AI safety and mechanistic interpretability has advanced formal methods for alignment, these methods often fall short of the generalisation demanded of control frameworks in other engineering disciplines. There is also little research into how to make different alignment and control protocols interoperable. We argue that by recasting alignment through the principles of formal optimal control, and by framing it as a hierarchical stack of layers, from physical to sociotechnical, at which controls may be applied, we can better understand both the potential and the limitations of controlling frontier models and agentic AI systems. To this end, we introduce an \textit{Alignment Control Stack} together with formal methods to address these challenges, and we demonstrate their utility in simulated experiments. Such analysis is also key to the assurances that governments and regulators will need if AI technologies are to sustainably benefit the community. Our position is that this approach bridges the well-established, empirically validated methods of optimal control with practical deployment considerations, yielding a more comprehensive alignment framework and improving how we approach safety and reliability for advanced AI systems.
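For readers unfamiliar with the formalism the abstract invokes, a standard discrete-time finite-horizon optimal control problem (a textbook formulation, not taken from the paper; the symbols $x_t$, $u_t$, $f$, $c$, $c_T$ are generic placeholders) can be written as

$$\min_{u_0,\dots,u_{T-1}} \; \sum_{t=0}^{T-1} c(x_t, u_t) + c_T(x_T) \quad \text{s.t.} \quad x_{t+1} = f(x_t, u_t), \; x_0 \ \text{given},$$

where $x_t$ is the system state, $u_t$ the control input applied at step $t$, $f$ the system dynamics, and $c$, $c_T$ stage and terminal costs; in an alignment setting, such costs would be the place to encode safety objectives and constraints.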
Submission Number: 218