Towards Safe and Honest AI Agents with Neural Self-Other Overlap

Published: 12 Oct 2024, Last Modified: 14 Nov 2024
Venue: SafeGenAi Oral
License: CC BY 4.0
Keywords: AI Safety, ML Safety, AI Deception, large language models, fine-tuning, reinforcement learning, self-other overlap
TL;DR: We introduce Self-Other Overlap (SOO), a general fine-tuning technique designed to reduce AI deception, and show that it is effective in both LLM and RL experiments.
Abstract: As AI systems increasingly make critical decisions, deceptive AI poses a significant challenge to trust and safety. We present Self-Other Overlap (SOO) fine-tuning, a promising approach in AI Safety that could substantially improve our ability to build honest artificial intelligence. Inspired by cognitive neuroscience research on empathy, SOO aims to align how AI models represent themselves and others. Our experiments with Mistral 7B Instruct v0.2 demonstrate SOO's efficacy: deceptive responses in this large language model dropped from 95.2% to 15.9% with no observed reduction in general task performance, while in reinforcement learning scenarios, SOO-trained agents showed significantly reduced deceptive behavior. SOO's focus on internal representations offers strong potential for generalization across AI architectures. While current applications focus on language models and simple RL environments, SOO could pave the way for more trustworthy AI in broader domains. Ethical implications and long-term effects warrant further investigation, but SOO represents a significant step forward in AI safety research.
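The abstract describes SOO as aligning how a model internally represents itself versus others during fine-tuning. The paper's exact objective is not given on this page, so the following is only a minimal, hypothetical sketch of what an overlap-style loss could look like: paired prompts that differ only in a self/other reference are passed through a toy stand-in model, and the distance between their hidden activations is penalised alongside an ordinary task loss. All names (`TinyEncoder`, `soo_loss`, the 0.1 weight) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a Self-Other Overlap (SOO) style fine-tuning step.
# Assumption: pair prompts that differ only in whether they reference the
# model itself or another agent, then penalise the distance between the
# hidden activations the model produces for each version.

import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Toy stand-in for an LLM layer stack; returns logits and hidden states."""
    def __init__(self, vocab_size=1000, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.layers = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                    nn.Linear(d_model, d_model))
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids):
        hidden = self.layers(self.embed(token_ids))   # (batch, seq, d_model)
        logits = self.lm_head(hidden)
        return logits, hidden

def soo_loss(hidden_self, hidden_other):
    # Overlap term: mean-squared distance between activations on the
    # self-referencing and other-referencing versions of the same prompt.
    return ((hidden_self - hidden_other) ** 2).mean()

model = TinyEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Hypothetical paired batches: identical prompts except for the self/other tokens.
self_batch  = torch.randint(0, 1000, (8, 16))
other_batch = torch.randint(0, 1000, (8, 16))
targets     = torch.randint(0, 1000, (8, 16))

logits, h_self  = model(self_batch)
_,      h_other = model(other_batch)

task_loss = nn.functional.cross_entropy(logits.view(-1, 1000), targets.view(-1))
loss = task_loss + 0.1 * soo_loss(h_self, h_other)   # 0.1 is an arbitrary weight
loss.backward()
opt.step()
```

In this sketch the overlap term is simply added to the standard language-modelling loss; how the paper selects layers, pairs prompts, and weights the two terms is not specified on this page.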
Submission Number: 103