Human–AI Safety: A Descendant of Generative AI and Control Systems Safety

TMLR Paper 2910 Authors

22 Jun 2024 (modified: 21 Nov 2024) · Rejected by TMLR · CC BY 4.0
Abstract: Artificial intelligence (AI) is interacting with people at an unprecedented scale, offering new avenues for immense positive impact, but also raising widespread concerns around the potential for individual and societal harm. Today, the predominant paradigm for human–AI safety focuses on fine-tuning the generative model’s outputs to better agree with human-provided examples or feedback. In reality, however, the consequences of an AI model’s outputs cannot be determined in isolation: they are tightly entangled with the responses and behavior of human users over time. In this paper, we distill key complementary lessons from AI safety and control systems safety, highlighting open challenges as well as key synergies between both fields. We then argue that meaningful safety assurances for advanced AI technologies require reasoning about how the feedback loop formed by AI outputs and human behavior may drive the interaction towards different outcomes. To this end, we introduce a unifying formalism to capture dynamic, safety-critical human–AI interactions and propose a concrete technical roadmap towards next-generation human-centered AI safety.
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Swarat_Chaudhuri1
Submission Number: 2910