Alignment Has a Fantasia Problem
Keywords: human-AI interaction, alignment, behavioral bias, cognitive support
TL;DR: AI should provide cognitive support by actively helping users form and refine their intent over time.
Abstract: Modern AI assistants are trained to follow instructions, implicitly assuming that users can clearly articulate their goals and the kind of assistance they need. Decades of behavioral research, however, show that people often engage with AI systems before their goals are fully formed. When AI systems treat prompts as complete expressions of intent, they can appear useful and convenient while remaining misaligned with users' underlying needs. We refer to these failures as *Fantasia interactions*.
We argue that Fantasia interactions demand a rethinking of alignment research: rather than treating users as rational oracles, AI should provide cognitive support by actively helping users form and refine their intent over time. This requires an interdisciplinary approach that bridges machine learning, interface design, and behavioral science. We synthesize insights from these fields to characterize the mechanisms and failure modes of Fantasia interactions. We then show why existing interventions are insufficient and propose a research agenda for designing and evaluating AI systems that better help humans navigate uncertainty in their everyday tasks.
Paper Type: Blue Sky Paper
Supplementary Material: pdf
Submission Number: 63