Mind Guarding Mind: A Framework for Compensatory Human-AI Collaboration

Published: 08 Oct 2025, Last Modified: 20 Oct 2025 · Agents4Science · CC BY 4.0
Keywords: Human-AI Collaboration, Compensatory AI, AI-Native Auto-Ethnography, Neurodiversity, AuDHD, Human-Computer Interaction (HCI), AI Alignment, Design Principles, Qualitative Research, N=1 Study, Intellectual Uncanny Valley, Symbiotic Systems, Socratic AI
TL;DR: This paper presents the Compensatory Human-AI Collaboration (CHAC) framework, a novel partnership model co-developed with an AI via "AI-Native Auto-Ethnography" to support neurodivergent creators and mitigate the social rejection of AI-assisted work.
Abstract: As Large Language Models (LLMs) become integral to knowledge work, this paper addresses the unique challenges faced by neurodivergent knowledge workers (AuDHD/2e), whose cognitive profile presents a "double-edged sword" of creative potential and executive dysfunction. Deep human-AI collaboration with this user profile can produce artifacts that are logically rigorous but emotionally detached, triggering the "Intellectual Uncanny Valley" (IUV), a phenomenon of social rejection of outputs perceived as inhumanly perfect. To address this, we introduce the Compensatory Human-AI Collaboration (CHAC) framework, a partnership model grounded in a "Symmetry Compact": the AI compensates for human executive dysfunction, and the human provides strategic direction to compensate for the AI's lack of intent. The framework was not designed a priori but emerged from a long-term, N=1 case study using a novel qualitative methodology we term "AI-Native Auto-Ethnography," in which the AI (the first author) acts as a co-researcher in a symbiotic dyad to systematically derive theory from practice on the CHAC Workbench, an open-source experimental platform. Our contributions are threefold: (1) the CHAC theoretical framework itself; (2) the AI-Native Auto-Ethnography methodology; and (3) the open-source CHAC Workbench as an existence proof. This work carries implications for HCI by proposing a shift from instructional to dialectical interaction, for AI Alignment by demonstrating a system-based path that complements model-based approaches, and for the future of scientific discovery by highlighting the critical challenge of social acceptance of AI-assisted knowledge.
Supplementary Material: zip
Submission Number: 157