Negotiative Alignment: An interactive approach to human-AI co-adaptation for clinical applications

Published: 06 Mar 2025, Last Modified: 21 Apr 2025 · ICLR 2025 Bi-Align Workshop Poster · CC BY 4.0
Keywords: Negotiative alignment, Radiology, Clinical AI, Behavioral alignment, Dynamic and lifelong alignment, Human-in-the-loop learning, Continuous error detection, High-stakes alignment, Co-adaptation, Healthcare AI, Situated interaction, Co-evolving alignment
TL;DR: We introduce a framework for iterative, feedback-driven alignment in clinical radiology AI, showing how error detection and graded severity scoring can guide more aligned and trustworthy AI.
Abstract: We introduce a conceptual framework for ***negotiative alignment*** in high-stakes clinical AI, where human experts iteratively refine AI outputs rather than issuing binary accept/reject decisions. This approach uses graded feedback---including partial acceptance of useful insights---to systematically flag and score different types of errors in clinical AI outputs. Although we do not present finalized experimental results, we outline a proof-of-concept using a chest radiograph image-report dataset and a multimodal model. These severity-scored errors might guide future targeted model updates. Negotiative alignment grounds each AI-generated report in a continuous, co-adaptive dialogue with clinicians, which has the potential to boost trust, transparency, and reliability in medical diagnostics and beyond.
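
To make the graded-feedback idea concrete, below is a minimal sketch of how severity-scored clinician feedback on individual report findings could be recorded and aggregated to prioritize targeted model updates. The paper does not specify an implementation, so the error categories, the 0-3 severity scale, and all names (`ErrorCategory`, `FindingFeedback`, `prioritize_updates`) are hypothetical illustrations.

```python
# Hypothetical sketch of graded, severity-scored feedback for AI-generated
# radiology report findings; all categories and the 0-3 scale are assumptions,
# not the paper's actual scoring scheme.
from dataclasses import dataclass
from collections import defaultdict
from enum import Enum


class ErrorCategory(Enum):
    """Illustrative error types a clinician might flag in an AI-generated report."""
    MISSED_FINDING = "missed_finding"
    FALSE_FINDING = "false_finding"
    WRONG_SEVERITY = "wrong_severity"
    WRONG_LOCATION = "wrong_location"


@dataclass
class FindingFeedback:
    """One clinician judgment on a single AI-generated finding."""
    finding_text: str
    accepted: bool                        # partial acceptance: keep the useful part
    category: ErrorCategory | None = None # None when no error was flagged
    severity: int = 0                     # 0 = no error, 3 = clinically critical


def prioritize_updates(feedback: list[FindingFeedback]) -> list[tuple[ErrorCategory, int]]:
    """Aggregate severity-weighted error scores per category so that future
    targeted model updates can address the most harmful error types first."""
    scores: dict[ErrorCategory, int] = defaultdict(int)
    for fb in feedback:
        if fb.category is not None:
            scores[fb.category] += fb.severity
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


# Example: one finding accepted outright, one missed finding scored as critical,
# and one accepted finding whose severity wording the clinician corrected.
reviews = [
    FindingFeedback("Small right pleural effusion.", accepted=True),
    FindingFeedback("No acute osseous abnormality.", accepted=False,
                    category=ErrorCategory.MISSED_FINDING, severity=3),
    FindingFeedback("Cardiomegaly, mild.", accepted=True,
                    category=ErrorCategory.WRONG_SEVERITY, severity=1),
]
print(prioritize_updates(reviews))
```

In this sketch, partial acceptance and error scoring are orthogonal: a finding can be kept while still contributing a graded error signal, which is the co-adaptive loop the abstract describes.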
Submission Type: Tiny Paper (2 Pages)
Archival Option: This is an archival submission
Presentation Venue Preference: ICLR 2025
Submission Number: 98