Keywords: Entropy reward, Information gain, Socratic paradigm
Abstract: A fundamental bottleneck in human-AI collaboration is the "intention expression gap": the difficulty humans face in effectively conveying complex, high-dimensional thoughts to AI.
This challenge often traps users in inefficient trial-and-error loops and is exacerbated by users' diverse levels of expertise.
We reframe this problem from passive instruction following to a Socratic collaboration paradigm, proposing an agent that actively probes for information to resolve its uncertainty about user intent.
We name the proposed agent Nous and train it to acquire proficiency in this inquiry policy.
The core mechanism of Nous is a training framework grounded in first principles of information theory.
Within this framework, we define the information gain from dialogue as an intrinsic reward signal, which is fundamentally equivalent to the reduction of Shannon entropy over a structured task space.
This reward design enables us to avoid reliance on costly human preference annotations or external reward models.
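As a minimal sketch of this reward (our notation here, not necessarily the paper's exact formulation; the symbols $p_t$ and $\mathcal{T}$ are assumptions), let $p_t$ denote the agent's belief over a structured task space $\mathcal{T}$ after $t$ dialogue turns; the per-turn intrinsic reward is then the information gain
$$
r_t \;=\; H(p_{t-1}) - H(p_t), \qquad H(p) \;=\; -\sum_{\tau \in \mathcal{T}} p(\tau)\,\log p(\tau),
$$
so a question is rewarded in proportion to how much it reduces the agent's Shannon-entropy uncertainty about the user's intent.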
To validate our framework, we develop an automated simulation pipeline to generate a large-scale, preference-based dataset for the challenging task of scientific diagram generation.
Comprehensive experiments, including ablations, subjective and objective evaluations, and tests across user expertise levels, demonstrate the effectiveness of our proposed framework. Nous achieves leading efficiency and output quality, while remaining robust to varying user expertise.
Moreover, its design is domain-agnostic, and we show evidence of generalization beyond diagram generation.
These results indicate that our work offers a principled, scalable, and adaptive paradigm for resolving uncertainty about user intent in complex human-AI collaboration.
Supplementary Material: zip
Primary Area: learning theory
Submission Number: 375