Position: Humans Don't Need to Know the Answer to Help: Intuition Steers Disagreement in Multi-Agent LLMs
Keywords: Multi-agent LLMs, Human-in-the-loop, Human feedback, Multi-agent debate
TL;DR: Even without knowing the answer, intuitive human judgments can resolve disagreements in multi-agent LLMs, improving accuracy with minimal cost and no expert knowledge required.
Abstract: This position paper argues that even when humans lack the correct answer or problem-solving expertise, their intuitive judgments can still meaningfully improve the performance of multi-agent LLMs. Collaboratively leveraging multiple LLMs has emerged as an effective strategy for enhancing problem-solving capabilities by exploiting complementary specializations and enabling mutual verification among agents. However, when disagreements arise, agents following correct reasoning paths can be misled or overwhelmed by those following incorrect ones, degrading the final answer. We show that human feedback, even from non-experts, can effectively steer collaborative debates toward more accurate outcomes when it is focused on agent disagreements and presented as simplified binary choices through LLM-generated summaries. Drawing on insights from cognitive science and collective intelligence, we demonstrate that human intuition, though uninformed, can provide low-cost, high-impact guidance at inference time. This challenges the prevailing assumption that useful feedback must come from experts and offers a practical, scalable mechanism for integrating human input into multi-agent AI systems.
Submission Number: 43
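To make the mechanism described in the abstract concrete, the following is a minimal illustrative sketch of a disagreement-focused, binary-choice feedback loop. It is not the authors' implementation: `query_agent`, `summarize_disagreement`, and `ask_human` are hypothetical stand-ins (a real system would call LLM APIs and collect an actual human choice), and the loop structure simply assumes that human guidance is injected only when agents disagree.

```python
from collections import Counter

def query_agent(agent_id: int, question: str, guidance: str = "") -> str:
    """Hypothetical stand-in for one LLM agent's answer to `question`.
    A real system would query an LLM here; this stub only illustrates the flow."""
    canned = {0: "A", 1: "B", 2: "A"}
    return canned[agent_id]

def summarize_disagreement(question: str, answers: list[str]) -> tuple[str, str]:
    """Hypothetical stand-in for an LLM-generated summary that reduces the
    agents' conflicting positions to a simplified binary choice."""
    top_two = [a for a, _ in Counter(answers).most_common(2)]
    return top_two[0], top_two[1]

def ask_human(option_a: str, option_b: str) -> str:
    """Non-expert intuitive judgment: pick whichever summarized option seems
    more plausible. Placeholder for an actual human binary choice."""
    return option_a

def debate_with_human_feedback(question: str, n_agents: int = 3, rounds: int = 2) -> str:
    """Run a multi-agent debate, inserting cheap binary human feedback
    only at points of disagreement."""
    guidance = ""
    answers: list[str] = []
    for _ in range(rounds):
        answers = [query_agent(i, question, guidance) for i in range(n_agents)]
        if len(set(answers)) == 1:      # agents agree: no human input needed
            return answers[0]
        a, b = summarize_disagreement(question, answers)
        picked = ask_human(a, b)        # intuitive binary pick, not an expert answer
        guidance = f"A human reviewer found this position more plausible: {picked}"
    return Counter(answers).most_common(1)[0][0]  # fall back to majority vote

print(debate_with_human_feedback("Which option is correct, A or B?"))
```

The sketch reflects the paper's core claim: the human never supplies the answer, only a low-cost binary preference over LLM-summarized positions, which is then fed back as guidance for the next debate round.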