Keywords: LLM, Large Language Model, Feedback, In-place Feedback, Multi-turn, Refinement, Expert-in-the-loop, Human-AI collaboration
Abstract: Recent advances in large language models (LLMs) have made it feasible to obtain high-quality first drafts for complex tasks. However, these drafts often contain subtle factual or logical errors, and even with iterative expert feedback, producing an accurate final answer remains difficult: prior work has shown that LLMs struggle to reliably incorporate multi-turn feedback. In this work, we introduce in-place feedback, a novel interaction paradigm in which users directly edit an LLM’s previous response, and the model then conditions on this edited response to generate its revision. In-place feedback consistently outperforms standard multi-turn feedback across several benchmarks while requiring fewer tokens. Further analysis reveals that in-place feedback applies corrections more reliably and propagates them to subsequent reasoning. Our findings suggest that editing errors directly is a more natural and effective mechanism for LLM-expert collaboration.
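A minimal sketch of how the two interaction paradigms described in the abstract could be instantiated on top of a standard chat-message history. The message schema and function names below are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of multi-turn vs. in-place feedback.
# The {"role": ..., "content": ...} message format and the revision prompt
# are assumptions for illustration only.

def multi_turn_feedback(history, feedback_text):
    """Standard paradigm: append the expert's feedback as a new user turn."""
    return history + [{"role": "user", "content": feedback_text}]

def in_place_feedback(history, edited_response):
    """In-place paradigm: overwrite the model's most recent response with the
    expert's directly edited version, then ask the model to revise."""
    revised = list(history)
    # Replace the last assistant turn with the edited text.
    for i in range(len(revised) - 1, -1, -1):
        if revised[i]["role"] == "assistant":
            revised[i] = {"role": "assistant", "content": edited_response}
            break
    revised.append({"role": "user",
                    "content": "Please revise your answer, preserving the edits above."})
    return revised

# Example usage with a toy history containing one factual error.
history = [
    {"role": "user", "content": "What year did Apollo 11 land on the Moon?"},
    {"role": "assistant", "content": "Apollo 11 landed on the Moon in 1968."},
]
print(in_place_feedback(history, "Apollo 11 landed on the Moon in 1969."))
```

The key difference is that multi-turn feedback adds a new turn describing the error, whereas in-place feedback rewrites the erroneous turn itself before the model generates again.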
Paper Type: Long
Research Area: Human-AI Interaction/Cooperation and Human-Centric NLP
Research Area Keywords: Dialogue and Interactive Systems, Human-Centered NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches low compute settings-efficiency
Languages Studied: English
Submission Number: 5035