Abstract: Generating multiple text sequences and refining them through feedback is essential for improving output quality in many NLP tasks. While Large Language Models can leverage iterative feedback during inference, smaller models often lack this capability due to limited capacity and the absence of suitable training paradigms. In this paper, we propose a novel Feedback-Aware inference approach that enables iterative sequence generation with integrated feedback signals. Our method allows models to generate multiple sequences, incorporate feedback from previous iterations, and refine their outputs accordingly. The approach adjusts dynamically to different quality metrics, making it adaptable to various contexts and objectives. We evaluate it on two distinct tasks, Answer Selection for Question Generation and Keyword Generation, demonstrating its generalizability and effectiveness. Results show that our method outperforms strong baselines, maintaining high performance across iterations and achieving superior results even with smaller, open-source models.
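The abstract does not specify the prompting or scoring details, so the following is only a minimal Python sketch of how such a generate-score-refine loop could be structured, assuming a candidate generator and a pluggable quality metric; all function names (generate, score, feedback_aware_inference) and the feedback-prompt format are hypothetical, not the paper's actual method.

```python
# Illustrative sketch of a feedback-aware inference loop: generate multiple
# candidate sequences, score them with a task-specific metric, and feed the
# scored candidates back into the next iteration's prompt for refinement.
from typing import Callable, List, Tuple


def feedback_aware_inference(
    generate: Callable[[str], List[str]],  # model call: prompt -> candidate sequences
    score: Callable[[str], float],         # task-specific quality metric (swappable)
    prompt: str,
    num_iterations: int = 3,
) -> str:
    """Iteratively generate candidates and refine them using scored feedback."""
    best_seq, best_score = "", float("-inf")
    current_prompt = prompt
    for _ in range(num_iterations):
        candidates = generate(current_prompt)
        scored: List[Tuple[str, float]] = [(c, score(c)) for c in candidates]
        top_seq, top_score = max(scored, key=lambda x: x[1])
        if top_score > best_score:
            best_seq, best_score = top_seq, top_score
        # Feedback signal: expose prior attempts and their scores so the model
        # can condition its next generation on what worked and what did not.
        feedback = "\n".join(f"[score={s:.2f}] {c}" for c, s in scored)
        current_prompt = (
            f"{prompt}\n\nPrevious attempts and feedback:\n{feedback}\n\n"
            "Generate improved alternatives."
        )
    return best_seq
```

Because the metric is passed in as a callable, the same loop adapts to different objectives (e.g. answer-selection accuracy or keyword overlap), mirroring the abstract's claim of adjusting dynamically to different quality metrics.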
Paper Type: Long
Research Area: Generation
Research Area Keywords: inference methods, human-in-the-loop, interactive and collaborative generation
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 2230