Learning to repair: Repairing model output errors after deployment using a dynamic memory of feedback

Anonymous

08 Mar 2022 (modified: 05 May 2023) NAACL 2022 Conference Blind Submission Readers: Everyone
Paper Link: https://openreview.net/forum?id=RlwvZrvcv8
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Abstract: Large language models (LMs), while powerful, are not immune to mistakes and can be difficult to retrain. Our goal is for an LM to continue to improve after deployment, without retraining, using feedback from the user. Our approach pairs an LM with (i) a growing memory of cases where the user identified an output error and provided general feedback on how to correct it, and (ii) a corrector model, trained to translate this general feedback into specific edits that repair the model output. Given a new, unseen input, our model can then use feedback from similar past cases to repair output errors that may occur. We instantiate our approach using an existing, fixed model for script generation that takes a goal (e.g., "bake a cake") and generates a partially ordered sequence of actions to achieve that goal, sometimes containing errors. Our memory-enhanced system learns to apply user feedback to repair such errors (up to 30 points of improvement), while making a start at avoiding similar past mistakes on new, unseen examples (up to 7 points of improvement in a controlled setting). This is a first step towards strengthening deployed models, potentially broadening their utility. Our code and data are available at https://github.com/allenai/interscript
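The sketch below illustrates the memory-plus-corrector loop the abstract describes: store past user feedback, retrieve the feedback from the most similar prior case for a new input, and hand it to a corrector that edits the generated script. All names here (FeedbackMemory, repair, the token-overlap retriever) are illustrative assumptions, not the paper's actual API; the real implementation is in the linked repository.

```python
# Minimal sketch of the feedback-memory idea, assuming hypothetical names.
# The paper's actual retriever and corrector model are more sophisticated;
# see https://github.com/allenai/interscript for the real implementation.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class FeedbackMemory:
    """Growing store of (goal, user feedback) pairs collected after deployment."""
    cases: List[Tuple[str, str]] = field(default_factory=list)

    def add(self, goal: str, feedback: str) -> None:
        self.cases.append((goal, feedback))

    def retrieve(self, goal: str) -> Optional[str]:
        """Return feedback from the most similar past case.
        Token overlap is a stand-in for whatever similarity the system uses."""
        query = set(goal.lower().split())
        best, best_score = None, 0
        for past_goal, feedback in self.cases:
            score = len(query & set(past_goal.lower().split()))
            if score > best_score:
                best, best_score = feedback, score
        return best


def repair(script: List[str], feedback: str) -> List[str]:
    """Placeholder corrector: in the paper, a trained model translates the
    general feedback into specific edits of the generated script."""
    return script + [f"[edit suggested by feedback: {feedback}]"]


# Usage: retrieve feedback from a similar past case and apply the corrector.
memory = FeedbackMemory()
memory.add("bake a cake", "preheat the oven before mixing the batter")

goal = "bake a loaf of bread"
script = ["mix ingredients", "put dough in oven"]
past_feedback = memory.retrieve(goal)
if past_feedback:
    script = repair(script, past_feedback)
print(script)
```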
Presentation Mode: This paper will be presented in person in Seattle
Copyright Consent Signature (type Name Or NA If Not Transferrable): Niket Tandon
Copyright Consent Name And Address: Allen Institute for AI, 2157 N Northlake Way, Seattle 98103