Retrospective Learning from Interactions

24 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: continual learning, natural language processing, interactive learning, reinforcement learning
TL;DR: Methods for continual LLM learning from implicit feedback in human-LLM interactions, without any annotation.
Abstract: Multi-turn language interactions naturally include implicit feedback signals. For example, if a listener responds in an unexpected way to an instruction, the instructor may rephrase it, express frustration, or pivot to an alternative task. These signals are task-independent and occupy a relatively constrained subspace of language, allowing a language model to identify them even if it fails on the actual task. This holds the promise of continually learning and improving from interactions without additional annotations. We introduce *ReSpect*, a method to learn from signals in past interactions via retrospection. We deploy *ReSpect* in a new multimodal interaction scenario, where humans instruct a multimodal LLM to solve an abstract reasoning task with a combinatorial solution space. Through thousands of interactions with humans, we show how *ReSpect* gradually improves the task completion rate from 31% to 82%, all without any external annotation.
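The abstract describes a retrospection loop: revisit past interactions, decode the implicit feedback carried by the human's follow-up turns, and reuse positively-signalled turns as self-annotated training data. The sketch below is purely illustrative of that loop; every name in it (`Turn`, `decode_feedback`, `retrospect`, the keyword heuristics) is a hypothetical stand-in and not taken from the paper, which would use a language model rather than keyword matching to decode feedback.

```python
# Illustrative sketch of retrospection-style learning from implicit feedback.
# All identifiers here are hypothetical, NOT the paper's implementation; this
# only mirrors the high-level loop the abstract describes.

from dataclasses import dataclass
from typing import List


@dataclass
class Turn:
    instruction: str    # what the human asked for
    model_action: str   # what the model did
    followup: str       # the human's next utterance (carries implicit feedback)


def decode_feedback(followup: str) -> int:
    """Toy feedback decoder: +1 if the follow-up looks like approval,
    -1 if it looks like a rephrase or frustration, 0 otherwise.
    A real system would query an LLM here instead of keyword matching."""
    text = followup.lower()
    if any(w in text for w in ("thanks", "great", "next")):
        return 1
    if any(w in text for w in ("no,", "i meant", "again", "not what")):
        return -1
    return 0


def retrospect(history: List[Turn]) -> List[Turn]:
    """Keep past turns whose decoded feedback is positive; these become
    self-annotated training examples for the next round of learning."""
    return [t for t in history if decode_feedback(t.followup) > 0]


if __name__ == "__main__":
    history = [
        Turn("select the three red pieces", "selected two blue pieces",
             "No, I meant the red ones."),
        Turn("select the three red pieces", "selected three red pieces",
             "Great, next select the star."),
    ]
    kept = retrospect(history)
    print(f"{len(kept)} of {len(history)} turns kept for the next training round")
```

Under these assumptions, each deployment round alternates interaction with humans and retrospective relabeling, so no external annotation enters the loop.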
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3970