Abstract: Recent advances in Large Language Models (LLMs) allow agents to execute complex natural language tasks. Many LLM applications, such as support agents, teaching assistants, and interactive bots, involve multi-turn conversations. However, it remains challenging to control LLMs over the course of such interactions, particularly when the LLM's behavior must be adjustable from turn to turn. In this paper, we present Retcon, a prompting technique designed to provide turn-level control over LLMs in conversations. We demonstrate that it performs significantly better than traditional techniques such as zero-shot and few-shot prompting.
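The abstract does not describe how Retcon works, so the following is only a generic, hypothetical sketch of what "turn-level control" in a multi-turn conversation can mean, not the authors' method. The `call_llm` callable, `run_conversation` helper, and per-turn instruction list are assumed names introduced purely for illustration.

```python
# Minimal sketch of turn-level control in a multi-turn conversation.
# `call_llm` is a hypothetical stand-in for any chat-completion backend;
# the per-turn instructions are illustrative, not the paper's prompts.
from typing import Callable

def run_conversation(
    call_llm: Callable[[list[dict]], str],
    user_turns: list[str],
    turn_instructions: list[str],
) -> list[str]:
    """Interleave user turns with per-turn control instructions.

    Unlike a single static system prompt (zero-shot), the instruction
    governing the assistant's behavior can change at every turn.
    """
    messages: list[dict] = []
    replies: list[str] = []
    for user_turn, instruction in zip(user_turns, turn_instructions):
        # Refresh the behavioral instruction for this turn only.
        messages.append({"role": "system", "content": instruction})
        messages.append({"role": "user", "content": user_turn})
        reply = call_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies
```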
Paper Type: Short
Research Area: Dialogue and Interactive Systems
Research Area Keywords: Dialogue and Interactive Systems, Generation, Language Modeling, NLP Applications
Contribution Types: NLP engineering experiment, Theory
Languages Studied: English
Submission Number: 875