Abstract: In many domains, dialogue systems need to work collaboratively with users to successfully reconstruct the meaning the user had in mind. In this paper, we show how cognitive models
of users’ communicative strategies can be leveraged in a reinforcement learning approach to
dialogue planning to enable interactive systems to give targeted, effective feedback about the
system’s understanding. We describe a prototype system that collaborates with users on reference tasks
in which arbitrarily varying color patches must be distinguished from similar distractors. Experiments
with crowd workers and analyses of our learned policies show that our approach yields
context-sensitive clarification strategies that focus on key missing information, elicit correct
answers the system can understand, and improve overall dialogue success.
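
To make the idea of learning context-sensitive clarification concrete, the sketch below is a minimal, hypothetical illustration (not the paper's system): a tabular Q-learning agent in a toy color-reference game chooses between confirming its current interpretation and asking about a specific missing attribute. All names (`simulate_user`, `ask_hue`, the reward values) are assumptions introduced for illustration only.

```python
# Minimal illustrative sketch (assumed setup, not the authors' implementation):
# tabular Q-learning over a toy dialogue state that tracks which color
# attributes the system has already resolved.
import random
from collections import defaultdict

ACTIONS = ["confirm", "ask_hue", "ask_lightness"]

def simulate_user(state, action):
    """Toy user model: a clarification question resolves the attribute it asks
    about; confirming ends the dialogue and succeeds only if both attributes
    are already resolved."""
    hue_known, light_known = state
    if action == "ask_hue":
        return (True, light_known), -1.0, False        # small cost per question
    if action == "ask_lightness":
        return (hue_known, True), -1.0, False
    # confirm: reward depends on whether the referent is fully resolved
    return state, (10.0 if hue_known and light_known else -10.0), True

def train(episodes=5000, alpha=0.1, gamma=0.95, eps=0.1):
    Q = defaultdict(float)
    for _ in range(episodes):
        # Random starting knowledge state: (hue known?, lightness known?)
        state = (random.random() < 0.5, random.random() < 0.5)
        done = False
        while not done:
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, reward, done = simulate_user(state, action)
            target = reward if done else reward + gamma * max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = nxt
    return Q

if __name__ == "__main__":
    Q = train()
    for state in [(False, False), (True, False), (False, True), (True, True)]:
        best = max(ACTIONS, key=lambda a: Q[(state, a)])
        print(state, "->", best)
```

Under this toy reward structure, the learned policy asks only about the attribute that is still missing and confirms once both are known, which mirrors the kind of targeted, context-sensitive clarification behavior the abstract describes.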