Improving Sample-Efficiency in Reinforcement Learning for Dialogue Systems by Using Trainable-Action-Mask

Published: 01 Jan 2020, Last Modified: 12 May 2025 · ICASSP 2020 · CC BY-SA 4.0
Abstract: By interacting with humans and learning from reward signals, reinforcement learning is an ideal way to build conversational AI. Given the cost of collecting real users' responses, improving sample-efficiency has been a key issue when applying reinforcement learning to real-world spoken dialogue systems (SDS). Handcrafted action masks are commonly used to rule out impossible actions and accelerate training. However, a handcrafted action mask can barely be generalized to unseen domains. In this paper, we propose a trainable-action-mask (TAM), which learns from data automatically without handcrafted rules. In our experiments on the Cambridge Restaurant domain, TAM requires only 30% of the training data needed by the baseline to reach an 80% success rate, and it also shows robustness to noisy environments.
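To make the masking idea concrete, here is a minimal sketch of how a (handcrafted) action mask is typically applied during greedy action selection: invalid actions have their value estimates set to negative infinity so they can never be chosen. This is an illustrative assumption about the general technique, not the paper's trainable TAM; the function name and shapes are hypothetical.

```python
import numpy as np

def masked_action_selection(q_values, action_mask):
    """Greedy action selection with an action mask.

    q_values:    estimated action values, shape (n_actions,)
    action_mask: binary array, 1 = action allowed, 0 = ruled out
    """
    # Masked-out actions get -inf so argmax can never select them.
    masked_q = np.where(action_mask.astype(bool), q_values, -np.inf)
    return int(np.argmax(masked_q))

# Example: 5 candidate dialogue actions; actions 1 and 3 are masked out.
q = np.array([0.2, 0.9, 0.1, 1.5, 0.4])
mask = np.array([1, 0, 1, 0, 1])
print(masked_action_selection(q, mask))  # picks the best *allowed* action
```

A trainable mask replaces the fixed binary `mask` with one predicted from the dialogue state, which is what lets the approach transfer to domains where no handcrafted rules exist.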