Promoting human-AI interaction makes a better adoption of deep reinforcement learning: a real-world application in game industry

Published: 01 Jan 2024, Last Modified: 07 Mar 2025 · Multim. Tools Appl. 2024 · CC BY-SA 4.0
Abstract: Deep reinforcement learning (DRL) has been widely employed in the game industry, mainly for building automatic game agents. While its performance and efficiency significantly outperform those of traditional approaches, the lack of model transparency constrains the interaction between the model and human operators, degrading the practicality of DRL methods. In this paper, we propose to mitigate this human-AI interaction issue in a game-industry scenario. Existing methods either require repetitive execution of DRL or are designed for specific tasks, and are therefore not applicable to our deployment scenario. Considering that different games may be served by different DRL agents, we develop a post-hoc explanation framework that treats the original DRL model as a black box and is thus applicable to any DRL-based agent. Within the framework, a specially selected student model, one that is already well studied for model explanation, is employed to learn the decision policy of the trained DRL model. By explaining the student model, indirect but practical explanations can be obtained for the original DRL model. Based on this information, the interaction between human operators and AI agents can be enhanced, benefiting the deployment of DRL. Finally, using a dataset from a real-world production game, we conduct experiments and user studies that illustrate the effectiveness of the proposed procedure from both objective and subjective perspectives.
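The framework described in the abstract can be illustrated with a minimal sketch: query the trained agent as a black box, distill its decisions into an interpretable student model, and read explanations off the student. All names, features, and the stand-in policy below are hypothetical placeholders, not the paper's actual model or data; a depth-limited decision tree is used here as one common choice of explainable student.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def blackbox_policy(states):
    """Stand-in for the trained DRL agent: maps game states to actions.

    Hypothetical rule: attack (1) when health is high and the enemy is
    close, otherwise retreat (0). The real agent is an opaque network.
    """
    return ((states[:, 0] > 0.5) & (states[:, 1] < 0.3)).astype(int)

# 1) Query the black-box agent on sampled game states.
feature_names = ["health", "enemy_dist", "ammo", "cover"]  # assumed features
states = rng.random((5000, 4))
actions = blackbox_policy(states)

# 2) Train an interpretable student model to imitate the agent's policy.
student = DecisionTreeClassifier(max_depth=3, random_state=0)
student.fit(states, actions)

# 3) Measure fidelity: how often the student matches the black box on
#    held-out states (a common sanity check before trusting the explanation).
test_states = rng.random((2000, 4))
fidelity = (student.predict(test_states) == blackbox_policy(test_states)).mean()
print(f"fidelity: {fidelity:.3f}")

# 4) Explain the original agent indirectly through the student's structure,
#    e.g. which state features drive its decisions.
for name, imp in zip(feature_names, student.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

In this toy setup the student recovers the two decisive features (health and enemy distance) and assigns near-zero importance to the rest; in practice, the fidelity score indicates how faithfully any such explanation reflects the original agent.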