Keywords: game, test, automation, imitation, pixel, bot, learning, reinforcement, reward
Abstract: Automated game inspection is increasingly crucial for maintaining the quality of complex 3D gaming environments. However, most current automation approaches are deterministic and require intrusive integration with the game engine. Artificial intelligence (AI) agents trained via imitation learning (IL) present a versatile alternative, as they can learn from quality engineer demonstrations. Despite this potential, deploying AI agents effectively in game inspection faces several obstacles. These challenges include the need for demonstration sample efficiency, the lack of explicit reward signals, restricted access to supplementary modalities or internal game data, and the critical demand for rapid inference speed. To address these issues, we propose an AI agent architecture named PixelBot. This architecture primarily utilizes pixel data (i.e., RGB frames) while maintaining sample efficiency for training with limited data. Our agent training methodology involves a two-stage process: first, a general approach for generating progress rewards from offline demonstrations, followed by return-modulated Behavioral Cloning (BC). We evaluated PixelBot across three Unreal Engine gaming environments, comparing its performance against established BC baselines. Our results demonstrate that PixelBot achieves an optimal balance between test imitation performance and parameter efficiency.
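The abstract describes a two-stage pipeline: progress rewards are derived from offline demonstrations, and their returns then modulate the behavioral cloning objective. A minimal sketch of one plausible form of return-modulated BC is shown below, assuming a discrete action space and exponentiated return-to-go weights; the function names, the discounting scheme, and the weighting rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Return-to-go at each timestep of one demonstration trajectory."""
    returns = np.zeros(len(rewards), dtype=float)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def return_modulated_bc_loss(logits, actions, returns, temperature=1.0):
    """Cross-entropy imitation loss, re-weighted so that transitions
    with higher estimated return contribute more (illustrative sketch)."""
    # Numerically stable softmax over action logits
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Negative log-likelihood of the demonstrated actions
    nll = -np.log(probs[np.arange(len(actions)), actions] + 1e-12)
    # Exponentiated returns as normalized per-sample weights
    w = np.exp(returns / temperature)
    w = w / w.sum()
    return float((w * nll).sum())
```

With uniform logits the weighted loss reduces to log(num_actions), since the weights sum to one; in training, the weighting simply biases the cloning objective toward higher-progress segments of the demonstrations.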
Submission Number: 19