Abstract: We describe a machine playtesting system that combines two paradigms of artificial intelligence, learning and tree search, and places them in the hands of independent game developers. This integrated approach has shown great success in Go-playing systems such as AlphaGo and AlphaZero, but until now it has not been available to those outside of artificial intelligence labs. Our system expands the Monster Carlo machine playtesting framework for Unity games by integrating its tree search capabilities with the behavior cloning features of Unity's Machine Learning Agents Toolkit. Because experience gained in one playthrough can now transfer usefully to other playthroughs via imitation learning, the new system overcomes a serious limitation of its predecessor on stochastic games, where memorizing a single optimal solution is ineffective. Additionally, learning allows search-based automated play to be bootstrapped from examples of human play styles or even from the best of its own past experiences. In this paper we demonstrate that our framework discovers higher-scoring and more representative play with minimal need for machine learning or search expertise.
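The combination the abstract describes, using a behavior-cloned policy to bias tree search, can be illustrated with a minimal sketch. This is not the paper's actual implementation: the toy one-step stochastic game, the demonstration data, and the PUCT-style selection rule below are all illustrative assumptions. The idea shown is that action frequencies learned from example plays act as a prior that steers search toward human-like moves, while accumulated value estimates let it surpass them.

```python
import math
import random

# Toy stochastic one-step game: each action yields a noisy reward.
# These "true" action values are purely illustrative.
TRUE_MEANS = {"a": 0.2, "b": 0.8, "c": 0.5}

def play(action, rng):
    """Simulate one noisy playthrough of the chosen action."""
    return TRUE_MEANS[action] + rng.uniform(-0.1, 0.1)

# Stand-in for behavior cloning: a prior from hypothetical human demos,
# here just the empirical frequency of each demonstrated action.
DEMOS = ["b", "b", "c", "b", "a"]
PRIOR = {a: DEMOS.count(a) / len(DEMOS) for a in TRUE_MEANS}

def puct_search(n_iters=2000, c=1.0, seed=0):
    """Search biased by the cloned prior, AlphaZero-style (PUCT)."""
    rng = random.Random(seed)
    visits = {a: 0 for a in TRUE_MEANS}
    total_value = {a: 0.0 for a in TRUE_MEANS}
    for t in range(1, n_iters + 1):
        def score(a):
            # Exploit the mean observed value; explore in proportion
            # to the prior learned from demonstrations.
            q = total_value[a] / visits[a] if visits[a] else 0.0
            return q + c * PRIOR[a] * math.sqrt(t) / (1 + visits[a])
        a = max(visits, key=score)
        total_value[a] += play(a, rng)
        visits[a] += 1
    # Recommend the most-visited action, as in AlphaZero-style systems.
    return max(visits, key=visits.get)

print(puct_search())  # "b": the prior and the search agree here
```

Because the demonstrations already favor the best action, the prior accelerates convergence; if the demos favored a weaker action, the value term would still pull the search toward the true optimum over enough iterations.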