Robust Continuous Build-Order Optimization in StarCraft

CoG 2019
Abstract: To solve complex real-world planning problems, it is often beneficial to decompose tasks into high-level and low-level components and optimize their actions separately. Examples of such modularization include car navigation, which pairs high-level path planning with lower-level obstacle-avoidance control, and modern video games, where playing policies are decomposed into strategic ("macro") and tactical ("micro") components. In real-time strategy (RTS) video games such as StarCraft, players face decision problems ranging from economic development to maneuvering units in combat. A popular approach to building AI agents for such complex games is to apply this task decomposition, constructing separate AI systems for each sub-problem and combining them into a complete game-playing agent. Existing AI systems for these games often contain build-order planning components that minimize the makespan for constructing a specific set of units, where the target set is typically decided by hand-coded rules based on human expert knowledge. Drawbacks of this approach include the human expert effort required to construct these rules and a lack of online adaptability to unforeseen circumstances, which can lead to brittle behavior that more advanced opponents can exploit. In this paper we introduce a new robust build-order planning system for RTS games that automatically produces build orders which optimize unit compositions toward strategic game concepts (such as total unit firepower), without the need for specific unit goals. When incorporated into an existing StarCraft AI agent in a real tournament setting, it outperformed the previous state-of-the-art planning system, which relied on hand-coded expert rules for deciding unit compositions.
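
As a rough illustration of the distinction the abstract draws, the sketch below scores candidate build orders by a strategic objective (total firepower achievable within a time budget) rather than by the makespan to a fixed unit-composition goal, and improves them with simple local search. All unit data, numbers, and names (UNITS, simulate, strategic_score, hill_climb) are illustrative assumptions, not the paper's actual model or optimizer.

```python
import random

# Hypothetical, simplified unit data: mineral cost, build time (seconds), and a
# scalar "firepower" contribution. Real StarCraft tech trees also involve gas,
# supply, prerequisites, and production buildings, all omitted here.
UNITS = {
    "marine":   {"cost": 50,  "build_time": 24, "firepower": 6.0},
    "marauder": {"cost": 100, "build_time": 30, "firepower": 10.0},
    "medivac":  {"cost": 100, "build_time": 42, "firepower": 0.0},
}

def simulate(build_order, time_budget=300, income_per_sec=2.0):
    """Very rough forward simulation: accumulate minerals at a fixed rate,
    build units one at a time, and return those finished within the budget."""
    t, minerals, finished = 0.0, 50.0, []
    for unit in build_order:
        spec = UNITS[unit]
        wait = max(0.0, (spec["cost"] - minerals) / income_per_sec)  # wait until affordable
        t += wait + spec["build_time"]
        if t > time_budget:
            break
        minerals += (wait + spec["build_time"]) * income_per_sec - spec["cost"]
        finished.append(unit)
    return finished

def strategic_score(build_order):
    """Objective over a strategic concept (total firepower of the army that
    finishes in time) instead of a makespan toward a fixed unit goal."""
    return sum(UNITS[u]["firepower"] for u in simulate(build_order))

def hill_climb(length=12, iterations=2000, seed=0):
    """Local search over build orders: mutate one entry at a time and keep
    improvements. A stand-in for whatever optimizer the paper actually uses."""
    rng = random.Random(seed)
    names = list(UNITS)
    best = [rng.choice(names) for _ in range(length)]
    best_score = strategic_score(best)
    for _ in range(iterations):
        candidate = list(best)
        candidate[rng.randrange(length)] = rng.choice(names)
        score = strategic_score(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    order, score = hill_climb()
    print("best build order:", order, "firepower:", score)
```

Because the objective is a game concept rather than a fixed unit list, the same search can keep running as the game state changes, which is the kind of online adaptability the abstract contrasts with hand-coded goal rules.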