Stable and Diverse Strategy Learning via Diffusion-Based Co-Evolution in StarCraft II Combat

20 Sept 2025 (modified: 03 Dec 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: competitive multi-agent environment, real-time strategy games, diffusion evolution
Abstract: Effective learning algorithms for agents in multi-agent environments remain a central challenge due to inter-agent dependencies during both training and evaluation. This challenge is amplified by the curl of the fitness landscape, which induces cyclic learning trajectories, especially in competitive settings. To address this, we propose Diffusion Co-evolution, an evolutionary learning framework inspired by diffusion processes. Our method facilitates robust opponent matching by enabling broad exploration across a diverse agent population, and it leverages quality diversity to identify multiple high-performing strategies, even in environments with cyclic learning dynamics. Experiments in a StarCraft II combat environment demonstrate that Diffusion Co-evolution achieves greater stability and strategic diversity than conventional co-evolutionary baselines.
Primary Area: learning on time series and dynamical systems
Submission Number: 23255
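
The abstract describes the method only at a high level, so the following is a minimal, hypothetical sketch of what a diffusion-style co-evolutionary loop could look like: two populations are evaluated against each other on a toy cyclic payoff, and each genome is "denoised" toward a fitness- and locality-weighted neighborhood mean under an annealed noise schedule, which tends to preserve multiple niches. This is not the authors' implementation; every name and constant here (`rps_payoff`, `denoise_step`, `KERNEL_BW`, the schedule) is an assumption for illustration.

```python
"""Illustrative sketch (NOT the paper's code) of a diffusion-style
co-evolutionary loop with an annealed noise schedule."""
import numpy as np

rng = np.random.default_rng(0)

POP, DIM, GENS = 64, 2, 200          # population size, genome dim, generations
SIGMA0, SIGMA_MIN = 0.5, 0.02        # annealed mutation-noise schedule (assumed)
KERNEL_BW = 0.5                      # locality bandwidth; keeps multiple niches


def rps_payoff(a, b):
    """Toy cyclic (rock-paper-scissors-like) payoff between two genomes.

    Stands in for a StarCraft II combat evaluation; the cyclic structure
    mimics the 'curl' that destabilises naive co-evolution."""
    return np.sin(a[0] - b[0]) + 0.5 * np.sin(2.0 * (a[1] - b[1]))


def evaluate(pop, opponents, n_matches=8):
    """Average payoff of each individual against a sample of opponents."""
    fitness = np.zeros(len(pop))
    for i, genome in enumerate(pop):
        idx = rng.choice(len(opponents), size=n_matches, replace=False)
        fitness[i] = np.mean([rps_payoff(genome, opponents[j]) for j in idx])
    return fitness


def denoise_step(pop, fitness, sigma):
    """Move each genome toward a locality- and fitness-weighted mean of the
    population, then add annealed Gaussian noise (the 'reverse diffusion'
    step in this sketch)."""
    new_pop = np.empty_like(pop)
    # Softmax-style fitness weights so better individuals dominate the estimate.
    fit_w = np.exp((fitness - fitness.max()) / max(fitness.std(), 1e-8))
    for i, genome in enumerate(pop):
        dists = np.linalg.norm(pop - genome, axis=1)
        loc_w = np.exp(-(dists ** 2) / (2.0 * KERNEL_BW ** 2))
        w = fit_w * loc_w
        target = (w[:, None] * pop).sum(axis=0) / w.sum()
        new_pop[i] = target + sigma * rng.standard_normal(DIM)
    return new_pop


# Two co-evolving populations (e.g. opposing strategy parameterisations).
pop_a = rng.uniform(-np.pi, np.pi, size=(POP, DIM))
pop_b = rng.uniform(-np.pi, np.pi, size=(POP, DIM))

for g in range(GENS):
    sigma = max(SIGMA_MIN, SIGMA0 * (1.0 - g / GENS))   # noise anneals toward 0
    fit_a = evaluate(pop_a, pop_b)
    fit_b = evaluate(pop_b, pop_a)
    pop_a = denoise_step(pop_a, fit_a, sigma)
    pop_b = denoise_step(pop_b, fit_b, sigma)

print("final mean payoff (A vs B):", evaluate(pop_a, pop_b).mean())
```

The locality kernel is the part of this sketch that stands in for the quality-diversity aspect mentioned in the abstract: because each genome is pulled only toward nearby high-fitness genomes rather than a single global elite, distinct high-performing strategies can coexist rather than collapsing to one mode.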