Automated Configuration of Evolutionary Algorithms via Deep Reinforcement Learning for Constrained Multiobjective Optimization

Published: 2025 · Last Modified: 07 Jan 2026 · IEEE Trans. Cybern. 2025 · CC BY-SA 4.0
Abstract: Learning to optimize and automated algorithm design are attracting increasing attention, but they remain in their infancy for constrained multiobjective optimization evolutionary algorithms (CMOEAs). Current learning-assisted CMOEAs are typically crafted by human experts using manually designed techniques, which tend to be overly tuned, ad hoc, and lacking in versatility. To alleviate these limitations, this work proposes transforming the online configuration of a CMOEA into the determination of discrete and continuous parameters, which is then solved by deep reinforcement learning (DRL) techniques. Specifically, the Actor–Critic framework is adapted to determine a continuous factor that defines the environmental selection pressure, and the deep Q-learning technique is adopted to determine the operators for producing offspring. Owing to the properties of DRL, the configured algorithm can accommodate historical experience, current evolutionary dynamics, and future improvements to achieve self-learning. A new CMOEA is proposed based on the automatically configured evolutionary algorithm. Experiments on four challenging benchmarks and 21 real-world problems verify that our method significantly outperforms 11 state-of-the-art methods. The versatility and superiority of the automatically configured environment and operators over handcrafted methods justify the effectiveness of the automated configuration method, demonstrating a promising direction in evolutionary multiobjective optimization.
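To make the two DRL decisions concrete, below is a minimal sketch of the per-generation configuration step the abstract describes: a deep Q-network choosing a reproduction operator (discrete decision) and an actor emitting a selection-pressure factor in (0, 1) (continuous decision). The operator pool, state features, network sizes, and all names (OperatorQNet, PressureActor, configure_generation) are illustrative assumptions, not the paper's actual design; the critic and the training loops are omitted for brevity.

```python
# Hypothetical sketch of DRL-based per-generation configuration of a CMOEA.
# Assumed: a state vector summarizing evolutionary dynamics (e.g., feasibility
# ratio, convergence indicators) and a small pool of candidate operators.
import torch
import torch.nn as nn

OPERATORS = ["sbx_crossover", "de_rand_1", "de_current_to_best"]  # assumed pool
STATE_DIM = 8  # assumed number of state features

class OperatorQNet(nn.Module):
    """Deep Q-network scoring each discrete operator choice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, len(OPERATORS)),
        )

    def forward(self, state):
        return self.net(state)  # one Q-value per operator

class PressureActor(nn.Module):
    """Actor head emitting the continuous selection-pressure factor."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state):
        return torch.sigmoid(self.net(state))  # squash into (0, 1)

def configure_generation(state, q_net, actor, epsilon=0.1):
    """One configuration step: epsilon-greedy operator + pressure factor."""
    with torch.no_grad():
        if torch.rand(1).item() < epsilon:
            op_idx = torch.randint(len(OPERATORS), (1,)).item()
        else:
            op_idx = q_net(state).argmax().item()
        pressure = actor(state).item()
    return OPERATORS[op_idx], pressure

# Demo with a random state vector standing in for real evolutionary dynamics.
state = torch.randn(STATE_DIM)
op, factor = configure_generation(state, OperatorQNet(), PressureActor())
print(f"operator={op}, selection-pressure factor={factor:.3f}")
```

In this reading, the environmental selection routine of the underlying CMOEA would consume the factor to tune its selection pressure, and the chosen operator would generate the next offspring population, with rewards fed back to both networks; those reward definitions and update rules are the paper's contribution and are not reproduced here.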