Speeding Up the NSGA-II via Dynamic Population Sizes

Benjamin Doerr, Martin S. Krejca, Simon Wietheger

Published: 2025 · Last Modified: 06 May 2026 · CoRR 2025 · CC BY-SA 4.0
Abstract: Multi-objective evolutionary algorithms (MOEAs) are among the most widely and successfully applied optimizers for multi-objective problems. However, to store many optimal trade-offs (the Pareto optima) at once, MOEAs are typically run with a large, static population of solution candidates, which can slow down the algorithm. We propose the dynamic NSGA-II (dNSGA-II), which is based on the popular NSGA-II and features a non-static population size. The dNSGA-II starts with a small initial population size of four and doubles it after a user-specified number $\tau$ of function evaluations, up to a maximum size of $\mu$. Via a mathematical runtime analysis, we prove that the dNSGA-II with parameters $\mu \geq 4(n + 1)$ and $\tau \geq \frac{256}{50} e n$ computes the full Pareto front of the \textsc{OneMinMax} benchmark of size $n$ in $O(\log(\mu) \tau + \mu \log(n))$ function evaluations, both in expectation and with high probability. For an optimal choice of $\mu$ and $\tau$, the resulting $O(n \log(n))$ runtime improves the optimal expected runtime of the classic NSGA-II by a factor of $\Theta(n)$. In addition, we show that the parameter $\tau$ can be removed when utilizing concurrent runs of the dNSGA-II. This approach leads to a mild slow-down by a factor of $O(\log(n))$ compared to an optimal choice of $\tau$ for the dNSGA-II, which is still a speed-up of $\Theta(n / \log(n))$ over the classic NSGA-II.
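The population-size schedule described in the abstract can be sketched as a small helper: start at size four and double every $\tau$ function evaluations until the cap $\mu$ is reached. This is a minimal illustration of the schedule as stated in the abstract, not the paper's actual pseudocode; the exact timing of the doublings (e.g., whether they are counted from the start of the run) is an assumption here.

```python
def population_size(evals: int, tau: int, mu: int) -> int:
    """Current dNSGA-II population size after `evals` function evaluations.

    Sketch of the schedule from the abstract: the population starts at 4
    and doubles every `tau` evaluations, capped at the maximum size `mu`.
    (Assumption: doublings occur at every multiple of `tau`.)
    """
    doublings = evals // tau
    return min(mu, 4 * (2 ** doublings))
```

For instance, with `tau = 100` and `mu = 64`, the size grows 4, 8, 16, 32, 64 over the first 400 evaluations and then stays at the cap, matching the abstract's choice of $\mu \geq 4(n+1)$ as an upper bound large enough to hold the full Pareto front.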