Knowledge Learning-Based Dimensionality Reduction for Solving Large-Scale Sparse Multiobjective Optimization Problems

Published: 2025 · Last Modified: 06 Nov 2025 · IEEE Trans. Cybern. 2025 · CC BY-SA 4.0
Abstract: Large-scale sparse multiobjective optimization problems (LSMOPs) are of great significance in practical applications such as critical node detection, feature selection, and pattern mining. Since many LSMOPs are formulated from large datasets, they involve a large number of decision variables, resulting in a huge search space that is challenging to explore efficiently. To rapidly approximate sparse Pareto optimal solutions, several evolutionary algorithms have been proposed that reduce the dimensionality of LSMOPs. However, their adaptability to different LSMOPs remains limited because they rely on fixed dimensionality reduction schemes, which can lead to premature convergence to local optima and inefficient use of function evaluations. To address this issue, a knowledge learning-based dimensionality reduction approach is proposed in this article. First, in the early stages of evolution, the impact of different dimensionality reduction schemes on the sparse distribution of the population is evaluated. Then, a multilayer perceptron is employed to learn the knowledge accumulated during the evolutionary process, constructing a mapping model between the sparse features of the evolutionary process and the candidate dimensionality reduction schemes. Finally, the model recommends the best dimensionality reduction scheme in each generation, achieving a good balance between exploration and exploitation. Experimental evaluations on both benchmark and real-world LSMOPs demonstrate that an evolutionary algorithm incorporating the proposed knowledge learning-based dimensionality reduction approach outperforms most existing evolutionary algorithms.
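The following is a minimal conceptual sketch, not the paper's implementation, of the idea described in the abstract: summarize the population's sparse distribution into a feature vector, log which candidate dimensionality reduction scheme works best in the early generations, train a multilayer perceptron on that log, and then use it to recommend a scheme in later generations. The feature definitions, scheme names, labeling rule, and synthetic training data are all illustrative assumptions.

```python
# Conceptual sketch (assumed details, not the authors' code): an MLP mapping
# sparse features of the evolving population to a candidate dimensionality
# reduction scheme.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

CANDIDATE_SCHEMES = ["aggressive", "moderate", "none"]  # hypothetical schemes


def sparse_features(mask):
    """Summarize the sparse distribution of a binary decision mask
    of shape (pop_size, n_vars)."""
    sparsity = mask.mean()                                  # fraction of active variables
    consensus = np.abs(mask.mean(axis=0) - 0.5).mean() * 2  # agreement on which variables are active
    return np.array([sparsity, consensus])


# --- Early evolution: log (sparse features, best scheme) pairs.
# Here the log is synthetic; in the approach described above it would come
# from evaluating each scheme's impact on the population's sparse distribution.
X_log, y_log = [], []
for _ in range(200):
    mask = (rng.random((50, 1000)) < rng.uniform(0.01, 0.5)).astype(float)
    f = sparse_features(mask)
    # Placeholder labeling rule: denser populations favor stronger reduction.
    y_log.append(0 if f[0] > 0.3 else (1 if f[0] > 0.1 else 2))
    X_log.append(f)

# --- Build the mapping model from the accumulated knowledge.
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(np.vstack(X_log), np.array(y_log))

# --- Later generations: recommend a scheme for the current population.
current_mask = (rng.random((50, 1000)) < 0.2).astype(float)
scheme = CANDIDATE_SCHEMES[int(model.predict(sparse_features(current_mask).reshape(1, -1))[0])]
print("Recommended dimensionality reduction scheme:", scheme)
```

In the actual algorithm, the recommendation would then drive which decision variables are retained in the reduced search space for the next generation; the sketch only shows the feature-to-scheme mapping step.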