Learning Deep Improvement Representation to Accelerate Evolutionary Optimization

15 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Learning Improvement Representation, Accelerated Evolutionary Search, Large-Scale Multiobjective Optimization
Abstract: Evolutionary algorithms excel at versatile optimization of complex (e.g., multiobjective) problems but can be computationally expensive, especially in high-dimensional scenarios, and the stochastic nature of their search may hinder swift convergence to global optima along promising directions. In this study, we train a multilayer perceptron (MLP) to learn the improvement representation of transitioning from poor-performing to better-performing solutions during evolutionary search, facilitating the rapid convergence of the evolutionary population toward global optimality along more promising paths. Then, by iteratively stacking the previously trained lightweight MLP, a larger model can be constructed, enabling it to acquire deep improvement representations (DIR) of solutions. Conducting evolutionary search within the acquired DIR space significantly expedites the population's convergence rate. Finally, the efficacy of DIR-guided search is validated by applying it to two prevailing evolutionary operators: simulated binary crossover and differential evolution. The experimental findings demonstrate its capability to achieve rapid convergence in solving challenging large-scale multiobjective optimization problems.
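The core idea of the abstract can be sketched in a few lines: split a population by fitness into poor and better halves, train a small MLP to map poor solutions toward better ones, and then stack (repeatedly apply) that learned mapping. This is only an illustrative toy, not the paper's actual method; the sphere objective, network size, and training hyperparameters here are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-objective stand-in: minimize the sphere function.
def f(x):
    return np.sum(x**2, axis=-1)

dim, pop_size = 8, 40
pop = rng.uniform(-5, 5, size=(pop_size, dim))

# Build (poor, better) training pairs by splitting the population by fitness.
order = np.argsort(f(pop))
better = pop[order[:pop_size // 2]]
poor = pop[order[pop_size // 2:]]

# One-hidden-layer MLP trained (plain gradient descent on MSE) to map
# poor solutions toward their paired better solutions.
hid = 16
W1 = rng.normal(0, 0.1, (dim, hid)); b1 = np.zeros(hid)
W2 = rng.normal(0, 0.1, (hid, dim)); b2 = np.zeros(dim)

lr = 0.01
for _ in range(2000):
    h = np.tanh(poor @ W1 + b1)           # forward pass
    out = h @ W2 + b2
    err = out - better                     # gradient of 0.5*MSE w.r.t. output
    gW2 = h.T @ err / len(poor); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)         # backprop through tanh
    gW1 = poor.T @ dh / len(poor); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def improve(x):
    """Apply the learned improvement mapping once."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

improved_once = improve(poor)

# "Deep" improvement in the spirit of DIR: stack the same lightweight
# mapping several times (no guarantee off the training distribution).
stacked = poor.copy()
for _ in range(3):
    stacked = improve(stacked)

print(f(poor).mean(), f(improved_once).mean(), f(stacked).mean())
```

In the paper, search operators such as simulated binary crossover and differential evolution then act in the learned representation space rather than on raw decision variables; this sketch only shows the supervised poor-to-better mapping itself.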
Supplementary Material: zip
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 115