Abstract: Evolutionary multitasking (EMT) is an emerging approach for solving multitask optimization problems (MTOPs) and has garnered considerable research interest. Implicit EMT is a significant research branch that utilizes evolution operators to enable knowledge transfer (KT) between tasks. However, current implicit EMT approaches suffer from limited adaptability: they employ only a narrow range of evolution operators and parameter settings, and they make insufficient use of evolutionary-state information when performing KT. As a result, the potential of implicit KT for tackling a variety of MTOPs remains underexploited. To overcome these limitations, we propose a novel learning-to-transfer (L2T) framework that automatically discovers efficient KT policies for the MTOPs at hand. Our framework conceptualizes the KT process as a sequence of strategic decisions made by a learning agent within the EMT process. We propose an action formulation for deciding when and how to transfer, a state representation built from informative features of the evolutionary state, a reward formulation that accounts for gains in convergence and transfer efficiency, and an environment through which the agent interacts with MTOPs. We employ an actor-critic network structure for the agent and learn its policy via proximal policy optimization. The learned agent can be integrated with various evolutionary algorithms, enhancing their ability to address unseen MTOPs. Comprehensive empirical studies on both synthetic and real-world MTOPs, encompassing diverse intertask relationships, function classes, and task distributions, are conducted to validate the proposed L2T framework. The results show a marked improvement in the adaptability and performance of implicit EMT when solving a wide spectrum of unseen MTOPs.
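To make the agent's role concrete, the sketch below (in PyTorch) shows one way an actor-critic network could gate knowledge transfer inside an EMT loop. All names (L2TAgent, make_state), the four state features, and the two-part action (whether to transfer, and a transfer intensity) are illustrative assumptions for this sketch, not the paper's exact formulations.

```python
# A minimal sketch (not the authors' code) of an actor-critic agent that
# decides when and how to perform implicit KT in an EMT generation loop.
# The state features and action heads below are hypothetical examples.
import torch
import torch.nn as nn

class L2TAgent(nn.Module):
    """Actor-critic network: the actor emits KT decisions, the critic a value."""
    def __init__(self, state_dim=4, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh())
        self.transfer_head = nn.Linear(hidden, 2)    # "when": transfer or not
        self.intensity_head = nn.Linear(hidden, 1)   # "how": e.g., crossover rate
        self.value_head = nn.Linear(hidden, 1)       # critic: state value

    def forward(self, state):
        h = self.body(state)
        transfer_logits = self.transfer_head(h)
        intensity = torch.sigmoid(self.intensity_head(h))  # bounded in (0, 1)
        return transfer_logits, intensity, self.value_head(h)

def make_state(gen, max_gen, succ_rate, stagnation, task_gap):
    # Hypothetical evolutionary-state features: search progress, recent KT
    # success rate, a stagnation indicator, and an intertask distance estimate.
    return torch.tensor([gen / max_gen, succ_rate, stagnation, task_gap],
                        dtype=torch.float32)

agent = L2TAgent()
state = make_state(gen=10, max_gen=100, succ_rate=0.3, stagnation=0.1, task_gap=0.5)
logits, intensity, value = agent(state)
do_transfer = torch.distributions.Categorical(logits=logits).sample()
```

In training, trajectories of (state, action, reward) tuples collected while the agent steers EMT runs on a distribution of training MTOPs would be optimized with PPO's clipped surrogate objective; at deployment, the learned policy would gate KT inside a host evolutionary algorithm.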
External IDs: dblp:journals/tcyb/WuHWFZT25