CORE: Collaborative Optimization with Reinforcement Learning and Evolutionary Algorithm for Floorplanning
Keywords: Floorplanning, Evolutionary Algorithms, Reinforcement Learning
Abstract: Floorplanning is the initial step in the physical design process of Electronic Design Automation (EDA), directly influencing subsequent placement, routing, and the final power consumption of the chip.
However, the solution space in floorplanning is vast, and current algorithms often struggle to explore it sufficiently, making them prone to getting trapped in local optima. To achieve efficient floorplanning, we propose **CORE**, a general and effective solution optimization framework that synergizes Evolutionary Algorithms (EAs) and Reinforcement Learning (RL) for high-quality layout search and optimization.
Specifically, we propose the Clustering-based Diversified Evolutionary Search that directly perturbs layouts and evolves them based on novelty and performance. Additionally, we model the floorplanning problem as a sequential decision problem with B*-Tree representation and employ RL for efficient learning.
To efficiently coordinate EAs and RL, we propose the reinforcement-driven mechanism and evolution-guided mechanism.
The former accelerates population evolution through RL, while the latter guides RL learning through EAs. The experimental results on the MCNC and GSRC benchmarks demonstrate that CORE outperforms other strong baselines in terms of wirelength and area utilization metrics, achieving a 12.9% improvement in wirelength. CORE represents the first evolutionary reinforcement learning (ERL) algorithm for floorplanning, surpassing existing RL-based methods. The code is available at https://github.com/yeshenpy/CORE.
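The abstract's two coordination mechanisms can be illustrated with a minimal toy sketch of an ERL loop. This is not the paper's implementation: all names (`fitness`, `novelty`, `rl_propose`, `core_loop`), the permutation encoding of a layout, and the wirelength proxy are illustrative assumptions; the real method uses a B*-Tree representation and a learned RL policy.

```python
# Toy sketch of a CORE-style ERL loop, assuming a permutation encoding of
# module placements and a crude wirelength proxy. Illustrative only.
import random

random.seed(0)
N_MODULES = 8

def fitness(layout):
    # Toy stand-in for negative wirelength: penalize large index jumps.
    return -sum(abs(layout[i] - layout[i + 1]) for i in range(len(layout) - 1))

def novelty(layout, population):
    # Mean Hamming distance to the rest of the population
    # (stands in for the clustering-based diversity measure).
    return sum(sum(a != b for a, b in zip(layout, other))
               for other in population) / max(len(population), 1)

def mutate(layout):
    # Direct layout perturbation: swap two module positions.
    child = layout[:]
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

def rl_propose(elite):
    # Stub for the RL policy; here it merely perturbs the current elite.
    return mutate(elite)

def core_loop(pop_size=10, generations=30):
    population = [random.sample(range(N_MODULES), N_MODULES)
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Evolutionary search: rank by performance plus novelty.
        scored = sorted(population,
                        key=lambda l: fitness(l) + 0.1 * novelty(l, population),
                        reverse=True)
        elite = scored[0]
        # Reinforcement-driven mechanism: the RL policy injects a candidate
        # into the next generation.
        population = [mutate(p) for p in scored[:pop_size - 1]]
        population.append(rl_propose(elite))
        # Evolution-guided mechanism would update the RL policy from the
        # elite layouts here; omitted in this stub.
    return max(population, key=fitness)

best = core_loop()
print(fitness(best))
```

The key structural point is the bidirectional flow: RL-generated layouts enter the population each generation, while elite evolved layouts would, in the full method, serve as training signal for the policy.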
Primary Area: Reinforcement learning (e.g., decision and control, planning, hierarchical RL, robotics)
Submission Number: 27118