See it to Place it: Evolving Macro Placements with Vision Language Models

17 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: macro placement, chip floorplanning, vision language models, reinforcement learning, in-context learning, evolutionary search, spatial reasoning
TL;DR: In-context learning for macro placement in computer chip design
Abstract: We propose using frontier Vision-Language Models (VLMs) for macro placement in chip floorplanning, a complex optimization task that has recently shown promising advancements through machine learning methods. For human designers, macro placement is an inherently visual process that relies on spatial reasoning to arrange components on the chip canvas. Because VLMs exhibit strong reasoning capabilities over visual inputs, we hypothesize that these models can effectively complement existing learning-based approaches. We introduce VeoPlace (Visual Evolutionary Optimization Placement), a novel framework that uses a VLM to guide the actions of a base policy by constraining them to subregions of the chip canvas. The VLM proposals are iteratively optimized through an evolutionary search strategy with respect to resulting placement quality. On open-source benchmarks, VeoPlace establishes a new state-of-the-art for learning-based methods, outperforming the strongest prior approach across all evaluated circuits by reducing wirelength by an average of 10.9% with peak improvements of over 20%. Our approach opens new possibilities for electronic design automation tools that leverage foundation models to solve complex physical design problems.
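The abstract describes an outer loop in which VLM-proposed canvas subregions are refined by evolutionary search against placement quality. The following is a minimal, illustrative sketch of such a loop; all names, the grid size, the mutation scheme, and the wirelength proxy are assumptions for illustration, not the paper's actual implementation (in the real system, proposals would come from a VLM and fitness would come from running the base placement policy inside the proposed subregions).

```python
import random

# Hypothetical sketch of an evolutionary loop over subregion proposals.
# Each individual assigns one canvas grid cell (a subregion) to each macro.

GRID = 8  # assumed discretization of the chip canvas into an 8x8 grid


def propose_subregions(n, n_macros=4):
    """Stand-in for VLM proposals: random subregion assignments."""
    return [[(random.randrange(GRID), random.randrange(GRID))
             for _ in range(n_macros)] for _ in range(n)]


def mutate(individual):
    """Shift one macro's subregion by one cell, clamped to the canvas."""
    child = list(individual)
    i = random.randrange(len(child))
    x, y = child[i]
    child[i] = (min(GRID - 1, max(0, x + random.choice((-1, 1)))),
                min(GRID - 1, max(0, y + random.choice((-1, 1)))))
    return child


def wirelength(individual):
    """Toy quality proxy: Manhattan span of the assigned subregions.
    The real system would measure wirelength of the resulting placement."""
    xs = [x for x, _ in individual]
    ys = [y for _, y in individual]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))


def evolve(generations=50, pop_size=8):
    """Truncation-selection evolutionary search over proposals."""
    population = propose_subregions(pop_size)
    for _ in range(generations):
        population.sort(key=wirelength)          # lower wirelength is better
        survivors = population[: pop_size // 2]  # keep the best half
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return min(population, key=wirelength)


best = evolve()
```

The key design choice mirrored here is that the search operates on the *constraints* handed to the base policy (subregions), not on macro coordinates directly, so each candidate is cheap to describe and easy for a vision-language model to propose from a rendered canvas image.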
Primary Area: applications to robotics, autonomy, planning
Submission Number: 9386