ANALOGXPERT: AUTOMATING ANALOG TOPOLOGY SYNTHESIS BY INCORPORATING CIRCUIT DESIGN EXPERTISE INTO LARGE LANGUAGE MODELS
Keywords: Analog circuit design, subcircuit library, proofreading, CoT, in-context learning
TL;DR: AnalogXpert is a novel LLM prompting framework that automatically generates analog circuit topologies by incorporating analog design expertise.
Abstract: Analog circuits are crucial in modern electronic systems, and automating their design
has attracted significant research interest. One of the major challenges is topology
synthesis, which determines circuit components and their connections. Recent
studies explore large language models (LLMs) for topology synthesis. However,
the scenarios addressed by these studies do not align well with practical applications.
Specifically, existing work uses vague design requirements as input and outputs
an ideal model, but detailed structural requirements and device-level models
are more practical. Moreover, current approaches either formulate topology synthesis
as graph generation or Python code generation, whereas practical topology
design is a complex process that demands extensive design knowledge. In this
work, we propose AnalogXpert, an LLM-based agent that aims to solve the practical
topology synthesis problem by incorporating circuit design expertise into LLMs.
First, we represent analog topology as SPICE code and introduce a subcircuit library
to reduce the design space, in the same manner as experienced designers.
Second, we decompose the problem into two sub-tasks (i.e., block selection and
block connection) using CoT and in-context learning techniques, to
mimic the practical design process. Third, we introduce a proofreading strategy
that allows LLMs to incrementally correct the errors in the initial design, akin to
human designers who iteratively check and adjust the initial topology design to
ensure accuracy. Finally, we construct a high-quality benchmark containing both
real data (30) and synthetic data (2k). AnalogXpert achieves 40% and 23% success
rates on the synthetic and real datasets, respectively, markedly
better than GPT-4o (3% on both datasets).
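To make the two-stage flow described in the abstract concrete, here is a minimal, hypothetical Python sketch (not the paper's implementation): stage 1 selects blocks from a subcircuit library, and stage 2 connects them into a SPICE netlist. The library entries, net names, and selection rule below are illustrative assumptions only; in AnalogXpert both stages would be performed by an LLM.

```python
# Illustrative subcircuit library; real libraries would carry full
# device-level SPICE bodies, not just headers.
SUBCIRCUIT_LIBRARY = {
    "diff_pair":      ".subckt diff_pair inp inn out1 out2 tail",
    "current_mirror": ".subckt current_mirror in out",
    "common_source":  ".subckt common_source in out",
}

def select_blocks(requirement: str) -> list[str]:
    """Stage 1 (stand-in for an LLM call): map a structural
    requirement onto library block names."""
    if "two-stage" in requirement:
        return ["diff_pair", "current_mirror", "common_source"]
    return ["diff_pair", "current_mirror"]

def connect_blocks(blocks: list[str]) -> str:
    """Stage 2 (stand-in for an LLM call): emit subcircuit
    definitions, then instantiate them with shared net names."""
    defs = [SUBCIRCUIT_LIBRARY[b] + "\n.ends" for b in blocks]
    insts = [f"X{i} net{i} net{i + 1} {b}" for i, b in enumerate(blocks)]
    return "\n".join(defs + insts)

netlist = connect_blocks(select_blocks("two-stage amplifier"))
print(netlist)
```

A proofreading stage, as proposed in the paper, would then iteratively check such a netlist (e.g., for dangling nets or missing blocks) and prompt the LLM to correct it.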
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7119