Keywords: Large Language Models (LLMs), Domain-specific adaptation, Spatial reasoning, Semiconductor layout design, Prompt Tuning, In-Context Learning, Neuro-inspired AI, Thought assessment, Adaptive AI systems
TL;DR: SOLOMON, a neuro-inspired LLM reasoning network, enhances foundation models' adaptability for domain-specific tasks, improving performance in semiconductor layout design through better spatial reasoning and knowledge application.
Abstract: This paper presents SOLOMON, a novel neuro-inspired Large Language Model (LLM) reasoning network architecture that enhances the adaptability of foundation models for domain-specific applications. Through a case study in semiconductor layout design, we demonstrate how SOLOMON enables swift adaptation of general-purpose LLMs to specialized tasks by leveraging Prompt Tuning and In-Context Learning techniques. Our experiments reveal the challenges LLMs face in spatial reasoning and in applying domain knowledge to practical problems. Results show that SOLOMON instances significantly outperform their baseline LLM counterparts and achieve performance comparable to that of the state-of-the-art reasoning model o1-preview. We discuss future research directions for developing more adaptive AI systems that can continually learn, adapt, and evolve in response to new information and changing requirements.
Submission Number: 131