LLMs for Analog Circuit Design Continuum (ACDC)
By: Yasaman Esfandiari, Jocelyn Rego, Austin Meyer, and more
Potential Business Impact:
AI helps design analog circuits, but it still makes mistakes.
Large Language Models (LLMs) and transformer architectures have shown impressive reasoning and generation capabilities across diverse natural language tasks. However, their reliability and robustness in real-world engineering domains remain largely unexplored, limiting their practical utility in human-centric workflows. In this work, we investigate the applicability and consistency of LLMs for analog circuit design -- a task requiring domain-specific reasoning, adherence to physical constraints, and structured representations -- focusing on AI-assisted design where humans remain in the loop. We study how different data representations influence model behavior and compare smaller models (e.g., T5, GPT-2) with larger foundation models (e.g., Mistral-7B, GPT-oss-20B) under varying training conditions. Our results highlight key reliability challenges, including sensitivity to data format, instability in generated designs, and limited generalization to unseen circuit configurations. These findings provide early evidence on the limits and potential of LLMs as tools to enhance human capabilities in complex engineering tasks, offering insights into designing reliable, deployable foundation models for structured, real-world applications.
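To make the data-representation question concrete, here is a hypothetical illustration (not taken from the paper): the same RC low-pass filter encoded two ways an LLM might be trained or prompted on, as a flat SPICE-style netlist and as structured JSON. The circuit, component names, and JSON schema are all assumptions chosen for the sketch.

```python
import json

# Hypothetical example: one circuit, two text representations.
# The paper's actual data formats are not specified here.

# 1) SPICE-style flat netlist: compact, order-sensitive, little explicit schema.
spice_netlist = """\
* RC low-pass filter
R1 in out 10k
C1 out 0 1n
.end
"""

# 2) Structured JSON: verbose, but components and connectivity are explicit,
#    which may make constraints easier for a model to respect.
json_netlist = {
    "name": "rc_lowpass",
    "components": [
        {"id": "R1", "type": "resistor", "nodes": ["in", "out"], "value": "10k"},
        {"id": "C1", "type": "capacitor", "nodes": ["out", "0"], "value": "1n"},
    ],
}

print(spice_netlist)
print(json.dumps(json_netlist, indent=2))
```

The same design information is present in both encodings; the abstract's finding is that model behavior can nonetheless be sensitive to which surface form is used.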
Similar Papers
Reasoning Models Reason Well, Until They Don't
Artificial Intelligence
Makes smart computers better at solving hard problems.
LLMs4All: A Review on Large Language Models for Research and Applications in Academic Disciplines
Computation and Language
AI helps study many school subjects better.
Large Language Models (LLMs) for Electronic Design Automation (EDA)
Systems and Control
AI helps build computer chips faster and better.