Score: 1

FedChip: Federated LLM for Artificial Intelligence Accelerator Chip Design

Published: July 23, 2025 | arXiv ID: 2508.13162v1

By: Mahmoud Nazzal, Khoa Nguyen, Deepak Vungarala, and more

Potential Business Impact:

Helps AI design computer chips faster while keeping proprietary design data private.

AI hardware design is advancing rapidly, driven by the promise of design automation to make chip development faster, more efficient, and more accessible to a wide range of users. Among automation tools, Large Language Models (LLMs) offer a promising solution by automating and streamlining parts of the design process. However, their potential is hindered by data privacy concerns and the lack of domain-specific training. To address this, we introduce FedChip, a federated fine-tuning approach that enables multiple chip design parties to collaboratively enhance a shared LLM dedicated to automated hardware design generation while protecting proprietary data. FedChip lets each party train the model on its proprietary local data while improving the shared LLM's performance. To exemplify FedChip's deployment, we create and release APTPU-Gen, a dataset of 30k design variations spanning a range of power, performance, and area (PPA) values. To encourage the LLM to generate designs that achieve a balance across multiple quality metrics, we propose a new design evaluation metric, Chip@k, which statistically evaluates the quality of generated designs against predefined acceptance criteria. Experimental results show that FedChip improves design quality by more than 77% over high-end LLMs while maintaining data privacy.
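The listing does not include the paper's formal definition of Chip@k, but the name and the description above suggest a pass@k-style estimator: given n generated designs of which c satisfy the predefined acceptance criteria (e.g., PPA thresholds), estimate the probability that at least one of k sampled designs is acceptable. The sketch below is a hedged illustration under that assumption; the function name, signature, and example numbers are illustrative, not taken from the paper.

```python
from math import comb

def chip_at_k(n: int, c: int, k: int) -> float:
    """Hypothetical pass@k-style estimator for Chip@k (assumed form).

    n: number of designs sampled from the LLM
    c: number of those designs meeting the predefined acceptance
       criteria (e.g., power/performance/area thresholds)
    k: evaluation budget

    Returns an unbiased estimate of the probability that at least one
    of k randomly drawn designs is acceptable.
    """
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 30 generated design variants, 12 meet the PPA criteria
print(chip_at_k(n=30, c=12, k=5))  # ~0.94
```

Likewise, the abstract describes collaborative fine-tuning without sharing proprietary data but does not state the aggregation rule. A common choice in federated fine-tuning is FedAvg-style weighted averaging of each party's locally fine-tuned parameters (or adapter deltas), sketched below with PyTorch tensors as an assumption rather than the paper's actual method.

```python
import torch

def fedavg(client_states: list[dict], client_sizes: list[int]) -> dict:
    """FedAvg-style aggregation sketch (assumed, not from the paper):
    weight each party's locally fine-tuned parameters by the size of its
    proprietary dataset and average them into the shared model.
    """
    total = sum(client_sizes)
    global_state = {}
    for name in client_states[0]:
        global_state[name] = sum(
            state[name] * (size / total)
            for state, size in zip(client_states, client_sizes)
        )
    return global_state
```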

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
7 pages

Category
Computer Science:
Hardware Architecture