QuArch: A Benchmark for Evaluating LLM Reasoning in Computer Architecture
By: Shvetank Prakash, Andrew Cheng, Arya Tschand, and more
Potential Business Impact:
Tests AI's smarts about how computers work.
The field of computer architecture, which bridges high-level software abstractions and low-level hardware implementations, remains absent from current large language model (LLM) evaluations. To address this gap, we present QuArch (pronounced 'quark'), the first benchmark designed to facilitate the development and evaluation of LLM knowledge and reasoning capabilities specifically in computer architecture. QuArch provides a comprehensive collection of 2,671 expert-validated question-answer (QA) pairs covering various aspects of computer architecture, including processor design, memory systems, and interconnection networks. Our evaluation reveals that while frontier models possess domain-specific knowledge, they struggle with skills that require higher-order thinking in computer architecture. Frontier model accuracies vary widely (from 34% to 72%) on these advanced questions, highlighting persistent gaps in architectural reasoning across analysis, design, and implementation QAs. By holistically assessing fundamental skills, QuArch provides a foundation for building and measuring LLM capabilities that can accelerate innovation in computing systems. With over 140 contributors from 40 institutions, this benchmark represents a community effort to set the standard for architectural reasoning in LLM evaluation.
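In practice, evaluating a model on a QA benchmark like this usually reduces to prompting it with each question and comparing the predicted choice against the expert-validated key. The sketch below illustrates that loop in Python; the record fields (`question`, `choices`, `answer`) and the `ask_model` callback are illustrative assumptions, not QuArch's actual schema or evaluation harness.

```python
# Minimal sketch: accuracy of a model on multiple-choice QA pairs.
# The data format and the `ask_model` callback are assumed for illustration.

from typing import Callable, Dict, List


def accuracy(qa_pairs: List[Dict], ask_model: Callable[[str], str]) -> float:
    """Return the fraction of questions whose predicted letter matches the key."""
    correct = 0
    for qa in qa_pairs:
        # Present the question with lettered answer choices (A, B, C, ...).
        options = "\n".join(
            f"{chr(ord('A') + i)}. {choice}" for i, choice in enumerate(qa["choices"])
        )
        prompt = f"{qa['question']}\n{options}\nAnswer with a single letter."
        # The caller supplies `ask_model`, e.g. a wrapper around an LLM API.
        prediction = ask_model(prompt).strip().upper()[:1]
        correct += prediction == qa["answer"]
    return correct / len(qa_pairs)


if __name__ == "__main__":
    # Toy example with a stubbed "model" that always answers 'B'.
    sample = [
        {
            "question": "Which structure holds cache lines recently evicted from L1?",
            "choices": ["Reorder buffer", "Victim cache", "Branch target buffer", "Store queue"],
            "answer": "B",
        }
    ]
    print(accuracy(sample, lambda prompt: "B"))  # -> 1.0
```

A real harness would add prompt templating per model, answer-extraction robustness, and per-category breakdowns (e.g., analysis vs. design vs. implementation questions), but the scoring logic stays this simple.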
Similar Papers
ARCHE: A Novel Task to Evaluate LLMs on Latent Reasoning Chain Extraction
Artificial Intelligence
Teaches computers to break down science thinking.
ArchXBench: A Complex Digital Systems Benchmark Suite for LLM Driven RTL Synthesis
Hardware Architecture
AI designs complex computer chips automatically.
UQ: Assessing Language Models on Unsolved Questions
Computation and Language
Tests AI on hard, real-world questions.