Score: 3

RADAR: Benchmarking Language Models on Imperfect Tabular Data

Published: June 9, 2025 | arXiv ID: 2506.08249v1

By: Ken Gu, Zhihan Zhang, Kate Lin, and more

BigTech Affiliations: University of Washington

Potential Business Impact:

Helps language models recognize and correctly handle messy real-world data (missing values, outliers, logical inconsistencies), making automated data analysis more trustworthy.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Language models (LMs) are increasingly being deployed to perform autonomous data analyses. However, their data awareness -- the ability to recognize, reason over, and appropriately handle data artifacts such as missing values, outliers, and logical inconsistencies -- remains underexplored. These artifacts are especially common in real-world tabular data and, if mishandled, can significantly compromise the validity of analytical conclusions. To address this gap, we present RADAR, a benchmark for systematically evaluating data-aware reasoning on tabular data. We develop a framework that simulates data artifacts via programmatic perturbations, enabling targeted evaluation of model behavior. RADAR comprises 2,980 table-query pairs grounded in real-world data spanning 9 domains and 5 data artifact types. Beyond artifact handling, RADAR systematically varies table size to study how reasoning performance holds up as tables grow. Our evaluation reveals that, despite decent performance on tables without data artifacts, frontier models degrade significantly when data artifacts are introduced, exposing critical gaps in their capacity for robust, data-aware analysis. Designed to be flexible and extensible, RADAR supports diverse perturbation types and controllable table sizes, offering a valuable resource for advancing tabular reasoning.
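The perturbation framework lends itself to a simple illustration. Below is a minimal sketch, assuming pandas and NumPy, of how artifacts of the kinds the abstract names (missing values, outliers, logical inconsistencies) could be injected programmatically into a clean table. All function names, column names, and magnitudes here are hypothetical, not RADAR's actual implementation.

```python
# Hypothetical sketch of programmatic perturbations in the spirit of the
# paper's framework; not the authors' code.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)

def inject_missing(df: pd.DataFrame, col: str, frac: float = 0.25) -> pd.DataFrame:
    """Blank out a random fraction of one column's values."""
    out = df.copy()
    idx = out.sample(frac=frac, random_state=0).index
    out.loc[idx, col] = np.nan
    return out

def inject_outliers(df: pd.DataFrame, col: str, n: int = 1, scale: float = 10.0) -> pd.DataFrame:
    """Replace n random values with implausibly large ones."""
    out = df.copy()
    idx = rng.choice(out.index, size=min(n, len(out)), replace=False)
    out.loc[idx, col] = out[col].mean() * scale
    return out

def inject_inconsistency(df: pd.DataFrame) -> pd.DataFrame:
    """Create a logical inconsistency: an end date before its start date."""
    out = df.copy()
    i = out.index[0]
    out.loc[i, "end_date"], out.loc[i, "start_date"] = (
        out.loc[i, "start_date"], out.loc[i, "end_date"])
    return out

# Toy clean table, then a perturbed variant. A "data-aware" model should
# notice that a query such as "the mean of `value`" is no longer safe to
# answer naively once artifacts are present.
clean = pd.DataFrame({
    "value": [10.0, 12.0, 11.0, 13.0],
    "start_date": pd.to_datetime(["2024-01-01"] * 4),
    "end_date": pd.to_datetime(["2024-02-01"] * 4),
})
perturbed = inject_inconsistency(inject_outliers(inject_missing(clean, "value"), "value"))
```

Because each perturbation is a pure function of the input table, artifact types can be composed and applied to tables of any size, which is consistent with the abstract's claim that the benchmark supports diverse perturbation types and controllable table sizes.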

Country of Origin
🇺🇸 United States


Page Count
60 pages

Category
Computer Science:
Databases