Score: 4

IDA-Bench: Evaluating LLMs on Interactive Guided Data Analysis

Published: May 23, 2025 | arXiv ID: 2505.18223v2

By: Hanyu Li, Haoyu Liu, Tingyu Zhu, and more

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Tests AI assistants on tricky, step-by-step data analysis problems.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) show promise as data analysis agents, but existing benchmarks overlook the iterative nature of the field, where experts' decisions evolve as they gain deeper insights into the dataset. To address this, we introduce IDA-Bench, a novel benchmark that evaluates LLM agents in multi-round interactive scenarios. Tasks are derived from complex Kaggle notebooks and presented as sequential natural language instructions by an LLM-simulated user. Agent performance is judged by comparing the agent's final numerical output to the human-derived baseline. Initial results show that even state-of-the-art coding agents (such as Claude-3.7-thinking) succeed on fewer than 50% of the tasks, revealing limitations not evident in single-turn tests. This work underscores the need to improve LLMs' multi-round capabilities for building more reliable data analysis agents, and the necessity of balancing instruction following with reasoning.
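To make the evaluation protocol concrete, below is a minimal Python sketch of the multi-round loop the abstract describes: an LLM-simulated user issues instructions one at a time, the agent acts on each, and success is judged by whether the final numerical output matches the human-derived baseline. Every name here (evaluate_task, next_instruction, agent_step) and the numeric tolerance are illustrative assumptions, not the paper's actual harness.

from typing import Callable, Optional

def evaluate_task(
    next_instruction: Callable[[list], Optional[str]],  # LLM-simulated user
    agent_step: Callable[[str, list], float],           # agent runs code, returns a number
    baseline: float,                                    # human-derived reference value
    max_rounds: int = 10,
    tol: float = 1e-6,
) -> bool:
    """Run the multi-round interaction and score the final answer."""
    history: list = []
    result: Optional[float] = None
    for _ in range(max_rounds):
        # The simulated user derives the next instruction from the
        # conversation so far; None signals it has no further steps.
        instruction = next_instruction(history)
        if instruction is None:
            break
        # The agent acts on this instruction and produces a numeric result.
        result = agent_step(instruction, history)
        history.append((instruction, result))
    # The task counts as solved only if the final numerical output
    # matches the human-derived baseline within tolerance.
    return result is not None and abs(result - baseline) <= tol

# Toy usage: a two-turn "analysis" whose final answer should be 42.0.
instructions = iter(["load the data", "report the mean"])
ok = evaluate_task(
    next_instruction=lambda hist: next(instructions, None),
    agent_step=lambda instr, hist: 42.0,
    baseline=42.0,
)
print(ok)  # True

In the actual benchmark the agent step would involve generating and executing analysis code in a sandbox; the injected callables above merely stand in for those components.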

Country of Origin
🇨🇳 🇺🇸 United States, China

Repos / Data Links

Page Count
51 pages

Category
Computer Science:
Computation and Language