IDA-Bench: Evaluating LLMs on Interactive Guided Data Analysis
By: Hanyu Li, Haoyu Liu, Tingyu Zhu, and more
Potential Business Impact:
Tests computers on tricky, step-by-step data problems.
Large Language Models (LLMs) show promise as data analysis agents, but existing benchmarks overlook the iterative nature of real analysis, where an expert's decisions evolve with deeper insight into the dataset. To address this, we introduce IDA-Bench, a novel benchmark that evaluates LLM agents in multi-round interactive scenarios. Tasks are derived from complex Kaggle notebooks and presented as sequential natural language instructions by an LLM-simulated user; agent performance is judged by comparing the agent's final numerical output to the human-derived baseline. Initial results show that even state-of-the-art coding agents (such as Claude-3.7-thinking) succeed on fewer than 50% of the tasks, exposing limitations that single-turn tests do not reveal. These findings underscore the need to strengthen LLMs' multi-round capabilities, and in particular to balance instruction following with reasoning, in order to build more reliable data analysis agents.
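As a rough illustration of the evaluation protocol described above, the sketch below shows how a single multi-round task might be scored: an LLM-simulated user issues instructions one at a time, the agent produces a numerical answer, and the final answer is compared against the human-derived baseline. The function names (`user_model`, `agent`), the `<DONE>` end-of-task marker, the round limit, and the numerical tolerance are illustrative assumptions, not details taken from the IDA-Bench paper.

```python
import math
from typing import Callable, List, Optional


def run_interactive_task(
    user_model: Callable[[List[str]], str],         # hypothetical: simulated user, returns the next instruction
    agent: Callable[[List[str]], Optional[float]],  # hypothetical: agent, returns its current numerical answer
    baseline: float,                                # human-derived reference result for this task
    max_rounds: int = 10,                           # assumed cap on interaction rounds
    rel_tol: float = 1e-3,                          # assumed tolerance for the numerical comparison
) -> bool:
    """Run one multi-round task and check whether the agent's final
    number matches the human-derived baseline (illustrative sketch)."""
    transcript: List[str] = []
    final_answer: Optional[float] = None
    for _ in range(max_rounds):
        instruction = user_model(transcript)  # next natural-language step from the simulated user
        if instruction == "<DONE>":           # assumed end-of-task signal
            break
        transcript.append(instruction)
        final_answer = agent(transcript)      # agent runs its analysis and reports a number
    # The task counts as a success only if the final output matches the baseline.
    return final_answer is not None and math.isclose(final_answer, baseline, rel_tol=rel_tol)
```

The only scoring rule taken from the paper is the comparison of the agent's final numerical output against the human baseline; the rest of the loop is scaffolding to make the sketch self-contained and runnable.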
Similar Papers
Understanding Large Language Models' Ability on Interdisciplinary Research
Computation and Language
Helps computers invent new science ideas.
DataSciBench: An LLM Agent Benchmark for Data Science
Computation and Language
Tests how well AI understands data science tasks.
DSBC : Data Science task Benchmarking with Context engineering
Artificial Intelligence
Tests smart computer helpers for data jobs.