LongDA: Benchmarking LLM Agents for Long-Document Data Analysis
By: Yiyang Li, Zheyuan Zhang, Tianyi Ma, and more
Potential Business Impact:
Helps computers analyze complex data by reading long documentation.
We introduce LongDA, a data analysis benchmark for evaluating LLM-based agents under documentation-intensive analytical workflows. In contrast to existing benchmarks that assume well-specified schemas and inputs, LongDA targets real-world settings in which navigating long documentation and complex data is the primary bottleneck. To this end, we manually curate raw data files, long and heterogeneous documentation, and expert-written publications from 17 publicly available U.S. national surveys, from which we extract 505 analytical queries grounded in real analytical practice. Solving these queries requires agents to first retrieve and integrate key information from multiple unstructured documents before performing multi-step computations and writing executable code, a combination that remains challenging for existing data analysis agents. To support systematic evaluation in this setting, we develop LongTA, a tool-augmented agent framework that enables document access, retrieval, and code execution, and evaluate a range of proprietary and open-source models. Our experiments reveal substantial performance gaps even among state-of-the-art models, highlighting the challenges researchers should consider before applying LLM agents for decision support in real-world, high-stakes analytical settings.
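For readers wanting a concrete picture of what such a tool-augmented workflow looks like, below is a minimal sketch of an agent loop with document access, retrieval, and code-execution tools. The tool names, the `call_llm` stub, and the orchestration logic are illustrative assumptions, not the paper's actual LongTA implementation.

```python
# Minimal sketch of a tool-augmented analysis agent loop (illustrative only;
# not the paper's LongTA implementation).
import json
import subprocess
import tempfile
from pathlib import Path


def read_document(path: str, start: int = 0, length: int = 4000) -> str:
    """Return a window of a long documentation file so it fits in context."""
    text = Path(path).read_text(errors="ignore")
    return text[start:start + length]


def search_documents(query: str, doc_dir: str = "docs") -> list[str]:
    """Naive keyword retrieval over documentation files (stand-in for a real retriever)."""
    hits = []
    for path in Path(doc_dir).glob("**/*.txt"):
        for line in path.read_text(errors="ignore").splitlines():
            if query.lower() in line.lower():
                hits.append(f"{path}: {line.strip()}")
    return hits[:10]


def run_code(code: str) -> str:
    """Execute agent-written analysis code in a subprocess and capture its output."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
    result = subprocess.run(["python", f.name], capture_output=True, text=True, timeout=120)
    return result.stdout + result.stderr


TOOLS = {"read_document": read_document, "search_documents": search_documents, "run_code": run_code}


def call_llm(messages: list[dict]) -> dict:
    """Stub for the underlying model; a real system would call a chat API here."""
    raise NotImplementedError(
        'Plug in a model client that returns {"tool": name, "args": {...}} or {"answer": ...}.'
    )


def solve_query(query: str, max_steps: int = 10) -> str:
    """Let the model alternate between tool calls and reasoning until it answers."""
    messages = [{"role": "user", "content": query}]
    for _ in range(max_steps):
        action = call_llm(messages)
        if "answer" in action:
            return action["answer"]
        observation = TOOLS[action["tool"]](**action["args"])
        messages.append({"role": "tool", "content": json.dumps(observation, default=str)})
    return "No answer within step budget."
```

In this sketch, the model must first locate the relevant documentation passages (retrieval), then write and execute analysis code against the raw data files, which mirrors the retrieve-then-compute workflow the benchmark targets.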
Similar Papers
Unstructured Data Analysis using LLMs: A Comprehensive Benchmark
Databases
Tests how well computers find info in messy text.
IDA-Bench: Evaluating LLMs on Interactive Guided Data Analysis
Computation and Language
Tests computers on tricky, step-by-step data problems.
LLM/Agent-as-Data-Analyst: A Survey
Artificial Intelligence
Computers understand and analyze any kind of data.