LongDA: Benchmarking LLM Agents for Long-Document Data Analysis

Published: January 5, 2026 | arXiv ID: 2601.02598v1

By: Yiyang Li, Zheyuan Zhang, Tianyi Ma, and more

Potential Business Impact:

Helps assess whether AI agents can analyze complex data when the key instructions are buried in long documentation.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We introduce LongDA, a data analysis benchmark for evaluating LLM-based agents under documentation-intensive analytical workflows. In contrast to existing benchmarks that assume well-specified schemas and inputs, LongDA targets real-world settings in which navigating long documentation and complex data is the primary bottleneck. To this end, we manually curate raw data files, long and heterogeneous documentation, and expert-written publications from 17 publicly available U.S. national surveys, from which we extract 505 analytical queries grounded in real analytical practice. Solving these queries requires agents to first retrieve and integrate key information from multiple unstructured documents before performing multi-step computations and writing executable code, which remains challenging for existing data analysis agents. To support systematic evaluation in this setting, we develop LongTA, a tool-augmented agent framework that enables document access, retrieval, and code execution, and we evaluate a range of proprietary and open-source models. Our experiments reveal substantial performance gaps even among state-of-the-art models, highlighting the challenges researchers should consider before applying LLM agents for decision support in real-world, high-stakes analytical settings.
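The abstract describes LongTA as a tool-augmented loop over document access, retrieval, and code execution. A minimal sketch of such a loop is shown below; the tool names, the dispatch scheme, and the toy documentation corpus are illustrative assumptions, not the paper's actual API:

```python
# Minimal sketch of a tool-augmented analysis agent, assuming three tools
# (search, read, code) in the spirit of the framework described above.

def read_doc(path: str, docs: dict) -> str:
    """Tool: return the raw text of one documentation file."""
    return docs.get(path, "")

def search_docs(query: str, docs: dict) -> list:
    """Tool: naive retrieval -- return paths whose text mentions the query."""
    return [p for p, text in docs.items() if query.lower() in text.lower()]

def run_code(code: str) -> dict:
    """Tool: execute generated analysis code and capture its namespace."""
    ns = {}
    exec(code, ns)  # a real agent would sandbox generated code
    return {k: v for k, v in ns.items() if not k.startswith("__")}

def agent_step(action: str, arg, docs: dict):
    """Dispatch one tool call chosen by the model."""
    if action == "search":
        return search_docs(arg, docs)
    if action == "read":
        return read_doc(arg, docs)
    if action == "code":
        return run_code(arg)
    raise ValueError(f"unknown tool: {action}")

# Toy corpus and a fixed trajectory standing in for model-chosen actions.
docs = {
    "codebook.txt": "Variable WTFA_A is the annual sample adult weight.",
    "userguide.txt": "Apply weights before computing any national estimate.",
}
hits = agent_step("search", "weight", docs)        # retrieve relevant docs
note = agent_step("read", hits[0], docs)           # integrate key information
result = agent_step("code", "mean = sum([1.5, 2.0, 0.5]) / 3", docs)
print(hits, round(result["mean"], 2))
```

The point of the sketch is the separation of concerns the benchmark stresses: retrieval over unstructured documents happens before any computation, and the final answer is produced by executing generated code rather than by the model directly.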

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
26 pages

Category
Computer Science:
Digital Libraries