RUST-BENCH: Benchmarking LLM Reasoning on Unstructured Text within Structured Tables

Published: November 6, 2025 | arXiv ID: 2511.04491v1

By: Nikhil Abhyankar, Purvi Chaurasia, Sanchit Kabra, and more

Potential Business Impact:

Tests how well AI models reason over messy, real-world tabular data.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Existing tabular reasoning benchmarks mostly test models on small, uniform tables, underrepresenting the complexity of real-world data and giving an incomplete view of Large Language Models' (LLMs) reasoning abilities. Real tables are long, heterogeneous, and domain-specific, mixing structured fields with free text and requiring multi-hop reasoning across thousands of tokens. To address this gap, we introduce RUST-BENCH, a benchmark of 7966 questions from 2031 real-world tables spanning two domains: i) RB-Science (NSF grant records) and ii) RB-Sports (NBA statistics). Unlike prior work, RUST-BENCH evaluates LLMs jointly across scale, heterogeneity, domain specificity, and reasoning complexity. Experiments with open-source and proprietary models show that LLMs struggle with heterogeneous schemas and complex multi-hop inference, revealing persistent weaknesses in current architectures and prompting strategies. RUST-BENCH establishes a challenging new testbed for advancing tabular reasoning research.
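To make the benchmark's core challenge concrete, here is a minimal sketch of the kind of question RUST-BENCH targets. The record fields, the grant rows, and the question below are invented for illustration, not drawn from the actual RB-Science data or any RUST-BENCH code. The point is the multi-hop chain: filter rows on a free-text field, then aggregate a structured numeric field.

```python
# Hypothetical illustration only: these rows loosely mimic NSF grant records
# (the paper's RB-Science domain); the field names and values are invented.
rows = [
    {
        "award_id": "2100001",      # structured field
        "directorate": "CISE",
        "amount_usd": 499_000,
        "start_year": 2021,
        # free-text field mixed into the same row, as the abstract describes
        "abstract": "Develops methods for multi-hop reasoning over long, "
                    "heterogeneous tables combining numeric and text fields.",
    },
    {
        "award_id": "2100002",
        "directorate": "CISE",
        "amount_usd": 1_250_000,
        "start_year": 2022,
        "abstract": "Studies retrieval-augmented language models for "
                    "scientific question answering.",
    },
]

# Answering requires two hops: a text-match over the free-text abstracts,
# then a numeric aggregation over the matching structured fields.
question = ("What is the total funding of CISE awards whose "
            "abstract mentions 'reasoning'?")

matching = [r for r in rows
            if r["directorate"] == "CISE"
            and "reasoning" in r["abstract"].lower()]
total = sum(r["amount_usd"] for r in matching)

print(f"{question}\nAnswer: ${total:,}")  # -> Answer: $499,000
```

An LLM answering such a question from the raw table must locate the relevant free-text mentions and then carry out the aggregation itself; this combination of heterogeneity and multi-hop inference is precisely where the experiments find current models struggling.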

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
26 pages

Category
Computer Science: Computation and Language