RUST-BENCH: Benchmarking LLM Reasoning on Unstructured Text within Structured Tables
By: Nikhil Abhyankar, Purvi Chaurasia, Sanchit Kabra, and others
Potential Business Impact:
Tests how well AI models reason over messy, real-world tables.
Existing tabular reasoning benchmarks mostly test models on small, uniform tables, underrepresenting the complexity of real-world data and giving an incomplete view of Large Language Models' (LLMs) reasoning abilities. Real tables are long, heterogeneous, and domain-specific, mixing structured fields with free text and requiring multi-hop reasoning across thousands of tokens. To address this gap, we introduce RUST-BENCH, a benchmark of 7966 questions from 2031 real-world tables spanning two domains: i) RB-Science (NSF grant records) and ii) RB-Sports (NBA statistics). Unlike prior work, RUST-BENCH evaluates LLMs jointly across scale, heterogeneity, domain specificity, and reasoning complexity. Experiments with open-source and proprietary models show that LLMs struggle with heterogeneous schemas and complex multi-hop inference, revealing persistent weaknesses in current architectures and prompting strategies. RUST-BENCH establishes a challenging new testbed for advancing tabular reasoning research.
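To make the task concrete, below is a minimal sketch of how a RUST-BENCH-style example might be represented and serialized into an LLM prompt. The field names, schema, and example values are illustrative assumptions for this sketch, not the benchmark's actual data format.

```python
# Hypothetical representation of one RUST-BENCH-style example:
# a heterogeneous table (structured fields mixed with free text)
# paired with a multi-hop question and a gold answer.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class TableExample:
    domain: str                 # e.g. "RB-Science" or "RB-Sports" (assumed labels)
    columns: List[str]          # mix of structured and free-text columns
    rows: List[Dict[str, str]]  # each row maps column name -> cell value
    question: str               # multi-hop question over the table
    answer: str                 # gold answer


def to_prompt(ex: TableExample) -> str:
    """Serialize the table as markdown and append the question."""
    header = "| " + " | ".join(ex.columns) + " |"
    divider = "| " + " | ".join("---" for _ in ex.columns) + " |"
    body = "\n".join(
        "| " + " | ".join(row.get(c, "") for c in ex.columns) + " |"
        for row in ex.rows
    )
    return (
        f"Domain: {ex.domain}\n\n{header}\n{divider}\n{body}\n\n"
        f"Question: {ex.question}\nAnswer:"
    )


# Toy usage with invented values (not real NSF grant records).
example = TableExample(
    domain="RB-Science",
    columns=["Award ID", "Institution", "Abstract"],
    rows=[
        {"Award ID": "000001", "Institution": "Example University",
         "Abstract": "Studies reasoning over long, heterogeneous tables..."},
    ],
    question="Which institution received the award whose abstract mentions reasoning?",
    answer="Example University",
)
print(to_prompt(example))
```

The prompt string produced here would then be sent to an open-source or proprietary LLM and its output compared against the gold answer; the serialization format (markdown tables) is a common convention but only an assumption about how the benchmark is evaluated.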
Similar Papers
T2R-bench: A Benchmark for Generating Article-Level Reports from Real World Industrial Tables
Computation and Language
Helps computers turn messy tables into clear reports.
RiddleBench: A New Generative Reasoning Benchmark for LLMs
Computation and Language
Tests LLMs' generative reasoning and finds they struggle.