Score: 2

Hierarchical structure understanding in complex tables with VLLMs: a benchmark and experiments

Published: November 11, 2025 | arXiv ID: 2511.08298v1

By: Luca Bindini, Simone Giovannini, Simone Marinai and more

Potential Business Impact:

Vision language models can infer the hierarchical structure of complex scientific tables without specialized preprocessing.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This work investigates the ability of Vision Large Language Models (VLLMs) to understand and interpret the structure of tables in scientific articles. Specifically, we explore whether VLLMs can infer the hierarchical structure of tables without additional processing. As the basis for our experiments we use the PubTables-1M dataset, a large-scale corpus of scientific tables. From this dataset, we extract a subset of tables that we introduce as Complex Hierarchical Tables (CHiTab): a benchmark collection of complex tables containing hierarchical headings. We adopt a series of prompt engineering strategies to probe the models' comprehension capabilities, experimenting with various prompt formats and writing styles. Multiple state-of-the-art open-weight VLLMs are evaluated on the benchmark, first in their off-the-shelf versions and then after fine-tuning some of them on our task. We also measure human performance on the task for a small set of tables and compare it with that of the evaluated VLLMs. The experiments support our intuition that generic VLLMs, not explicitly designed for table structure understanding, can perform this task. This study provides insights into the potential and limitations of VLLMs for processing complex tables and offers guidance for future work on integrating structured data understanding into general-purpose VLLMs.
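The abstract does not reproduce the paper's actual prompts, only that "various prompt formats and writing styles" were tried. As an illustration of the general probing setup, here is a minimal hypothetical sketch of building a structure-elicitation prompt and parsing a model's JSON reply into a header hierarchy; the model call itself is omitted, and all names and the prompt wording are assumptions, not the paper's method:

```python
import json

def build_structure_prompt(task_hint: str) -> str:
    """Assemble an instruction asking a VLLM to describe a table's
    column-header hierarchy as JSON (illustrative wording only)."""
    return (
        "You are given an image of a scientific table.\n"
        f"{task_hint}\n"
        "Return the column-header hierarchy as JSON, where each "
        "top-level header maps to a list of its sub-headers "
        "(an empty list if it has none)."
    )

def parse_hierarchy(model_reply: str) -> dict:
    """Parse the model's JSON reply into {header: [sub-headers]}."""
    hierarchy = json.loads(model_reply)
    # Normalize every value to a list of strings.
    return {k: list(v) for k, v in hierarchy.items()}

# Mocked model reply: "Accuracy" has two sub-headers, "Runtime" none.
reply = '{"Accuracy": ["top-1", "top-5"], "Runtime": []}'
print(parse_hierarchy(reply))
```

Emitting a machine-checkable format such as JSON makes it straightforward to score the model's predicted hierarchy against the CHiTab ground-truth annotations, whatever the exact prompt phrasing.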

Country of Origin
🇮🇹 Italy

Repos / Data Links

Page Count
15 pages

Category
Computer Science: Computation and Language