Hierarchical structure understanding in complex tables with VLLMs: a benchmark and experiments
By: Luca Bindini, Simone Giovannini, Simone Marinai, and more
Potential Business Impact:
Computers can now understand complex science tables.
This work investigates the ability of Vision Large Language Models (VLLMs) to understand and interpret the structure of tables in scientific articles. Specifically, we explore whether VLLMs can infer the hierarchical structure of tables without additional processing. As a basis for our experiments we use the PubTables-1M dataset, a large-scale corpus of scientific tables. From this dataset, we extract a subset of tables that we introduce as Complex Hierarchical Tables (CHiTab): a benchmark collection of complex tables containing hierarchical headings. We adopt a series of prompt engineering strategies to probe the models' comprehension capabilities, experimenting with various prompt formats and writing styles. Multiple state-of-the-art open-weights VLLMs are evaluated on the benchmark, first off the shelf and then after fine-tuning some of them on our task. We also measure human performance on the task on a small set of tables and compare it with that of the evaluated VLLMs. The experiments support our intuition that generic VLLMs, not explicitly designed for understanding table structure, can perform this task. This study provides insights into the potential and limitations of VLLMs for processing complex tables and offers guidance for future work on integrating structured data understanding into general-purpose VLLMs.
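As a rough illustration of the kind of prompt-engineering probe the abstract describes, the sketch below builds a prompt asking a VLLM to emit a table's column-header hierarchy as an indented outline, and parses such a reply into a nested structure. The prompt wording, the outline response format, and the parsing step are illustrative assumptions, not the authors' actual protocol or evaluation metric.

```python
def build_prompt() -> str:
    # Hypothetical prompt for a VLLM given a table image; the wording and
    # requested output format are assumptions, not the paper's prompts.
    return (
        "Look at the table in the image. List its column headers as an "
        "indented outline, using two spaces per nesting level, so that "
        "each sub-header appears under its parent header."
    )

def parse_hierarchy(outline: str) -> dict:
    """Parse a two-space-indented outline (the format requested above)
    into a nested dict mapping each header to its sub-headers."""
    root: dict = {}
    stack = [(-1, root)]  # (indent level, container) pairs
    for line in outline.splitlines():
        if not line.strip():
            continue
        indent = (len(line) - len(line.lstrip(" "))) // 2
        # Pop back to the nearest ancestor with a shallower indent.
        while stack and stack[-1][0] >= indent:
            stack.pop()
        node: dict = {}
        stack[-1][1][line.strip()] = node
        stack.append((indent, node))
    return root

# Example: a hypothetical model reply with one nested header group.
reply = "Results\n  Precision\n  Recall\nModel"
print(parse_hierarchy(reply))
# {'Results': {'Precision': {}, 'Recall': {}}, 'Model': {}}
```

A structured target like this nested dict could then be compared against the ground-truth header tree from the CHiTab annotations, though the paper's actual answer format and scoring may differ.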
Similar Papers
Tabular Data Understanding with LLMs: A Survey of Recent Advances and Challenges
Computation and Language
Helps computers understand all kinds of tables.
Table as a Modality for Large Language Models
Computation and Language
Helps computers understand charts and tables better.
Information Extraction From Fiscal Documents Using LLMs
Computation and Language
Lets computers understand government money reports.