TALENT: Table VQA via Augmented Language-Enhanced Natural-text Transcription
By: Guo Yutong, Wanying Wang, Yue Wu, and more
Potential Business Impact:
Helps computers answer questions about tables in images using smaller, cheaper models.
Table Visual Question Answering (Table VQA) is typically addressed by large vision-language models (VLMs). While such models can answer directly from images, they often miss fine-grained details unless scaled to very large sizes, which are computationally prohibitive, especially for mobile deployment. A lighter alternative is to have a small VLM perform OCR and then use a large language model (LLM) to reason over structured outputs such as Markdown tables. However, these representations are not naturally optimized for LLMs and still introduce substantial errors. We propose TALENT (Table VQA via Augmented Language-Enhanced Natural-text Transcription), a lightweight framework that leverages dual representations of tables. TALENT prompts a small VLM to produce both OCR text and natural language narration, then combines them with the question for reasoning by an LLM. This reframes Table VQA as an LLM-centric multimodal reasoning task, where the VLM serves as a perception-narration module rather than a monolithic solver. Additionally, we construct ReTabVQA, a more challenging Table VQA dataset requiring multi-step quantitative reasoning over table images. Experiments show that TALENT enables a small VLM-LLM combination to match or surpass a single large VLM at significantly lower computational cost on both public datasets and ReTabVQA.
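A minimal sketch of the pipeline the abstract describes, assuming placeholder small_vlm and llm callables; the prompts and interfaces here are illustrative assumptions, not the paper's actual implementation:

```python
# Sketch of the TALENT pipeline: a small VLM perceives and narrates the table,
# and an LLM does the reasoning. `small_vlm` and `llm` are hypothetical
# callables standing in for real model APIs.

def talent_answer(table_image, question, small_vlm, llm):
    # Step 1: the small VLM transcribes the table as structured OCR text
    # (e.g., a Markdown-style table).
    ocr_text = small_vlm(
        image=table_image,
        prompt="Transcribe this table as structured text.",
    )

    # Step 2: the same VLM narrates the table in natural language,
    # the "language-enhanced natural-text transcription" of the title.
    narration = small_vlm(
        image=table_image,
        prompt="Describe the contents of this table in natural language.",
    )

    # Step 3: the LLM reasons over both representations plus the question.
    answer = llm(
        "Table (OCR):\n" + ocr_text + "\n\n"
        "Table (narration):\n" + narration + "\n\n"
        "Question: " + question + "\nAnswer:"
    )
    return answer
```

In this framing the VLM is only a perception-narration module, so it can stay small, while the heavy multi-step reasoning is delegated to the LLM over text.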
Similar Papers
MTabVQA: Evaluating Multi-Tabular Reasoning of Language Models in Visual Space
CV and Pattern Recognition
Helps computers understand tables in pictures.
Agentic LLMs for Question Answering over Tabular Data
Computation and Language
Answers questions from complex tables using language-model agents.
M3TQA: Massively Multilingual Multitask Table Question Answering
Computation and Language
Helps computers understand tables in many languages.