Score: 2

Utilizing Training Data to Improve LLM Reasoning for Tabular Understanding

Published: August 26, 2025 | arXiv ID: 2508.18676v1

By: Chufan Gao, Jintai Chen, Jimeng Sun

Potential Business Impact:

Helps LLM-based tools answer questions about data tables more accurately, without task-specific finetuning.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Automated tabular understanding and reasoning are essential tasks for data scientists. Recently, large language models (LLMs) have become increasingly prevalent in tabular reasoning tasks. Previous work focuses on (1) finetuning LLMs using labeled data or (2) training-free prompting of LLM agents with chain-of-thought (CoT). Finetuning offers dataset-specific learning at the cost of generalizability; training-free prompting is highly generalizable but does not take full advantage of training data. In this paper, we propose a novel prompting-based reasoning approach, Learn then Retrieve: LRTab, which integrates the benefits of both by retrieving relevant information learned from training data. We first use prompting to obtain CoT responses over the training data. For incorrect CoTs, we prompt the LLM to predict Prompt Conditions that would avoid the error, learning insights from the data. We then validate the effectiveness of these Prompt Conditions on validation data. Finally, at inference time, we retrieve the most relevant Prompt Conditions as additional context for table understanding. Comprehensive experiments on WikiTQ and TabFact show that LRTab is interpretable, cost-efficient, and can outperform previous baselines in tabular reasoning.
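
The abstract outlines a two-phase pipeline (learn Prompt Conditions from training errors, then retrieve them at inference). The following is a minimal Python sketch of that flow, not the authors' implementation: the `call_llm` and `check_answer` callables, the `PromptCondition` record, and the token-overlap retriever are illustrative assumptions, and the paper's validation step on held-out data is omitted for brevity.

```python
from dataclasses import dataclass

@dataclass
class PromptCondition:
    table_summary: str   # short snippet of the training table the condition came from
    condition: str       # LLM-proposed insight that would have avoided the earlier error

def learn_conditions(train_examples, call_llm, check_answer):
    """Phase 1: run CoT prompting over training examples; for incorrect CoTs,
    ask the LLM to state a corrective Prompt Condition."""
    conditions = []
    for ex in train_examples:
        cot = call_llm(
            f"Table:\n{ex['table']}\nQuestion: {ex['question']}\nThink step by step."
        )
        if not check_answer(cot, ex["answer"]):
            cond = call_llm(
                "The reasoning below reached a wrong answer.\n"
                f"Reasoning: {cot}\nGold answer: {ex['answer']}\n"
                "State one short condition or insight that would have avoided this error."
            )
            conditions.append(PromptCondition(ex["table"][:200], cond))
    return conditions

def retrieve(conditions, table, k=3):
    """Phase 2 (inference): rank stored conditions by a toy token-overlap
    similarity between their source tables and the test table."""
    def sim(a, b):
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / (len(ta | tb) or 1)
    ranked = sorted(conditions, key=lambda c: sim(c.table_summary, table), reverse=True)
    return ranked[:k]

def answer(table, question, conditions, call_llm):
    """Prepend the retrieved Prompt Conditions as extra context for the test question."""
    hints = "\n".join(f"- {c.condition}" for c in retrieve(conditions, table))
    return call_llm(
        f"Conditions to keep in mind:\n{hints}\n\nTable:\n{table}\nQuestion: {question}"
    )
```

In this sketch the retrieval step is a stand-in; the paper's actual retriever and condition-validation procedure are described in the full text.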

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
21 pages

Category
Computer Science:
Machine Learning (CS)