Evaluating Latent Knowledge of Public Tabular Datasets in Large Language Models
By: Matteo Silvestri, Flavio Giorgi, Fabrizio Silvestri and more
Potential Business Impact:
Computers might be cheating on tests by memorizing the answers.
Large Language Models (LLMs) are increasingly evaluated on their ability to reason over structured data, yet such assessments often overlook a crucial confound: dataset contamination. In this work, we investigate whether LLMs exhibit prior knowledge of widely used tabular benchmarks such as Adult Income, Titanic, and others. Through a series of controlled probing experiments, we reveal that contamination effects emerge exclusively for datasets containing strong semantic cues, for instance meaningful column names or interpretable value categories. In contrast, when such cues are removed or randomized, performance sharply declines to near-random levels. These findings suggest that LLMs' apparent competence on tabular reasoning tasks may, in part, reflect memorization of publicly available datasets rather than genuine generalization. We discuss implications for evaluation protocols and propose strategies to disentangle semantic leakage from authentic reasoning ability in future LLM assessments.
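The abstract describes probes that remove or randomize a benchmark's semantic cues before querying the model. The sketch below is not the authors' code; it is a minimal illustration, assuming a pandas DataFrame loaded from a hypothetical local copy of a benchmark, of how column names and categorical value labels could be anonymized so that any remaining LLM accuracy cannot come from memorized public versions of the data.

```python
# Minimal sketch of a semantic-cue-stripping probe (assumption: not the paper's code).
import pandas as pd
import numpy as np

rng = np.random.default_rng(0)

def strip_semantic_cues(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of df with generic column names and recoded categorical values."""
    out = df.copy()
    # Replace meaningful column names (e.g. "age", "education") with neutral identifiers.
    out.columns = [f"col_{i}" for i in range(out.shape[1])]
    # Map each categorical value to an arbitrary token so labels such as
    # "Male"/"Female" or "1st class" no longer carry recognizable meaning.
    for col in out.select_dtypes(include=["object", "category"]).columns:
        categories = out[col].astype("category").cat.categories
        mapping = {c: f"v{j}" for j, c in enumerate(rng.permutation(categories))}
        out[col] = out[col].map(mapping)
    return out

# Hypothetical usage with a local copy of the Adult Income data:
# adult = pd.read_csv("adult.csv")
# probe_df = strip_semantic_cues(adult)
# Prompt the LLM with rows from probe_df and compare accuracy against prompts
# built from the original rows; a sharp drop would point to semantic leakage
# (memorization of the public dataset) rather than genuine tabular reasoning.
```

One design choice worth noting: the mapping is randomized per column rather than fixed, so the model cannot rely on a conventional encoding it may have seen during pretraining.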
Similar Papers
How well do LLMs reason over tabular data, really?
Artificial Intelligence
Computers struggle with messy real-world data tables.
A Note on Statistically Accurate Tabular Data Generation Using Large Language Models
Machine Learning (CS)
Makes fake computer data more like real data.
Utilizing Training Data to Improve LLM Reasoning for Tabular Understanding
Machine Learning (CS)
Helps computers understand data tables better.