LLM-Symbolic Integration for Robust Temporal Tabular Reasoning
By: Atharv Kulkarni, Kushagra Dixit, Vivek Srikumar, and others
Potential Business Impact:
Helps computers answer questions from tables better.
Temporal tabular question answering presents a significant challenge for Large Language Models (LLMs), requiring robust reasoning over structured data, a task where traditional prompting methods often fall short. These methods suffer from memorization, sensitivity to table size, and reduced performance on complex queries. To overcome these limitations, we introduce TempTabQA-C, a synthetic dataset designed for systematic and controlled evaluation, alongside a symbolic intermediate representation that transforms tables into database schemas. This structured approach allows LLMs to generate and execute SQL queries, improving generalization and mitigating biases. By incorporating adaptive few-shot prompting with contextually tailored examples, our method achieves superior robustness, scalability, and performance. Experimental results consistently show improvements across these key challenges, setting a new benchmark for robust temporal reasoning with LLMs.
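To illustrate the idea of a symbolic intermediate representation, here is a minimal sketch (not the paper's actual pipeline): a semi-structured, infobox-style table is flattened into a small SQLite schema, and a temporal question is answered by executing an SQL query of the kind an LLM would be prompted to produce. The table contents, schema, and query are illustrative assumptions.

```python
import sqlite3

# Hypothetical infobox-style table flattened into attribute/value rows
# with start and end dates (illustrative data, not from the paper).
rows = [
    ("position", "Senator", "2005-01-03", "2008-11-16"),
    ("position", "President", "2009-01-20", "2017-01-20"),
]

# Symbolic intermediate representation: the table becomes a database schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE infobox (attribute TEXT, value TEXT, "
    "start_date TEXT, end_date TEXT)"
)
conn.executemany("INSERT INTO infobox VALUES (?, ?, ?, ?)", rows)

# For a question like "What position was held in mid-2010?", an LLM would
# emit a query such as this; executing it symbolically grounds the answer
# in the table instead of relying on the model's memorized knowledge.
query = (
    "SELECT value FROM infobox "
    "WHERE attribute = 'position' "
    "AND start_date <= '2010-06-01' AND end_date >= '2010-06-01'"
)
print(conn.execute(query).fetchone()[0])  # -> President
```

ISO-8601 date strings compare correctly under plain string ordering, which is why the range check works without a date type; a real system would still need to normalize the raw table values into that form.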
Similar Papers
TransientTables: Evaluating LLMs' Reasoning on Temporally Evolving Semi-structured Tables
Computation and Language
Helps computers understand how things change over time.
Agentic LLMs for Question Answering over Tabular Data
Computation and Language
Answers questions from complex tables using smart computer language.
Table as a Modality for Large Language Models
Computation and Language
Helps computers understand charts and tables better.