A Study into Investigating Temporal Robustness of LLMs
By: Jonas Wallat, Abdelrahman Abdallah, Adam Jatowt, et al.
Potential Business Impact:
Helps computers understand time better when answering questions.
Large Language Models (LLMs) encapsulate a surprising amount of factual world knowledge. However, their performance on temporal questions and historical knowledge is limited because they often cannot understand temporal scope and orientation, or neglect the temporal aspect altogether. In this study, we aim to measure precisely how robust LLMs are at question answering, based on their ability to process temporal information and perform tasks requiring temporal reasoning and temporal factual knowledge. Specifically, we design eight time-sensitive robustness tests for factual information to check the sensitivity of six popular LLMs in the zero-shot setting. Overall, we find that LLMs lack temporal robustness, especially with respect to temporal reformulations and the use of different granularities of temporal references. We show how a selection of these eight tests can be applied automatically to judge a model's temporal robustness for user questions on the fly. Finally, we apply the findings of this study to improve temporal QA performance by up to 55 percent.
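To make the idea of a temporal robustness test concrete, here is a minimal sketch (not the authors' code) of one such probe: ask the same time-scoped question in several reformulations and at different date granularities, then measure how consistently the model answers. The helper names `temporal_variants` and `consistency`, and the `ask_llm` callable standing in for any zero-shot LLM call, are illustrative assumptions; the paper's actual tests and scoring may differ.

```python
# Sketch of a temporal-robustness probe: same fact, varied temporal phrasing.
# `ask_llm` is a hypothetical stand-in for any zero-shot LLM call.

from collections import Counter
from typing import Callable, List

def temporal_variants(role: str, year: int, month: str) -> List[str]:
    """Same underlying fact, asked with varied phrasing and granularity."""
    return [
        f"Who was {role} in {year}?",                  # year granularity
        f"Who was {role} in {month} {year}?",          # month granularity
        f"In {year}, who served as {role}?",           # syntactic reformulation
        f"As of {month} {year}, who was the {role}?",  # temporal orientation
    ]

def consistency(ask_llm: Callable[[str], str], questions: List[str]) -> float:
    """Fraction of variants agreeing with the majority answer (1.0 = robust)."""
    answers = [ask_llm(q).strip().lower() for q in questions]
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / len(answers)

# Usage: a temporally robust model answers every variant identically.
# score = consistency(ask_llm, temporal_variants("chancellor of Germany", 2010, "March"))
```

A low consistency score on such variants is the kind of signal that could flag a user question, on the fly, as one where the model's temporal knowledge is unreliable.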
Similar Papers
Question Answering under Temporal Conflict: Evaluating and Organizing Evolving Knowledge with LLMs
Computation and Language
Helps computers remember and use new facts.
On the Temporal Question-Answering Capabilities of Large Language Models Over Anonymized Data
Computation and Language
Helps computers understand time in new information.
Temporal Referential Consistency: Do LLMs Favor Sequences Over Absolute Time References?
Computation and Language
Helps AI remember facts correctly over time.