TComQA: Extracting Temporal Commonsense from Text
By: Lekshmi R Nair, Arun Sankar, Koninika Pal
Potential Business Impact:
Helps computers understand how long things take.
Understanding events necessitates grasping their temporal context, which is often not explicitly stated in natural language. For example, it is not trivial for a machine to infer that a museum tour may last a few hours but cannot take months. Recent studies indicate that even advanced large language models (LLMs) struggle to generate text that requires temporal commonsense reasoning, because such knowledge is rarely stated explicitly in text. Automatically mining temporal commonsense for events therefore enables the creation of more robust language models. In this work, we investigate the capacity of LLMs to extract temporal commonsense from text and evaluate multiple experimental setups to assess their effectiveness. We propose a temporal commonsense extraction pipeline that leverages LLMs to automatically mine temporal commonsense, and use it to construct TComQA, a dataset derived from the SAMSum and RealNews corpora. TComQA has been validated through crowdsourcing and achieves over 80% precision in extracting temporal commonsense. A model trained on TComQA also outperforms an LLM fine-tuned on an existing temporal question-answering dataset.
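The abstract describes a pipeline that prompts an LLM for event durations and turns the answers into QA pairs. A minimal sketch of that idea is below; the paper's actual prompts and model are not given here, so the prompt wording, the `query_llm` stub, and all function names are illustrative assumptions, not the authors' implementation.

```python
import re

# Units we accept when parsing a duration answer (assumed inventory).
UNITS = {"seconds", "minutes", "hours", "days", "weeks", "months", "years"}

def build_duration_prompt(event: str) -> str:
    """Prompt asking an LLM for the typical duration of an event (illustrative wording)."""
    return (
        f"How long does the following event typically take?\n"
        f"Event: {event}\n"
        "Answer with a number and a unit (e.g. '2 hours')."
    )

def parse_duration(response: str):
    """Extract (value, unit) from free-text LLM output; return None if unparseable."""
    m = re.search(
        r"(\d+(?:\.\d+)?)\s*(seconds?|minutes?|hours?|days?|weeks?|months?|years?)",
        response, re.IGNORECASE,
    )
    if not m:
        return None
    value = float(m.group(1))
    unit = m.group(2).lower().rstrip("s") + "s"   # normalize to plural form
    return (value, unit) if unit in UNITS else None

def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM call -- a canned answer stands in for the model."""
    return "A museum tour usually takes about 2 hours."

def mine_temporal_commonsense(event: str) -> dict:
    """One extraction step: event -> prompt -> LLM answer -> parsed duration."""
    answer = query_llm(build_duration_prompt(event))
    return {"event": event, "duration": parse_duration(answer)}

record = mine_temporal_commonsense("a museum tour")
print(record["duration"])  # -> (2.0, 'hours')
```

In a full pipeline, candidate events would first be extracted from corpus sentences (e.g. SAMSum dialogues) and the parsed durations validated, for instance by crowdsourcing as the paper does.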
Similar Papers
HistoryBankQA: Multilingual Temporal Question Answering on Historical Events
Computation and Language
Helps computers understand history across many languages.
Towards Temporal Knowledge-Base Creation for Fine-Grained Opinion Analysis with Language Models
Computation and Language
Helps computers track opinions over time.
Question Answering under Temporal Conflict: Evaluating and Organizing Evolving Knowledge with LLMs
Computation and Language
Helps computers remember and use new facts.