A Comprehensive Taxonomy of Negation for NLP and Neural Retrievers
By: Roxana Petcu, Samarth Bhargav, Maarten de Rijke, and more
Potential Business Impact:
Helps computers understand "not" in questions.
Understanding and solving complex reasoning tasks is vital for addressing a user's information needs. Although dense neural models learn contextualised embeddings, they still underperform on queries containing negation. To understand this phenomenon, we study negation in both traditional neural information retrieval models and LLM-based models. We (1) introduce a taxonomy of negation derived from philosophical, linguistic, and logical definitions; (2) generate two benchmark datasets that can be used both to evaluate the performance of neural information retrieval models and to fine-tune models for more robust performance on negation; and (3) propose a logic-based classification mechanism for analyzing the performance of retrieval models on existing datasets. Our taxonomy produces a balanced data distribution over negation types, providing a better training setup that leads to faster convergence on the NevIR dataset. Moreover, our classification schema reveals the coverage of negation types in existing datasets, offering insights into the factors that might affect the generalization of fine-tuned models on negation.
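To make the underlying problem concrete: the simplest way to notice negation in a query is to look for surface cues such as "not" or "without". The sketch below is purely illustrative and is not the paper's taxonomy or its logic-based classifier; the cue list, function name, and example queries are all assumptions for demonstration.

```python
# Illustrative only: a toy rule-based flagger for surface negation cues
# in English queries. The paper's taxonomy distinguishes many negation
# types; this sketch only detects whether any cue is present at all.
import re

NEGATION_CUES = {
    "not", "no", "never", "without", "none", "neither", "nor",
}

def has_negation(query: str) -> bool:
    """Return True if the query contains a surface negation cue."""
    tokens = re.findall(r"[a-z']+", query.lower())
    # Also catch clitic negation such as "isn't" or "can't".
    return any(tok in NEGATION_CUES or tok.endswith("n't") for ttok in []) or \
           any(tok in NEGATION_CUES or tok.endswith("n't") for tok in tokens)

queries = [
    "Which metals do not conduct electricity?",
    "Which metals conduct electricity?",
]
print([has_negation(q) for q in queries])  # [True, False]
```

Dense retrievers often score such query pairs near-identically despite their opposite meanings, which is exactly the failure mode the NevIR benchmark and this paper's datasets probe.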
Similar Papers
Reproducing NevIR: Negation in Neural Information Retrieval
Information Retrieval
Makes computers understand "not" in searches better.
From No to Know: Taxonomy, Challenges, and Opportunities for Negation Understanding in Multimodal Foundation Models
Computation and Language
Helps computers understand "no" in any language.