Thunder-NUBench: A Benchmark for LLMs' Sentence-Level Negation Understanding
By: Yeonkyoung So, Gyuseong Lee, Sungmok Jung, and more
Potential Business Impact:
Helps computers understand "not" in sentences better.
Negation is a fundamental linguistic phenomenon that poses persistent challenges for Large Language Models (LLMs), particularly in tasks requiring deep semantic understanding. Existing benchmarks often treat negation as a side case within broader tasks such as natural language inference, so few benchmarks exclusively target negation understanding. In this work, we introduce Thunder-NUBench, a novel benchmark explicitly designed to assess sentence-level negation understanding in LLMs. Thunder-NUBench goes beyond surface-level cue detection by contrasting standard negation with structurally diverse alternatives such as local negation, contradiction, and paraphrase. The benchmark consists of manually curated sentence-negation pairs and a multiple-choice dataset that enables in-depth evaluation of models' negation understanding.
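The abstract does not specify the dataset schema or prompt format, so the sketch below shows, in Python, one plausible shape for a multiple-choice item that contrasts a standard negation with local-negation, contradiction, and paraphrase distractors, plus a minimal scoring helper. The field names, example sentences, and the `generate` callable are all illustrative assumptions, not the benchmark's actual format or API.

```python
# Illustrative sketch of one multiple-choice item of the kind Thunder-NUBench
# describes. Schema, prompt wording, and the `generate` stub are assumptions
# for illustration only.

item = {
    "sentence": "The committee approved the proposal.",
    "options": {
        "A": "The committee did not approve the proposal.",    # standard negation (correct)
        "B": "The committee approved the unpopular proposal.",  # local negation (distractor)
        "C": "The committee rejected the proposal.",            # contradiction (distractor)
        "D": "The proposal was approved by the committee.",     # paraphrase (distractor)
    },
    "answer": "A",
}

def build_prompt(item: dict) -> str:
    """Format the item as a multiple-choice question for an LLM."""
    lines = [
        f"Sentence: {item['sentence']}",
        "Which option is the negation of the sentence above?",
    ]
    for label, text in item["options"].items():
        lines.append(f"{label}. {text}")
    lines.append("Answer with a single letter.")
    return "\n".join(lines)

def score(item: dict, generate) -> bool:
    """`generate` is any callable mapping a prompt string to model text."""
    reply = generate(build_prompt(item)).strip()
    return reply[:1].upper() == item["answer"]

# Example with a trivial stand-in model that always answers "A":
print(score(item, lambda prompt: "A"))  # True
```

Accuracy over such items rewards models that distinguish a true sentence-level negation from surface-similar alternatives, rather than merely detecting a negation cue like "not".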
Similar Papers
Thunder-KoNUBench: A Corpus-Aligned Benchmark for Korean Negation Understanding
Computation and Language
Improves AI's understanding of Korean "not" words.
Vision-Language Models Do Not Understand Negation
CV and Pattern Recognition
Teaches computers to understand "not" in pictures.
A Comprehensive Taxonomy of Negation for NLP and Neural Retrievers
Computation and Language
Helps computers understand "not" in questions.