Thunder-KoNUBench: A Corpus-Aligned Benchmark for Korean Negation Understanding
By: Sungmok Jung, Yeonkyoung So, Joonhak Lee, and more
Potential Business Impact:
Improves AI's understanding of Korean "not" words.
Although negation is known to challenge large language models (LLMs), benchmarks for evaluating negation understanding, especially in Korean, are scarce. We conduct a corpus-based analysis of Korean negation and show that LLM performance degrades under negation. We then introduce Thunder-KoNUBench, a sentence-level benchmark that reflects the empirical distribution of Korean negation phenomena. Evaluating 47 LLMs, we analyze the effects of model size and instruction tuning, and show that fine-tuning on Thunder-KoNUBench improves negation understanding and broader contextual comprehension in Korean.
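To make "sentence-level benchmark" concrete, here is a minimal sketch of how such a negation-understanding item might be scored. The item schema, the example sentence, and the `ask_model` stub are hypothetical illustrations, not the authors' released code or Thunder-KoNUBench's actual format.

```python
# Hypothetical sketch: scoring a model on a sentence-level Korean negation item.
# The item format and ask_model stub are assumptions, not the benchmark's real schema.

ITEMS = [
    {
        # "Minsu did not go to school." (long-form negation "-지 않다")
        "sentence": "민수는 학교에 가지 않았다.",
        "choices": ["Minsu went to school.", "Minsu did not go to school."],
        "answer": 1,
    },
]

def ask_model(sentence: str, choices: list[str]) -> int:
    """Placeholder for an LLM call: return the index of the chosen reading."""
    # Swap in a real model call (API or local checkpoint) here.
    return 1

def evaluate(items) -> float:
    """Fraction of items where the model picks the gold reading."""
    correct = sum(ask_model(it["sentence"], it["choices"]) == it["answer"] for it in items)
    return correct / len(items)

if __name__ == "__main__":
    print(f"accuracy = {evaluate(ITEMS):.2f}")
```

Under this kind of setup, accuracy on negated sentences can be compared against matched non-negated controls to quantify the degradation the abstract describes.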
Similar Papers
Thunder-NUBench: A Benchmark for LLMs' Sentence-Level Negation Understanding
Computation and Language
Helps computers understand "not" in sentences better.
Nunchi-Bench: Benchmarking Language Models on Cultural Reasoning with a Focus on Korean Superstition
Computation and Language
Tests if AI understands different cultures' beliefs.
Korean Canonical Legal Benchmark: Toward Knowledge-Independent Evaluation of LLMs' Legal Reasoning Capabilities
Computation and Language
Tests if AI can understand and use laws.