Can Large Language Models Robustly Perform Natural Language Inference for Japanese Comparatives?

Published: September 17, 2025 | arXiv ID: 2509.13695v1

By: Yosuke Mikami, Daiki Matsuoka, Hitomi Yanaka

Potential Business Impact:

Helps computers understand comparative sentences in Japanese.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) perform remarkably well in Natural Language Inference (NLI). However, NLI involving numerical and logical expressions remains challenging. Comparatives are a key linguistic phenomenon related to such inference, but the robustness of LLMs in handling them, especially in languages that are not dominant in the models' training data, such as Japanese, has not been sufficiently explored. To address this gap, we construct a Japanese NLI dataset that focuses on comparatives and evaluate various LLMs in zero-shot and few-shot settings. Our results show that the performance of the models is sensitive to the prompt formats in the zero-shot setting and influenced by the gold labels in the few-shot examples. The LLMs also struggle to handle linguistic phenomena unique to Japanese. Furthermore, we observe that prompts containing logical semantic representations help the models predict the correct labels for inference problems that they struggle to solve even with few-shot examples.
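The abstract describes three prompting conditions: zero-shot prompts (whose exact format affects performance), few-shot prompts (whose gold labels influence predictions), and prompts augmented with logical semantic representations. Below is a minimal Python sketch of how such prompts might be assembled. The prompt wording, label set, example sentences, and logical-form notation are illustrative assumptions, not the paper's actual dataset or formats.

```python
# Illustrative prompt construction for NLI over Japanese comparatives.
# Everything here (labels, wording, examples) is a hypothetical stand-in
# for the paper's materials.

LABELS = ["entailment", "contradiction", "neutral"]


def zero_shot_prompt(premise: str, hypothesis: str) -> str:
    """One possible zero-shot NLI prompt format (the paper finds
    results are sensitive to this choice)."""
    return (
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        f"Question: Does the premise entail the hypothesis? "
        f"Answer with one of: {', '.join(LABELS)}.\n"
        "Answer:"
    )


def few_shot_prompt(examples, premise: str, hypothesis: str) -> str:
    """Prepend labeled examples; the paper reports that the gold labels
    in these examples influence the model's predictions."""
    shots = "\n\n".join(
        zero_shot_prompt(p, h) + " " + label for p, h, label in examples
    )
    return shots + "\n\n" + zero_shot_prompt(premise, hypothesis)


def with_logical_form(premise: str, hypothesis: str, logical_form: str) -> str:
    """Add a logical semantic representation of the premise, which the
    paper observes helps on problems that few-shot examples alone
    do not solve."""
    return (
        f"Premise: {premise}\n"
        f"Logical form of premise: {logical_form}\n"
        f"Hypothesis: {hypothesis}\n"
        f"Question: Does the premise entail the hypothesis? "
        f"Answer with one of: {', '.join(LABELS)}.\n"
        "Answer:"
    )


if __name__ == "__main__":
    premise = "太郎は次郎より3センチ背が高い。"  # "Taro is 3 cm taller than Jiro."
    hypothesis = "次郎は太郎より背が低い。"      # "Jiro is shorter than Taro."
    examples = [
        ("花子は10歳だ。", "花子は5歳より年上だ。", "entailment"),
    ]
    print(zero_shot_prompt(premise, hypothesis))
    print(few_shot_prompt(examples, premise, hypothesis))
    print(with_logical_form(premise, hypothesis,
                            "height(Taro) >= height(Jiro) + 3cm"))
```

The three builders mirror the paper's three conditions so that only one variable (format, gold labels, or logical form) changes between runs; the actual study evaluates multiple LLMs on a constructed Japanese comparatives dataset rather than these toy sentences.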

Country of Origin
🇯🇵 Japan

Repos / Data Links

Page Count
10 pages

Category
Computer Science: Computation and Language