Math Natural Language Inference: this should be easy!
By: Valeria de Paiva, Qiyue Gao, Hai Hu and more
Potential Business Impact:
Could help computers understand mathematical text more reliably.
We ask whether contemporary LLMs are able to perform natural language inference (NLI) tasks on mathematical texts. We call this the Math NLI problem. We construct a corpus of Math NLI pairs whose premises are drawn from extant mathematical text and whose hypotheses and gold labels were provided by people with experience both in research-level mathematics and in the NLI field. We also investigate the quality of corpora built from the same premises but whose hypotheses are provided by LLMs themselves. We examine not only the performance but also the inter-group consistency of a diverse group of LLMs. We have both positive and negative findings. Among our positive findings: in some settings, using a majority vote of LLMs is approximately equivalent to using human-labeled data in the Math NLI area. On the negative side: LLMs still struggle with mathematical language, and they occasionally fail at even basic inferences. Current models are not as prone to hypothesis-only "inference" on our data as the previous generation of models was. In addition to these findings, we provide our corpora as data to support future work on Math NLI.
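To make the majority-vote idea from the abstract concrete, here is a minimal sketch, not the authors' code, of aggregating per-model NLI labels for premise/hypothesis pairs and scoring the aggregate against gold labels. The example pairs, votes, and label set are hypothetical illustrations, not items from the released corpora.

```python
# Minimal sketch of majority-vote aggregation over per-model NLI labels.
# All data below is hypothetical and only illustrates the aggregation step.
from collections import Counter

LABELS = {"entailment", "neutral", "contradiction"}

def majority_vote(votes: list[str]) -> str:
    """Return the most frequent label; ties go to the label counted first."""
    assert all(v in LABELS for v in votes), "unexpected label"
    return Counter(votes).most_common(1)[0][0]

# Hypothetical Math NLI items: (premise, hypothesis, gold label, per-model votes).
items = [
    ("Every prime greater than 2 is odd.",
     "3 is odd.",
     "entailment",
     ["entailment", "entailment", "neutral"]),
    ("f is continuous on [0, 1].",
     "f is differentiable on [0, 1].",
     "neutral",
     ["neutral", "neutral", "entailment"]),
]

correct = sum(majority_vote(votes) == gold for _, _, gold, votes in items)
print(f"Majority-vote accuracy on the toy set: {correct / len(items):.2f}")
```

The same loop could be pointed at real model outputs and the released Math NLI pairs; the abstract's claim is that, in some settings, labels aggregated this way track human-provided gold labels closely.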
Similar Papers
Reverse-engineering NLI: A study of the meta-inferential properties of Natural Language Inference
Computation and Language
Teaches computers to understand how sentences relate.
Can Large Language Models Robustly Perform Natural Language Inference for Japanese Comparatives?
Computation and Language
Helps computers understand comparisons in Japanese.
Filling the Gap: Is Commonsense Knowledge Generation useful for Natural Language Inference?
Computation and Language
Helps computers decide whether one sentence follows from another.