Multidimensional Consistency Improves Reasoning in Language Models
By: Huiyuan Lai, Xiao Zhang, Malvina Nissim
Potential Business Impact:
Makes math AI more reliable by checking answers many ways.
While large language models (LLMs) have proved able to address some complex reasoning tasks, they are also highly sensitive to input variation, which can lead to different solution paths and final answers. Answer consistency across input variations can thus be taken as a sign of stronger confidence. Leveraging this insight, we introduce a framework, Multidimensional Reasoning Consistency, in which, focusing on math problems, models are systematically pushed to diversify solution paths towards a final answer, thereby testing them for answer consistency across multiple input variations. We induce variations in (i) the order of shots in the prompt, (ii) problem phrasing, and (iii) the language used. Extensive experiments on a large range of open-source state-of-the-art LLMs of various sizes show that reasoning consistency differs by variation dimension, and that by aggregating consistency across dimensions, our framework consistently enhances mathematical reasoning performance on both the monolingual GSM8K dataset and the multilingual MGSM dataset, especially for smaller models.
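The aggregation idea can be illustrated with a minimal sketch: collect the model's final answers under each variation dimension (shot order, phrasing, language), pool them, and pick the most frequent answer. The function name, dimension labels, and the simple pooled majority vote are illustrative assumptions, not the paper's exact procedure.

```python
from collections import Counter

def most_consistent_answer(answers_by_dimension):
    """Pool final answers from all variation dimensions and return the
    most frequent one (a simple majority vote across variations).

    `answers_by_dimension` maps a variation dimension (e.g. "shot_order",
    "phrasing", "language") to the list of answers the model produced
    under each variation along that dimension. This pooling rule is an
    assumption for illustration only.
    """
    pooled = [a for answers in answers_by_dimension.values() for a in answers]
    answer, _count = Counter(pooled).most_common(1)[0]
    return answer

# Hypothetical runs: three dimensions, three variations each.
runs = {
    "shot_order": ["42", "42", "36"],
    "phrasing":   ["42", "42", "42"],
    "language":   ["36", "42", "42"],
}
print(most_consistent_answer(runs))  # -> "42"
```

A model whose answer survives perturbations along all three dimensions dominates the pooled vote, which is what makes the aggregated answer a confidence signal.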
Similar Papers
Exploring and Evaluating Multimodal Knowledge Reasoning Consistency of Multimodal Large Language Models
Computation and Language
Makes computers understand pictures and words better.
Enhancing Mathematical Reasoning in Large Language Models with Self-Consistency-Based Hallucination Detection
Artificial Intelligence
Makes AI better at math by checking its work.
A Survey on Large Language Models for Mathematical Reasoning
Artificial Intelligence
Helps computers solve math problems like a person.