Finding Answers in Thought Matters: Revisiting Evaluation on Large Language Models with Reasoning
By: Hwiyeol Jo, Joosung Lee, Jaehone Lee, and more
Potential Business Impact:
Makes AI math answers more trustworthy.
Evaluating generative models, such as large language models (LLMs), commonly involves question-answering tasks where the final answer is selected based on the probability of the answer choices. For models that require reasoning, however, the method of answer extraction plays a critical role. Our research reveals that the performance of reasoning models and their final answer distributions are highly sensitive to the answer extraction algorithm employed. To mitigate this, we propose a basic framework: Answer Regeneration. The method uses an additional model inference, providing the prior input and output prefaced by the prompt "Answer:". The final answer is then selected or extracted from the regenerated output. We show that this extraction-rule-agnostic approach exhibits improved performance and enhanced robustness. Furthermore, we have applied the framework to general math problems and open-ended question-answering tasks. Our analysis and this framework could offer more reliable results for model evaluation.
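A minimal sketch of the Answer Regeneration idea described in the abstract, assuming a hypothetical `llm_generate(prompt) -> str` callable standing in for whatever model interface is used; the prompt concatenation format and the final extraction step below are illustrative, not the authors' exact implementation.

```python
def llm_generate(prompt: str) -> str:
    """Placeholder for a model call (hypothetical); swap in your own LLM interface."""
    raise NotImplementedError


def answer_regeneration(question: str, reasoning_output: str) -> str:
    """Re-prompt the model with its own prior input and output, cued by 'Answer:',
    then take the final answer from the shorter regenerated output."""
    # Feed the original question plus the model's full reasoning back in,
    # prefaced by "Answer:" so the model restates only its final answer.
    regen_prompt = f"{question}\n{reasoning_output}\nAnswer:"
    regenerated = llm_generate(regen_prompt).strip()

    # Extracting from the regenerated text avoids parsing the chain of thought
    # with brittle rules; here we simply keep the first non-empty line.
    return regenerated.splitlines()[0] if regenerated else ""
```

The point of the extra inference is that any downstream extraction rule operates on a short, answer-only continuation rather than on the full reasoning trace, which is what makes the approach largely extraction-rule-agnostic.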
Similar Papers
Method-Based Reasoning for Large Language Models: Extraction, Reuse, and Continuous Improvement
Computational Engineering, Finance, and Science
Teaches computers to solve new problems logically.
RAVR: Reference-Answer-guided Variational Reasoning for Large Language Models
Artificial Intelligence
Helps computers learn to solve harder problems.