Can LLM Reasoning Be Trusted? A Comparative Study Using Human Benchmarking on Statistical Tasks
By: Crish Nagarkar, Leonid Bogachev, Serge Sharoff
Potential Business Impact:
Fine-tuned LLMs could solve statistics problems and automatically grade the reasoning in student answers, enabling AI tutoring and analysis quality-control tools.
This paper investigates the ability of large language models (LLMs) to solve statistical tasks, as well as their capacity to assess the quality of reasoning. While state-of-the-art LLMs have demonstrated remarkable performance on a range of NLP tasks, their competence on even moderately complex statistical problems is not well understood. We fine-tuned selected open-source LLMs on a specially developed dataset to enhance their statistical reasoning capabilities, and compared their performance against human scores used as a benchmark. Our results show that the fine-tuned models perform better on advanced statistical tasks, at a level comparable to that of a statistics student. Fine-tuning yields architecture-dependent improvements, with some models showing significant performance gains, indicating clear potential for deployment in educational technology and statistical analysis assistance systems. We also show that LLMs themselves can be far better judges of answer quality (including assessment of explanations and reasoning) than traditional metrics such as BLEU or BERTScore. This self-evaluation capability enables scalable automated assessment for statistical education platforms and quality assurance in automated analysis tools. Potential applications also include validation tools for research methodology in academic and industry settings, and quality control mechanisms for data analysis workflows.
Similar Papers
Human-Level Reasoning: A Comparative Study of Large Language Models on Logical and Abstract Reasoning
Artificial Intelligence
Tests whether LLMs can match human-level logical and abstract reasoning.
How Reliable are LLMs for Reasoning on the Re-ranking task?
Computation and Language
Examines how reliably LLMs reason and explain their choices when re-ranking results.
Brains vs. Bytes: Evaluating LLM Proficiency in Olympiad Mathematics
Artificial Intelligence
Finds that LLMs still fall short on Olympiad-level mathematics.