MDToC: Metacognitive Dynamic Tree of Concepts for Boosting Mathematical Problem-Solving of Large Language Models
By: Tung Duong Ta, Tim Oates
Potential Business Impact:
Helps large language models solve math problems more accurately.
Despite advances in mathematical reasoning capabilities, Large Language Models (LLMs) still struggle to verify their calculations when using established prompting techniques. We present MDToC (Metacognitive Dynamic Tree of Concepts), a three-phase approach that constructs a concept tree, develops accuracy-verified calculations for each concept, and employs majority voting to evaluate competing solutions. Evaluations on the CHAMP, MATH, and Game-of-24 benchmarks demonstrate MDToC's effectiveness: with GPT-4-Turbo it achieves 58.1% on CHAMP, 86.6% on MATH, and 85% on Game-of-24, outperforming GoT by 5%, 5.4%, and 4% on these tasks, respectively, without hand-engineered hints. MDToC consistently surpasses existing prompting methods across all backbone models, yielding improvements of up to 7.6% over ToT and 6.2% over GoT and establishing metacognitive calculation verification as a promising direction for enhanced mathematical reasoning.
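The three phases described above suggest a straightforward pipeline. The following is a minimal Python sketch of that pipeline under stated assumptions: the `llm` callable, the prompt wording, the branching factor, and the number of sampled candidate solutions are illustrative placeholders, not the paper's actual implementation.

```python
from collections import Counter
from typing import Callable, List

def mdtoc_solve(problem: str, llm: Callable[[str], str],
                branching: int = 3, n_solutions: int = 5) -> str:
    """Hypothetical sketch of the three-phase MDToC loop from the abstract."""
    # Phase 1: build a (one-level) tree of concepts relevant to the problem.
    concepts = [
        llm(f"List one key concept (#{i + 1}) needed to solve: {problem}")
        for i in range(branching)
    ]

    # Phase 2: for each concept, draft a calculation and ask the model to
    # re-check it (the metacognitive verification step) before accepting it.
    verified_steps: List[str] = []
    for concept in concepts:
        calc = llm(f"Using the concept '{concept}', carry out the relevant "
                   f"calculation for: {problem}")
        check = llm(f"Verify this calculation step by step and reply "
                    f"'CORRECT' or give a corrected version: {calc}")
        verified_steps.append(calc if "CORRECT" in check else check)

    # Phase 3: sample several candidate final answers from the verified
    # steps and return the majority-voted one.
    candidates = [
        llm(f"Given these verified steps:\n{chr(10).join(verified_steps)}\n"
            f"Give only the final answer to: {problem}")
        for _ in range(n_solutions)
    ]
    answer, _ = Counter(candidates).most_common(1)[0]
    return answer

if __name__ == "__main__":
    # Toy stand-in for an LLM so the sketch runs without any API access.
    fake_llm = lambda prompt: "42"
    print(mdtoc_solve("What is 6 * 7?", fake_llm))
```

In practice the concept tree would be expanded dynamically to more than one level and the verification prompt tuned per benchmark; the sketch only fixes the order of the three phases.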
Similar Papers
Structured Reasoning with Tree-of-Thoughts for Bengali Math Word Problems
Computation and Language
Lets computers solve math problems better.
ToC: Tree-of-Claims Search with Multi-Agent Language Models
Machine Learning (CS)
Helps lawyers write better patent claims faster.
MetaLadder: Ascending Mathematical Solution Quality via Analogical-Problem Reasoning Transfer
Computation and Language
Helps computers solve math problems like humans.