Reasoning Planning for Language Models
By: Bao Nguyen, Hieu Trung Nguyen, Ruifeng She, and more
Potential Business Impact:
Helps computers pick the best way to solve math problems.
Selecting an appropriate reasoning method for a given query remains a key challenge in language model generation. Existing approaches typically generate multiple candidate responses and use an aggregation strategy to select the output answer, often assuming that more candidate answers yield higher accuracy. We revisit this assumption through a rigorous theoretical analysis, deriving accuracy bounds for standard aggregation methods under fixed generation distributions and candidate sizes. Building on these insights, we introduce EPIC, an Ensemble Planning with Contrastive learning framework to learn a shared representation space that captures both model reasoning abilities and query-method compatibility. EPIC incorporates our probability bounds as a regularizer in a utility-driven optimization that balances accuracy and computational cost. Experiments on diverse mathematical reasoning tasks show that EPIC consistently selects optimal reasoning methods, improving accuracy while reducing computational overhead. Our code can be found at https://github.com/nguyenngocbaocmt02/EPIC.
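The aggregation strategies the abstract refers to are typically majority voting (self-consistency) over sampled candidate answers. A minimal sketch of that baseline, assuming string-valued final answers (the function name and sample data are illustrative, not from the paper):

```python
from collections import Counter

def aggregate_majority(candidates):
    """Return the most frequent answer among sampled candidates."""
    counts = Counter(candidates)
    answer, _ = counts.most_common(1)[0]
    return answer

# Hypothetical candidate answers sampled for a single math query.
print(aggregate_majority(["42", "41", "42", "42", "7"]))  # -> 42
```

The paper's theoretical analysis bounds the accuracy of such aggregation under a fixed generation distribution and candidate count, rather than assuming accuracy keeps improving with more samples.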
Similar Papers
EpiCaR: Knowing What You Don't Know Matters for Better Reasoning in LLMs
Computation and Language
Teaches AI to know when it's right.
Entropy-Aligned Decoding of LMs for Better Writing and Reasoning
Machine Learning (CS)
Makes AI write better stories and answers.
Learning to Reason in LLMs by Expectation Maximization
Machine Learning (CS)
Helps computers think step-by-step to solve problems.