Learning to Reason in LLMs by Expectation Maximization

Published: December 23, 2025 | arXiv ID: 2512.20169v1

By: Junghyun Lee, Branislav Kveton, Sunav Choudhary, and more

Potential Business Impact:

Teaches AI models to reason step-by-step before answering, improving the accuracy of their answers.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) solve reasoning problems by first generating a rationale and then answering. We formalize reasoning as a latent variable model and derive an expectation-maximization (EM) objective for learning to reason. This view connects EM and modern reward-based optimization, and shows that the main challenge lies in designing a sampling distribution that generates rationales that justify correct answers. We instantiate and compare several sampling schemes: rejection sampling with a budget, self-taught reasoner (STaR), and prompt posterior sampling (PPS), which only keeps the rationalization stage of STaR. Our experiments on the ARC, MMLU, and OpenBookQA datasets with the Llama and Qwen models show that the sampling scheme can significantly affect the accuracy of learned reasoning models. Despite its simplicity, we observe that PPS outperforms the other sampling schemes.
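To make the abstract's framing concrete, here is a minimal Python sketch of the EM loop it describes: the E-step collects rationales that justify the correct answer, using rejection sampling with a budget, STaR (rejection sampling with a rationalization fallback), or PPS (rationalization only, i.e. conditioning the prompt on the gold answer), and the M-step would fine-tune the model on the kept triples. Everything below is illustrative, not from the paper: a toy random "model" stands in for Llama/Qwen, and the names `sample_rationale`, `answer_from`, `e_step_rejection`, `e_step_star`, `e_step_pps`, and `m_step` are our own.

```python
import random

random.seed(0)

# Toy stand-in for an LLM: a rationale is just a tagged record, and it
# "justifies" the gold answer with some probability. Conditioning on the
# gold answer (the `hint`) makes a justifying rationale more likely,
# mimicking the rationalization stage of STaR that PPS keeps.
def sample_rationale(question, hint=None):
    quality = 0.8 if hint is not None else 0.3
    return {"text": f"rationale-for-{question}", "quality": quality}

def answer_from(rationale, gold):
    # The rationale yields the correct answer with prob = its quality.
    return gold if random.random() < rationale["quality"] else "wrong"

def e_step_rejection(question, gold, budget=8):
    """E-step via rejection sampling with a budget: keep drawing
    z ~ p(z | x) until the induced answer matches y, or give up."""
    for _ in range(budget):
        z = sample_rationale(question)
        if answer_from(z, gold) == gold:
            return z
    return None

def e_step_star(question, gold, budget=8):
    """STaR-style E-step: rejection sampling first; if no draw justifies
    the gold answer, fall back to rationalization (hinted sampling)."""
    z = e_step_rejection(question, gold, budget)
    return z if z is not None else sample_rationale(question, hint=gold)

def e_step_pps(question, gold):
    """PPS-style E-step: keep only the rationalization stage, i.e.
    sample z conditioned on both the question and the gold answer."""
    return sample_rationale(question, hint=gold)

def m_step(triples):
    """M-step placeholder: a real run would fine-tune the model on the
    (question, rationale, answer) triples; here we just count them."""
    return len(triples)

if __name__ == "__main__":
    data = [(f"q{i}", f"a{i}") for i in range(100)]
    schemes = [("rejection", e_step_rejection),
               ("STaR", e_step_star),
               ("PPS", e_step_pps)]
    for name, e_step in schemes:
        kept = [(x, z, y) for x, y in data
                if (z := e_step(x, y)) is not None]
        print(f"{name}: {m_step(kept)} training triples collected")
```

Under these toy settings the rejection scheme discards the questions its budget cannot justify, while STaR and PPS keep every example; the paper's finding is that how this kept set is constructed materially changes the accuracy of the fine-tuned model.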

Country of Origin
🇰🇷 Korea, Republic of

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)