Algorithmic Thinking Theory
By: MohammadHossein Bateni, Vincent Cohen-Addad, Yuzhou Gu, and more
Potential Business Impact:
Makes AI smarter by letting it check, refine, and combine its own answers.
Large language models (LLMs) have proven to be highly effective for solving complex reasoning tasks. Surprisingly, their capabilities can often be improved by iterating on previously generated solutions. In this context, a reasoning plan for generating and combining a set of solutions can be thought of as an algorithm for reasoning using a probabilistic oracle. We introduce a theoretical framework for analyzing such reasoning algorithms. This framework formalizes the principles underlying popular techniques for iterative improvement and answer aggregation, providing a foundation for designing a new generation of more powerful reasoning methods. Unlike approaches for understanding models that rely on architectural specifics, our model is grounded in experimental evidence. As a result, it offers a general perspective that may extend to a wide range of current and future reasoning oracles.
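To make the oracle abstraction concrete, here is a minimal sketch, not the paper's formalism, of two reasoning plans built on a probabilistic answer oracle: majority-vote aggregation and iterative refinement. Every name here (make_oracle, aggregate, iterate) and the small accuracy boost from seeing a previous answer are illustrative assumptions, not details taken from the paper.

```python
import random
from collections import Counter
from typing import Callable, Optional

# Hypothetical probabilistic oracle: returns the correct answer with
# probability p, otherwise a wrong answer chosen at random. It stands in
# for a single LLM call; the paper's formal oracle model may differ.
def make_oracle(correct: str, wrong: list[str], p: float) -> Callable[[Optional[str]], str]:
    def oracle(previous: Optional[str] = None) -> str:
        # Toy assumption: conditioning on a previous correct attempt slightly
        # boosts accuracy, mimicking iterative self-improvement.
        boost = 0.1 if previous == correct else 0.0
        if random.random() < min(1.0, p + boost):
            return correct
        return random.choice(wrong)
    return oracle

# Answer aggregation: sample the oracle k times and take a majority vote
# (self-consistency style).
def aggregate(oracle: Callable[[Optional[str]], str], k: int) -> str:
    votes = Counter(oracle(None) for _ in range(k))
    return votes.most_common(1)[0][0]

# Iterative improvement: feed each answer back as context for the next call.
def iterate(oracle: Callable[[Optional[str]], str], steps: int) -> str:
    answer: Optional[str] = None
    for _ in range(steps):
        answer = oracle(answer)
    return answer

if __name__ == "__main__":
    oracle = make_oracle("42", ["17", "23", "99"], p=0.55)
    print("aggregated:", aggregate(oracle, k=11))
    print("iterated:  ", iterate(oracle, steps=5))
```

In this toy setup, both reasoning plans turn a noisy per-call oracle into a more reliable overall answer, which is the kind of behavior the framework is meant to analyze.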
Similar Papers
From Efficiency to Adaptivity: A Deeper Look at Adaptive Reasoning in Large Language Models
Artificial Intelligence
Computers change how they think based on how hard a problem is.
Thinking Machines: Mathematical Reasoning in the Age of LLMs
Artificial Intelligence
Helps computers prove math ideas like a scientist.
Universe of Thoughts: Enabling Creative Reasoning with Large Language Models
Artificial Intelligence
Helps computers invent new ideas, not just solve problems.