ProofSketch: Efficient Verified Reasoning for Large Language Models
By: Disha Sheshanarayana, Tanishka Magar
Potential Business Impact:
Makes AI think smarter, faster, and cheaper.
Reasoning methods such as chain-of-thought prompting and self-consistency have shown immense potential to improve the accuracy of large language models across various reasoning tasks. However, such methods involve generating lengthy reasoning chains, which substantially increases token consumption, computational cost, and latency. To address this inefficiency, we propose ProofSketch, a verification-guided reasoning framework that integrates symbolic closure computation, lexicographic verification, and adaptive sketch generation. Our experiments show that ProofSketch consistently reduces token usage while improving accuracy, demonstrating that this approach offers a promising path for efficient and trustworthy reasoning.
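The abstract does not spell out the algorithm, but the general shape of verification-guided adaptive reasoning can be sketched as follows: generate a cheap "sketch" answer first, check it with a symbolic verifier (here, a simple forward-chaining closure over known rules), and fall back to a longer reasoning chain only when verification fails. All function names and the token counts below are illustrative assumptions, not ProofSketch's actual implementation.

```python
def closure(facts, rules):
    """Symbolic closure: apply rules until no new facts can be derived.

    `rules` is a list of (premises, conclusion) pairs; a rule fires when
    all of its premises are already in the derived set.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived


def verified_answer(goal, facts, rules):
    """Hypothetical adaptive step: cheap verified sketch first,
    expensive full reasoning chain only on verification failure.

    Returns (sketch_verified, illustrative_token_cost).
    """
    derived = closure(facts, rules)   # cheap symbolic check
    if goal in derived:
        return True, 10               # sketch verified: short output suffices
    return False, 100                 # fallback: long chain-of-thought pass


facts = {"a", "b"}
rules = [(("a", "b"), "c"), (("c",), "d")]
print(verified_answer("d", facts, rules))  # -> (True, 10)
```

The design point this toy illustrates is why token usage drops: whenever the verifier accepts the sketch, the model never pays for the long reasoning chain at all.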
Similar Papers
Sketch-of-Thought: Efficient LLM Reasoning with Adaptive Cognitive-Inspired Sketching
Computation and Language
Makes smart computers think faster, using fewer words.
SketchThinker-R1: Towards Efficient Sketch-Style Reasoning in Large Multimodal Models
CV and Pattern Recognition
Makes AI think faster and cheaper.
Efficient Reasoning Models: A Survey
Computation and Language
Makes smart computers think faster and use less power.