Chain of Methodologies: Scaling Test Time Computation without Training
By: Cong Liu, Jie Wu, Weigang Wu, and more
Potential Business Impact:
Teaches computers to think step-by-step like people.
Large Language Models (LLMs) often struggle with complex reasoning tasks because in-depth methodological insights are scarce in their training data, being typically absent from publicly available documents. This paper introduces the Chain of Methodologies (CoM), an innovative and intuitive prompting framework that enhances structured thinking by integrating human methodological insights, enabling LLMs to tackle complex tasks with extended reasoning. CoM leverages the metacognitive abilities of advanced LLMs, activating systematic reasoning through user-defined methodologies without explicit fine-tuning. Experiments show that CoM surpasses competitive baselines, demonstrating the potential of training-free prompting methods as robust solutions for complex reasoning tasks and bridging the gap toward human-level reasoning through human-like methodological insights.
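To make the idea concrete, below is a minimal sketch of how a CoM-style prompt might be assembled: a user-defined methodology is prepended to the task so the model works through the stated steps before answering, with no fine-tuning involved. The function name, prompt wording, and example methodology here are illustrative assumptions, not the paper's exact template.

```python
# Minimal sketch of a CoM-style prompt builder (illustrative; the paper's
# actual prompt template is not reproduced here).

def build_com_prompt(methodology: str, task: str) -> str:
    """Prepend a user-defined methodology to a task so the model
    reasons through the stated steps before giving a final answer."""
    return (
        "Follow this methodology step by step before answering:\n"
        f"{methodology.strip()}\n\n"
        f"Task:\n{task.strip()}\n\n"
        "Work through each step of the methodology, then state the final answer."
    )


if __name__ == "__main__":
    # Hypothetical methodology supplied by the user for a math-style task.
    methodology = (
        "1. Restate the problem in your own words.\n"
        "2. Identify the relevant quantities and constraints.\n"
        "3. Derive intermediate results before combining them.\n"
        "4. Check the final answer against the constraints."
    )
    task = "A train travels 120 km in 1.5 hours. What is its average speed?"
    print(build_com_prompt(methodology, task))
```

The resulting prompt string would then be sent to any instruction-following LLM; because the methodology is plain text supplied at inference time, the approach requires no model updates.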
Similar Papers
Scaling Test-time Compute for Low-resource Languages: Multilingual Reasoning in LLMs
Computation and Language
Helps computers reason in any language.
A Survey on Large Language Models for Mathematical Reasoning
Artificial Intelligence
Helps computers solve math problems like a person.
Reasoning Capabilities and Invariability of Large Language Models
Computation and Language
Tests if computers can think logically.