A Survey of Inductive Reasoning for Large Language Models
By: Kedi Chen, Dezhao Ruan, Yuhao Dan, and more
Potential Business Impact:
Helps computers learn general rules from specific examples, the way people do.
Reasoning is an important task for large language models (LLMs). Among reasoning paradigms, inductive reasoning is a fundamental type, characterized by its particular-to-general thinking process and the non-uniqueness of its answers. The inductive mode is crucial for knowledge generalization and aligns closely with human cognition, making it a fundamental mode of learning that has attracted increasing interest. Despite this importance, inductive reasoning has not yet been systematically summarized. Therefore, this paper presents the first comprehensive survey of inductive reasoning for LLMs. First, methods for improving inductive reasoning are categorized into three main areas: post-training, test-time scaling, and data augmentation. Then, current benchmarks of inductive reasoning are summarized, and a unified sandbox-based evaluation approach with the observation coverage metric is derived. Finally, we offer analyses of the source of inductive ability and of how simple model architectures and data help with inductive tasks, providing a solid foundation for future research.
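The abstract mentions a sandbox-based evaluation approach with an "observation coverage" metric but does not spell out its definition. Below is a minimal Python sketch under the assumption that coverage means the fraction of observed input-output pairs that an induced rule reproduces when executed in a restricted namespace. The function name `observation_coverage` and the toy `exec`-based sandbox are illustrative assumptions, not the survey's implementation.

```python
# Minimal sketch (not the paper's implementation): run an LLM-induced rule in a
# toy sandbox and report the fraction of observations it explains.

def observation_coverage(rule_code: str, observations: list[tuple]) -> float:
    """Execute induced rule code and return the share of (input, output) pairs it reproduces."""
    namespace: dict = {}
    # Restricted namespace as a toy sandbox; a real harness would isolate execution more strictly.
    exec(rule_code, {"__builtins__": {}}, namespace)
    rule = namespace.get("rule")
    if rule is None:
        return 0.0
    covered = 0
    for x, y in observations:
        try:
            if rule(x) == y:
                covered += 1
        except Exception:
            pass  # a rule that crashes on an observation does not cover it
    return covered / len(observations)

# Example: a model induces "double the input" from particular cases.
induced = "def rule(x):\n    return 2 * x\n"
obs = [(1, 2), (3, 6), (5, 10), (7, 15)]  # the last pair is not explained by the rule
print(observation_coverage(induced, obs))  # 0.75
```

In this reading, a coverage of 1.0 would mean the induced rule generalizes to every held-out observation, while lower values quantify how much of the evidence the particular-to-general hypothesis actually accounts for.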
Similar Papers
Implicit Reasoning in Large Language Models: A Comprehensive Survey
Computation and Language
Lets computers think faster without showing steps.
From Efficiency to Adaptivity: A Deeper Look at Adaptive Reasoning in Large Language Models
Artificial Intelligence
Computers change how they think based on how hard a problem is.
A Survey on Large Language Models for Mathematical Reasoning
Artificial Intelligence
Helps computers solve math problems like a person.