On LLM-Based Scientific Inductive Reasoning Beyond Equations
By: Brian S. Lin, Jiaxin Yuan, Zihan Zhou, and more
Potential Business Impact:
Helps computers learn new science rules from a few examples.
As large language models (LLMs) increasingly exhibit human-like capabilities, a fundamental question emerges: How can we enable LLMs to learn the underlying patterns from limited examples in entirely novel environments and apply them effectively? This question is central to the ability of LLMs in inductive reasoning. Existing research on LLM-based inductive reasoning can be broadly categorized based on whether the underlying rules are expressible via explicit mathematical equations. However, many recent studies in the beyond-equations category have emphasized rule design without grounding them in specific scenarios. Inspired by the parallels between inductive reasoning and human scientific discovery, we propose the task of LLM-Based Scientific Inductive Reasoning Beyond Equations and introduce a new benchmark, SIRBench-V1, to evaluate the inductive reasoning abilities of LLMs in scientific settings. Our experimental results show that current LLMs still struggle with this task, underscoring its difficulty and the need for further advancement in this area.
Similar Papers
LLM-SRBench: A New Benchmark for Scientific Equation Discovery with Large Language Models
Computation and Language
Helps computers find new science rules.
A Survey on Large Language Models for Mathematical Reasoning
Artificial Intelligence
Helps computers solve math problems like a person.
Brains vs. Bytes: Evaluating LLM Proficiency in Olympiad Mathematics
Artificial Intelligence
Computers can't truly do hard math problems.