WildSci: Advancing Scientific Reasoning from In-the-Wild Literature
By: Tengxiao Liu, Deepak Nathani, Zekun Li, and more
Potential Business Impact:
Teaches computers to answer hard science questions.
Recent progress in large language model (LLM) reasoning has focused on domains like mathematics and coding, where abundant high-quality data and objective evaluation metrics are readily available. In contrast, progress in LLM reasoning remains limited in scientific domains such as medicine and materials science, owing to sparse dataset coverage and the inherent complexity of open-ended scientific questions. To address these challenges, we introduce WildSci, a new dataset of domain-specific science questions automatically synthesized from peer-reviewed literature, covering 9 scientific disciplines and 26 subdomains. By framing complex scientific reasoning tasks in a multiple-choice format, we enable scalable training with well-defined reward signals. We further apply reinforcement learning to fine-tune models on these data and analyze the resulting training dynamics, including domain-specific performance changes, response behaviors, and generalization trends. Experiments on a suite of scientific benchmarks demonstrate the effectiveness of our dataset and approach. We release WildSci to enable scalable and sustainable research in scientific reasoning, available at https://huggingface.co/datasets/JustinTX/WildSci.
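The multiple-choice framing makes the reward signal for reinforcement learning easy to define: the model earns reward only when its final answer matches the gold choice. A minimal sketch of such a verifiable reward function is below; the answer-extraction convention (last standalone letter A-D in the response) is an assumption for illustration, not the paper's documented implementation.

```python
import re

def mc_reward(response: str, gold_choice: str) -> float:
    """Binary reward for a multiple-choice answer.

    Extracts the last standalone letter A-D from the model's response
    (a hypothetical convention, e.g. "... so the answer is C.") and
    compares it to the gold choice. Returns 1.0 on a match, else 0.0.
    """
    matches = re.findall(r"\b([A-D])\b", response.strip())
    return 1.0 if matches and matches[-1] == gold_choice else 0.0
```

A binary, automatically checkable reward like this is what lets multiple-choice science questions scale to RL training in the way math and coding tasks already do.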
Similar Papers
NaturalReasoning: Reasoning in the Wild with 2.8M Challenging Questions
Computation and Language
Teaches computers to think about many subjects.
MegaScience: Pushing the Frontiers of Post-Training Datasets for Science Reasoning
Computation and Language
Teaches AI to think like scientists.