WildSci: Advancing Scientific Reasoning from In-the-Wild Literature

Published: January 9, 2026 | arXiv ID: 2601.05567v1

By: Tengxiao Liu, Deepak Nathani, Zekun Li, and more

Potential Business Impact:

Teaches computers to answer hard science questions.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent progress in large language model (LLM) reasoning has focused on domains like mathematics and coding, where abundant high-quality data and objective evaluation metrics are readily available. In contrast, reasoning progress remains limited in scientific domains such as medicine and materials science, owing to sparse dataset coverage and the inherent complexity of open-ended scientific questions. To address these challenges, we introduce WildSci, a new dataset of domain-specific science questions automatically synthesized from peer-reviewed literature, covering 9 scientific disciplines and 26 subdomains. By framing complex scientific reasoning tasks in a multiple-choice format, we enable scalable training with well-defined reward signals. We further apply reinforcement learning to fine-tune models on these data and analyze the resulting training dynamics, including domain-specific performance changes, response behaviors, and generalization trends. Experiments on a suite of scientific benchmarks demonstrate the effectiveness of our dataset and approach. We release WildSci to enable scalable and sustainable research in scientific reasoning, available at https://huggingface.co/datasets/JustinTX/WildSci.
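The abstract's key mechanism is that a multiple-choice framing turns open-ended scientific questions into tasks with a verifiable, binary reward suitable for reinforcement learning. A minimal sketch of such a reward function is below; the function names and the answer-extraction heuristic are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of a verifiable reward for multiple-choice RL fine-tuning.
# extract_choice and mcq_reward are hypothetical names; the paper's
# actual answer-parsing and reward logic may differ.
import re
from typing import Optional

def extract_choice(response: str) -> Optional[str]:
    """Pull the final standalone answer letter (A-D) from a model response."""
    # Matches an A-D letter with no later standalone A-D letter after it,
    # i.e. the last answer letter mentioned.
    match = re.search(r"\b([A-D])\b(?!.*\b[A-D]\b)", response, re.DOTALL)
    return match.group(1) if match else None

def mcq_reward(response: str, gold: str) -> float:
    """Binary reward: 1.0 if the extracted letter matches the gold answer."""
    return 1.0 if extract_choice(response) == gold else 0.0
```

Because the reward is exact-match against a known correct option, it needs no learned reward model or human grading, which is what makes training on literature-derived questions scalable.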

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links
https://huggingface.co/datasets/JustinTX/WildSci

Page Count
22 pages

Category
Computer Science:
Artificial Intelligence