MiST: Understanding the Role of Mid-Stage Scientific Training in Developing Chemical Reasoning Models
By: Andres M. Bran, Tong Xie, Shai Pranesh, and more
Large language models can develop reasoning capabilities through online fine-tuning with rule-based rewards. However, recent studies reveal a critical constraint: reinforcement learning succeeds only when the base model already assigns non-negligible probability to correct answers -- a property we term 'latent solvability'. This work investigates the emergence of chemical reasoning capabilities and what this prerequisite means for chemistry. We identify two necessary conditions for RL-based chemical reasoning: 1) symbolic competence, and 2) latent chemical knowledge. To satisfy these, we propose mid-stage scientific training (MiST): a set of mid-stage training techniques comprising data mixing with SMILES/CIF-aware pre-processing, continued pre-training on 2.9B tokens, and supervised fine-tuning on 1B tokens. These steps raise the latent-solvability score of 3B and 7B models by up to 1.8x, and enable RL to lift top-1 accuracy from 10.9% to 63.9% on organic reaction naming, and from 40.6% to 67.4% on inorganic material generation. Similar gains are observed on other challenging chemical tasks, while producing interpretable reasoning traces. Our results define clear prerequisites for chemical reasoning training and highlight the broader role of mid-stage training in unlocking reasoning capabilities.
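The abstract does not specify what "SMILES/CIF-aware pre-processing" entails, but a common ingredient in such pipelines is atom-level SMILES tokenization, which keeps chemically meaningful units (bracket atoms, two-letter elements, ring-closure labels) intact rather than splitting on arbitrary subwords. The sketch below shows that standard technique; the regex pattern and function name are illustrative, not taken from the paper.

```python
# Illustrative sketch only: one common form of SMILES-aware pre-processing,
# NOT the paper's published implementation. A regex splits a SMILES string
# into atom/bond tokens so that units like [NH4+] or Cl stay whole.
import re

# Widely used atom-level SMILES tokenization pattern.
SMILES_TOKEN_PATTERN = re.compile(
    r"(\[[^\]]+\]"             # bracket atoms, e.g. [NH4+], [C@@H]
    r"|Br?|Cl?"                # Br/B and Cl/C (two-letter elements first)
    r"|N|O|S|P|F|I"            # other organic-subset atoms
    r"|b|c|n|o|s|p"            # aromatic atoms
    r"|%[0-9]{2}|[0-9]"        # ring-closure labels (%NN before single digits)
    r"|[()=#\-+\\/.:~@?>*$])"  # bonds, branches, charges, stereo marks
)

def tokenize_smiles(smiles: str) -> list[str]:
    """Split a SMILES string into tokens; lossless by construction."""
    tokens = SMILES_TOKEN_PATTERN.findall(smiles)
    # Sanity check: concatenating the tokens must reproduce the input.
    assert "".join(tokens) == smiles, f"untokenizable SMILES: {smiles}"
    return tokens

print(tokenize_smiles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
print(tokenize_smiles("[NH4+]"))                 # one bracket-atom token
```

A tokenizer like this ensures the model sees "Cl" as a single chlorine token rather than a carbon followed by a stray character, which is one plausible route to the "symbolic competence" prerequisite the abstract names.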