ChemATP: A Training-Free Chemical Reasoning Framework for Large Language Models
By: Mingxu Zhang, Dazhong Shen, Qi Zhang, and more
Potential Business Impact:
Helps computers understand chemistry like a scientist.
Large Language Models (LLMs) exhibit strong general reasoning but struggle in molecular science because standard string representations lack explicit chemical priors. Current solutions face a fundamental dilemma. Training-based methods inject priors into model parameters, but this static coupling hinders rapid knowledge updates and often compromises the model's general reasoning capabilities. Conversely, existing training-free methods avoid these issues but rely on surface-level prompting, failing to provide the fine-grained atom-level priors essential for precise chemical reasoning. To resolve this dilemma, we introduce ChemATP, a framework that decouples chemical knowledge from the reasoning engine. By constructing the first atom-level textual knowledge base, ChemATP enables frozen LLMs to retrieve and reason over this knowledge explicitly and dynamically. This architecture ensures interpretability and adaptability while preserving the LLM's intrinsic general intelligence. Experiments show that ChemATP significantly outperforms training-free baselines and rivals state-of-the-art training-based models, demonstrating that explicit prior injection is a competitive alternative to implicit parameter updates.
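The abstract describes the architecture without implementation details, so the following is a minimal sketch rather than the authors' actual system: given a SMILES string, per-atom features are enumerated with RDKit and rendered as textual priors that are prepended to a frozen LLM's prompt. The function names (atom_priors, build_prompt), the chosen feature set, and the prompt layout are all illustrative assumptions, not ChemATP's actual knowledge base or prompt format.

    # Minimal sketch: retrieve atom-level textual priors for a frozen LLM.
    # Assumes RDKit is installed; feature set and prompt layout are
    # illustrative, not ChemATP's actual design.
    from rdkit import Chem

    def atom_priors(smiles: str) -> list[str]:
        """Render a textual description of each atom in a SMILES string."""
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            raise ValueError(f"Could not parse SMILES: {smiles}")
        lines = []
        for atom in mol.GetAtoms():
            lines.append(
                f"Atom {atom.GetIdx()}: {atom.GetSymbol()}, "
                f"degree {atom.GetDegree()}, "
                f"{atom.GetTotalNumHs()} attached H, "
                f"formal charge {atom.GetFormalCharge()}, "
                f"{'aromatic' if atom.GetIsAromatic() else 'non-aromatic'}"
            )
        return lines

    def build_prompt(smiles: str, question: str) -> str:
        """Prepend the retrieved atom-level priors to the user's question."""
        priors = "\n".join(atom_priors(smiles))
        return (
            f"Molecule: {smiles}\n"
            f"Atom-level priors:\n{priors}\n\n"
            f"Question: {question}"
        )

    # Example: the frozen LLM receives explicit per-atom chemistry in text.
    print(build_prompt("c1ccccc1O", "Is this molecule aromatic?"))

In ChemATP's terms, keeping the priors in text rather than in weights is what makes the knowledge updatable without retraining; in this sketch, updating the knowledge base amounts to changing the strings the prompt builder emits.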
Similar Papers
Chem-R: Learning to Reason as a Chemist
Computational Engineering, Finance, and Science
Helps computers discover new chemicals faster.
ChemAgent: Self-updating Library in Large Language Models Improves Chemical Reasoning
Computation and Language
Helps computers solve tough science problems better.