Circuits, Features, and Heuristics in Molecular Transformers
By: Kristóf Váradi, Márk Marosi, Péter Antal
Potential Business Impact:
Teaches computers to invent new, safe medicines.
Transformers generate valid and diverse chemical structures, but little is known about the mechanisms that enable these models to capture the rules of molecular representation. We present a mechanistic analysis of autoregressive transformers trained on drug-like small molecules to reveal the computational structure underlying their capabilities across multiple levels of abstraction. We identify computational patterns consistent with low-level syntactic parsing and with more abstract chemical validity constraints. Using sparse autoencoders (SAEs), we extract feature dictionaries associated with chemically relevant activation patterns. We validate our findings on downstream tasks and find that these mechanistic insights can translate into gains in predictive performance across a range of practical settings.
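The abstract describes extracting feature dictionaries from the model's activations with sparse autoencoders. The sketch below illustrates the general SAE recipe used in interpretability work: encode an activation vector into a wide, non-negative, sparse code, decode it back, and train with a reconstruction loss plus an L1 sparsity penalty. The hidden width, learning rate, L1 coefficient, and the random stand-in for collected activations are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of a sparse autoencoder (SAE) over transformer activations.
# All hyperparameters and the synthetic "activations" tensor are assumptions
# for illustration, not the paper's setup.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        # Sparse, non-negative feature activations over a learned dictionary.
        f = torch.relu(self.encoder(x))
        # Reconstruction of the original activation vector from those features.
        x_hat = self.decoder(f)
        return x_hat, f

d_model, d_hidden, l1_coef = 256, 2048, 1e-3
sae = SparseAutoencoder(d_model, d_hidden)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)

# Stand-in for residual-stream activations collected from a molecular transformer.
activations = torch.randn(4096, d_model)

for step in range(100):
    batch = activations[torch.randint(0, activations.shape[0], (256,))]
    x_hat, f = sae(batch)
    # Reconstruction error plus an L1 penalty that encourages sparse feature use.
    loss = ((x_hat - batch) ** 2).mean() + l1_coef * f.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Under this setup, each column of the trained decoder weight acts as one dictionary direction in activation space; in interpretability practice such features are typically examined by finding the inputs (here, SMILES tokens or molecules) that activate them most strongly.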
Similar Papers
Unveiling Latent Knowledge in Chemistry Language Models through Sparse Autoencoders
Machine Learning (CS)
Unlocks AI's hidden chemical knowledge for faster discoveries.
Teaching Language Models Mechanistic Explainability Through Arrow-Pushing
Machine Learning (CS)
Teaches computers to predict how chemicals change.
Mechanistic Interpretability for Transformer-based Time Series Classification
Machine Learning (CS)
Shows how AI learns to predict patterns.