Circuits, Features, and Heuristics in Molecular Transformers

Published: December 10, 2025 | arXiv ID: 2512.09757v1

By: Kristof Varadi, Mark Marosi, Peter Antal

Potential Business Impact:

Explains how AI models that generate drug-like molecules work internally, supporting safer and more reliable drug design.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Transformers generate valid and diverse chemical structures, but little is known about the mechanisms that enable these models to capture the rules of molecular representation. We present a mechanistic analysis of autoregressive transformers trained on drug-like small molecules to reveal the computational structure underlying their capabilities across multiple levels of abstraction. We identify computational patterns consistent with low-level syntactic parsing and more abstract chemical validity constraints. Using sparse autoencoders (SAEs), we extract feature dictionaries associated with chemically relevant activation patterns. We validate our findings on downstream tasks and find that mechanistic insights can translate to predictive performance in various practical settings.
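The core technique in the abstract is training sparse autoencoders (SAEs) on the activations of a molecular transformer to extract a dictionary of interpretable features. The sketch below is not the authors' code; it is a minimal, hedged illustration of that idea, assuming standard PyTorch, a hypothetical activation dimension, and an L1 sparsity penalty as commonly used for SAEs.

```python
# Minimal sketch (assumption: not the paper's implementation) of a sparse
# autoencoder over transformer activations, as described in the abstract.
# Dimensions, hyperparameters, and the dummy data are illustrative only.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder whose hidden codes form a feature dictionary."""

    def __init__(self, d_model: int = 256, d_dict: int = 2048):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)   # activation -> feature codes
        self.decoder = nn.Linear(d_dict, d_model)   # feature codes -> reconstruction

    def forward(self, x: torch.Tensor):
        codes = torch.relu(self.encoder(x))         # non-negative, sparse features
        recon = self.decoder(codes)
        return recon, codes


def sae_loss(x, recon, codes, l1_coeff: float = 1e-3):
    """Reconstruction error plus an L1 penalty that encourages sparse features."""
    return ((recon - x) ** 2).mean() + l1_coeff * codes.abs().mean()


if __name__ == "__main__":
    # Hypothetical batch of residual-stream activations collected while the
    # transformer autoregressively generates drug-like molecules (e.g., SMILES).
    acts = torch.randn(64, 256)
    sae = SparseAutoencoder(d_model=256, d_dict=2048)
    opt = torch.optim.Adam(sae.parameters(), lr=1e-3)

    recon, codes = sae(acts)
    loss = sae_loss(acts, recon, codes)
    loss.backward()
    opt.step()
    print(f"loss={loss.item():.4f}, "
          f"active features per token={(codes > 0).float().sum(dim=-1).mean():.1f}")
```

In this setup, individual dictionary features would then be inspected for chemically meaningful activation patterns (for example, firing on ring-closure or branch tokens), which is the kind of analysis the abstract validates on downstream tasks.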

Country of Origin
🇭🇺 Hungary

Page Count
30 pages

Category
Computer Science:
Machine Learning (CS)