Pre-trained knowledge elevates large language models beyond traditional chemical reaction optimizers
By: Robert MacKnight, Jose Emilio Regio, Jeffrey G. Ethier, and more
Potential Business Impact:
AI helps chemists find new materials faster.
Modern optimization in experimental chemistry employs algorithmic search through black-box parameter spaces. Here we demonstrate that pre-trained knowledge in large language models (LLMs) fundamentally changes this paradigm. Using six fully enumerated categorical reaction datasets (768–5,684 experiments), we benchmark LLM-guided optimization (LLM-GO) against Bayesian optimization (BO) and random sampling. Frontier LLMs consistently match or exceed BO performance across five single-objective datasets, with advantages growing as parameter complexity increases and high-performing conditions become scarce (<5% of the space). BO retains superiority only for explicit multi-objective trade-offs. To understand these contrasting behaviors, we introduce a topology-agnostic information-theoretic framework that quantifies sampling diversity throughout optimization campaigns. This analysis reveals that LLMs maintain systematically higher exploration entropy than BO across all datasets while achieving superior performance, with advantages most pronounced in solution-scarce parameter spaces where high-entropy exploration typically fails, suggesting that pre-trained domain knowledge enables more effective navigation of chemical parameter space rather than replacing structured exploration strategies. To enable transparent benchmarking and community validation, we release Iron Mind (https://gomes.andrew.cmu.edu/iron-mind), a no-code platform for side-by-side evaluation of human, algorithmic, and LLM optimization campaigns, with public leaderboards and complete trajectories. Our findings establish that LLM-GO excels precisely where traditional methods struggle: complex categorical spaces that demand domain understanding rather than mathematical optimization.
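The abstract does not specify how the entropy framework is computed; as a rough, hedged illustration of how sampling diversity over categorical reaction conditions could be quantified in a topology-agnostic way, the Python sketch below measures the mean per-parameter Shannon entropy of the conditions sampled so far in a campaign. The function names, the dict-based encoding of a condition, and the averaging across parameters are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import Counter

def shannon_entropy(labels):
    """Shannon entropy (in bits) of a sequence of categorical choices."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def campaign_entropy(trajectory):
    """Mean per-parameter entropy of a campaign's sampled conditions.

    `trajectory` is a list of sampled conditions, each a dict mapping a
    categorical parameter name (e.g. 'ligand', 'base') to the chosen level.
    Because the metric depends only on the frequencies of the choices, it
    is agnostic to any topology or distance structure on the space.
    """
    params = list(trajectory[0].keys())
    return sum(shannon_entropy([cond[p] for cond in trajectory])
               for p in params) / len(params)

# Toy usage: a three-step campaign over two hypothetical categorical parameters.
steps = [
    {"ligand": "XPhos", "base": "KOtBu"},
    {"ligand": "SPhos", "base": "KOtBu"},
    {"ligand": "XPhos", "base": "Cs2CO3"},
]
print(f"mean exploration entropy: {campaign_entropy(steps):.3f} bits")
```

Under this reading, an optimizer that keeps revisiting the same categorical levels drives the metric toward zero, while one that spreads its samples across levels keeps it high, which is the sense in which the paper contrasts the LLMs' high-entropy exploration with BO's.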
Similar Papers
Distilling and exploiting quantitative insights from Large Language Models for enhanced Bayesian optimization of chemical reactions
Machine Learning (CS)
Teaches computers to find better ways to make chemicals.
Large Language Models Transform Organic Synthesis From Reaction Prediction to Automation
Artificial Intelligence
AI helps scientists invent new things faster.
Large Scale Multi-Task Bayesian Optimization with Large Language Models
Machine Learning (CS)
AI learns from past jobs to do new ones better.