Leveraging Large Language Models for enzymatic reaction prediction and characterization
By: Lorenzo Di Fruscia, Jana Marie Weber
Potential Business Impact:
Helps computers predict how enzymes, the body's tiny molecular machines, carry out reactions.
Predicting enzymatic reactions is crucial for applications in biocatalysis, metabolic engineering, and drug discovery, yet it remains a complex and resource-intensive task. Large Language Models (LLMs) have recently demonstrated remarkable success in various scientific domains, e.g., through their ability to generalize knowledge, reason over complex structures, and leverage in-context learning strategies. In this study, we systematically evaluate the capability of LLMs, particularly the Llama-3.1 family (8B and 70B), across three core biochemical tasks: Enzyme Commission (EC) number prediction, forward synthesis, and retrosynthesis. We compare single-task and multitask learning strategies, employing parameter-efficient fine-tuning via LoRA adapters. Additionally, we assess performance across different data regimes to explore model adaptability in low-data settings. Our results demonstrate that fine-tuned LLMs capture biochemical knowledge, with multitask learning enhancing forward- and retrosynthesis predictions by leveraging shared enzymatic information. We also identify key limitations, for example, challenges with the hierarchical EC classification scheme, highlighting areas for further improvement in LLM-driven biochemical modeling.
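The hierarchical difficulty noted above comes from the structure of EC numbers themselves: each is a four-level, dot-separated code (e.g. 1.1.1.1), so a model can be right at the class level but wrong at deeper levels. A minimal sketch of how such predictions might be scored level by level is below; the function name and toy data are illustrative assumptions, not taken from the paper's evaluation code.

```python
# Hypothetical sketch: level-wise accuracy for hierarchical EC number
# predictions. EC numbers have four dot-separated levels (e.g. "1.1.1.1");
# a prediction is counted correct at level k only if all levels up to k
# match the ground truth, reflecting the hierarchy.

def ec_level_accuracy(true_ecs, pred_ecs):
    """Return accuracy at each of the four EC hierarchy levels."""
    totals = [0, 0, 0, 0]
    for true_ec, pred_ec in zip(true_ecs, pred_ecs):
        t_parts = true_ec.split(".")
        p_parts = pred_ec.split(".")
        for k in range(4):
            if t_parts[:k + 1] == p_parts[:k + 1]:
                totals[k] += 1
            else:
                break  # a mismatch at level k invalidates deeper levels
    n = len(true_ecs)
    return [round(t / n, 3) for t in totals]

# Toy example: two exact matches, one prediction wrong at the third level.
true = ["1.1.1.1", "2.7.7.6", "3.4.21.4"]
pred = ["1.1.1.1", "2.7.7.6", "3.4.22.1"]
print(ec_level_accuracy(true, pred))  # → [1.0, 1.0, 0.667, 0.667]
```

Under this kind of prefix-matching metric, accuracy necessarily decreases (or stays flat) at deeper levels, which is one way the hierarchical challenges mentioned in the abstract can be made visible.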
Similar Papers
Large Language Models Transform Organic Synthesis From Reaction Prediction to Automation
Artificial Intelligence
AI helps scientists invent new things faster.
Enhancing Chemical Reaction and Retrosynthesis Prediction with Large Language Model and Dual-task Learning
Machine Learning (CS)
Helps scientists invent new medicines faster.
Chemical reasoning in LLMs unlocks strategy-aware synthesis planning and reaction mechanism elucidation
Artificial Intelligence
Computers plan chemical reactions like expert scientists.