Bridging Molecular Graphs and Large Language Models
By: Runze Wang, Mingqi Yang, Yanming Shen
Potential Business Impact:
Lets computers understand chemical structures like words.
While Large Language Models (LLMs) have shown exceptional generalization capabilities, their ability to process graph data, such as molecular structures, remains limited. To bridge this gap, this paper proposes Graph2Token, an efficient solution that aligns graph tokens to LLM tokens. The key idea is to represent a graph token with the LLM token vocabulary, without fine-tuning the LLM backbone. To achieve this goal, we first construct a molecule-text paired dataset from multiple sources, including ChEBI and HMDB, to train a graph structure encoder, which reduces the distance between graph and text representations in the feature space. Then, we propose a novel alignment strategy that associates a graph token with LLM tokens. To further unleash the potential of LLMs, we collect molecular IUPAC name identifiers, which are incorporated into the LLM prompts. By representing molecular graphs as special tokens, we activate the LLM's generalization ability for molecular few-shot learning. Extensive experiments on molecular classification and regression tasks demonstrate the effectiveness of the proposed Graph2Token.
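To make the alignment idea concrete, below is a minimal sketch of one plausible reading of it: a pooled graph embedding is projected into the LLM's embedding space and re-expressed as a softmax-weighted mixture of the frozen LLM vocabulary embeddings, so the backbone receives an input that lives in its own token space. All names here (Graph2TokenAligner, graph_dim, llm_embeddings) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Graph2TokenAligner(nn.Module):
    """Maps a pooled molecular-graph embedding into the LLM token space by
    expressing it as a convex combination of frozen LLM vocabulary embeddings.
    A hypothetical sketch of the alignment step, not the paper's code."""

    def __init__(self, graph_dim: int, llm_embeddings: torch.Tensor):
        super().__init__()
        vocab_size, llm_dim = llm_embeddings.shape
        # Frozen LLM token embedding table: (vocab_size, llm_dim).
        self.register_buffer("llm_embeddings", llm_embeddings)
        # Trainable projection from graph-encoder space to LLM space;
        # only this (plus the graph encoder) is trained, the LLM stays frozen.
        self.proj = nn.Linear(graph_dim, llm_dim)

    def forward(self, graph_emb: torch.Tensor) -> torch.Tensor:
        # graph_emb: (batch, graph_dim), pooled output of the graph encoder.
        query = self.proj(graph_emb)                  # (batch, llm_dim)
        # Similarity of the graph query to every LLM vocabulary embedding.
        scores = query @ self.llm_embeddings.T        # (batch, vocab_size)
        weights = F.softmax(scores, dim=-1)           # attention over vocab
        # The aligned "graph token": a mixture of existing LLM token vectors.
        return weights @ self.llm_embeddings          # (batch, llm_dim)

# Usage: the aligned graph token would be spliced into the prompt's token
# embeddings (alongside text such as the IUPAC name) before the frozen LLM.
if __name__ == "__main__":
    vocab = torch.randn(32000, 4096)      # stand-in for a frozen LLM vocab
    aligner = Graph2TokenAligner(graph_dim=300, llm_embeddings=vocab)
    graph_emb = torch.randn(2, 300)       # stand-in for graph-encoder output
    graph_token = aligner(graph_emb)      # (2, 4096), ready to prepend
```

One design consequence of this formulation: because the output is built only from existing vocabulary embeddings, the graph token stays inside the distribution the frozen backbone was trained on, which is what lets the method avoid fine-tuning the LLM.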
Similar Papers
GraphT5: Unified Molecular Graph-Language Modeling via Multi-Modal Cross-Token Attention
Machine Learning (CS)
Helps computers understand molecules better for new drugs.
Actions Speak Louder than Prompts: A Large-Scale Study of LLMs for Graph Inference
Computation and Language
Computers learn better from connected information.
Bridging Code Graphs and Large Language Models for Better Code Understanding
Computation and Language
Helps computers understand computer code better.