MV-CLAM: Multi-View Molecular Interpretation with Cross-Modal Projection via Language Model
By: Sumin Ha, Jun Hyeong Kim, Yinhua Piao, and more
Potential Business Impact:
Helps computers understand chemicals better.
Human expertise in chemistry and biomedicine relies on contextual molecular understanding, a capability that large language models (LLMs) can extend through fine-grained alignment between molecular structures and text. Recent multimodal learning advances focus on cross-modal alignment, but existing molecule-text models ignore complementary information in different molecular views and rely on single-view representations, limiting molecular understanding. Moreover, naïve multi-view alignment strategies face two challenges: (1) separate aligned spaces with inconsistent mappings between molecule and text embeddings, and (2) loss objectives that fail to preserve complementary information for fine-grained alignment. This can limit the LLM's ability to fully understand molecular properties. To address these issues, we propose MV-CLAM, a novel framework that aligns multi-view molecular representations into a unified textual space using a multi-query transformer (MQ-Former). Our approach ensures cross-view consistency while a token-level contrastive loss preserves diverse molecular features across textual queries. MV-CLAM enhances molecular reasoning, improving retrieval and captioning accuracy. The source code of MV-CLAM is available at https://github.com/sumin124/mv-clam.git.
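To make the token-level contrastive objective concrete, here is a minimal NumPy sketch of one plausible form of such a loss: each molecule is represented by several query-token outputs (as a Q-Former-style module would produce), a molecule-text similarity is taken as the maximum over those tokens, and a symmetric InfoNCE loss matches each molecule with its own caption. The function names, shapes, and max-over-tokens choice are illustrative assumptions, not MV-CLAM's exact implementation.

```python
import numpy as np

def logsumexp(x, axis, keepdims=False):
    # Numerically stable log-sum-exp used by the softmax terms below.
    m = x.max(axis=axis, keepdims=True)
    out = m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))
    return out if keepdims else np.squeeze(out, axis=axis)

def token_level_contrastive_loss(query_tokens, text_embeds, temperature=0.07):
    """Token-level contrastive (InfoNCE) loss sketch.

    query_tokens: (B, Q, D) -- Q query-token outputs per molecule
    text_embeds:  (B, D)    -- one text embedding per caption

    The (molecule i, text j) similarity is the max cosine similarity over
    molecule i's query tokens, so different tokens can specialize in
    different molecular features (a hypothetical simplification).
    """
    q = query_tokens / np.linalg.norm(query_tokens, axis=-1, keepdims=True)
    t = text_embeds / np.linalg.norm(text_embeds, axis=-1, keepdims=True)
    # sim[i, j] = best-matching query token of molecule i against text j
    sim = np.einsum('bqd,cd->bcq', q, t).max(axis=-1) / temperature  # (B, B)
    # Symmetric InfoNCE: the diagonal holds the matched molecule-text pairs.
    logp_m2t = sim - logsumexp(sim, axis=1, keepdims=True)  # molecule -> text
    logp_t2m = sim - logsumexp(sim, axis=0, keepdims=True)  # text -> molecule
    B = sim.shape[0]
    return -(np.trace(logp_m2t) + np.trace(logp_t2m)) / (2 * B)

# Example: random embeddings give a near-chance loss; copying a query token
# into the text embedding (perfect alignment) drives the loss down.
rng = np.random.default_rng(0)
queries = rng.normal(size=(4, 3, 8))          # 4 molecules, 3 tokens, dim 8
random_text = rng.normal(size=(4, 8))
aligned_text = queries[:, 0, :]               # text matches the first token
loss_random = token_level_contrastive_loss(queries, random_text)
loss_aligned = token_level_contrastive_loss(queries, aligned_text)
```

Taking the max over query tokens (rather than pooling them) is what makes the objective "token-level": each caption only needs to agree strongly with the most relevant query, which is one way to preserve complementary information across views.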
Similar Papers
MV-MLM: Bridging Multi-View Mammography and Language for Breast Cancer Diagnosis and Risk Prediction
CV and Pattern Recognition
Helps doctors find breast cancer faster.
$\text{M}^{2}$LLM: Multi-view Molecular Representation Learning with Large Language Models
Machine Learning (CS)
Helps find new medicines by understanding molecules better.
mCLM: A Function-Infused and Synthesis-Friendly Modular Chemical Language Model
Artificial Intelligence
Finds better medicines by building with molecule parts.