Score: 2

Prot2Text-V2: Protein Function Prediction with Multimodal Contrastive Alignment

Published: May 16, 2025 | arXiv ID: 2505.11194v1

By: Xiao Fei, Michail Chatzianastasis, Sarah Almeida Carneiro, and more

Potential Business Impact:

Explains what proteins, the body's tiny molecular machines, do in plain English.

Business Areas:
Translation Service, Professional Services

Predicting protein function from sequence is a central challenge in computational biology. Existing methods rely heavily on structured ontologies or similarity-based techniques and often lack the flexibility to express structure-free functional descriptions or to capture novel biological functions. In this work, we introduce Prot2Text-V2, a novel multimodal sequence-to-text model that generates free-form natural language descriptions of protein function directly from amino acid sequences. Our method combines a protein language model (ESM-3B) as the sequence encoder with a decoder-only language model (LLaMA-3.1-8B-Instruct) through a lightweight nonlinear modality projector. A key innovation is our Hybrid Sequence-level Contrastive Alignment Learning (H-SCALE), which improves cross-modal learning by matching mean- and std-pooled protein embeddings with text representations via a contrastive loss. After the alignment phase, we apply instruction-based fine-tuning with LoRA on the decoder to teach the model to generate accurate protein function descriptions conditioned on the protein sequence. We train Prot2Text-V2 on about 250K curated entries from SwissProt and evaluate it under low-homology conditions, where test sequences share little similarity with the training samples. Prot2Text-V2 consistently outperforms traditional and LLM-based baselines across a range of metrics.
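
The H-SCALE alignment stage can be pictured concretely. The PyTorch sketch below shows the general shape of the idea under stated assumptions: per-residue encoder embeddings are mean- and std-pooled into a sequence-level vector, mapped through a small nonlinear projector, and matched to text embeddings with a symmetric InfoNCE-style contrastive loss. The embedding dimensions, projector width, and temperature are illustrative assumptions, not the paper's reported hyperparameters.

```python
# Minimal sketch of an H-SCALE-style alignment objective: hybrid (mean + std)
# pooling of protein residue embeddings, a lightweight nonlinear projector,
# and a symmetric contrastive loss against text embeddings. All sizes and the
# temperature are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityProjector(nn.Module):
    """Nonlinear projector from pooled protein space to text embedding space."""
    def __init__(self, prot_dim: int, text_dim: int, hidden: int = 2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * prot_dim, hidden),  # 2x: concatenated mean + std pools
            nn.GELU(),
            nn.Linear(hidden, text_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def hybrid_pool(residue_emb: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Mean- and std-pool per-residue embeddings over valid (non-pad) positions.

    residue_emb: (batch, seq_len, prot_dim) encoder outputs (e.g. from ESM).
    mask:        (batch, seq_len), 1 for real residues, 0 for padding.
    """
    mask = mask.unsqueeze(-1).float()
    count = mask.sum(dim=1).clamp(min=1.0)
    mean = (residue_emb * mask).sum(dim=1) / count
    var = ((residue_emb - mean.unsqueeze(1)) ** 2 * mask).sum(dim=1) / count
    std = (var + 1e-6).sqrt()
    return torch.cat([mean, std], dim=-1)  # (batch, 2 * prot_dim)

def contrastive_alignment_loss(prot_pooled, text_emb, projector, temperature=0.07):
    """Symmetric InfoNCE loss matching projected protein and text embeddings."""
    p = F.normalize(projector(prot_pooled), dim=-1)  # (batch, text_dim)
    t = F.normalize(text_emb, dim=-1)                # (batch, text_dim)
    logits = p @ t.t() / temperature                 # pairwise similarities
    targets = torch.arange(p.size(0), device=p.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example with assumed dimensions (2560 for an ESM-scale encoder, 4096 for an
# 8B-scale decoder); real values depend on the chosen checkpoints.
emb = torch.randn(4, 512, 2560)
mask = torch.ones(4, 512)
txt = torch.randn(4, 4096)  # e.g. pooled text-encoder representations
proj = ModalityProjector(prot_dim=2560, text_dim=4096)
loss = contrastive_alignment_loss(hybrid_pool(emb, mask), txt, proj)
```

The subsequent instruction-tuning stage attaches LoRA adapters to the decoder. A hypothetical configuration using the Hugging Face `peft` library could look as follows; the rank, alpha, dropout, and target modules are assumptions rather than the paper's reported settings.

```python
# Hypothetical LoRA setup for the instruction-tuning stage via `peft`;
# hyperparameters and target modules are illustrative assumptions.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

decoder = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
decoder = get_peft_model(decoder, lora_cfg)  # only adapter weights train
decoder.print_trainable_parameters()
```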

Country of Origin
🇦🇪 🇫🇷 United Arab Emirates, France

Repos / Data Links

Page Count
25 pages

Category
Computer Science:
Computational Engineering, Finance, and Science