ProtTeX-CC: Activating In-Context Learning in Protein LLM via Two-Stage Instruction Compression
By: Chuanliu Fan, Zicheng Ma, Jun Gao, and more
Potential Business Impact:
Helps computers understand proteins better and faster.
Recent advances in protein large language models, such as ProtTeX, represent both the amino acid sequence (side-chain identities) and the backbone structure as discrete token sequences of residue length. While this design enables unified modeling of multimodal protein information, it suffers from two major limitations: (1) The concatenation of sequence and structure tokens approximately doubles the protein length and breaks the intrinsic residue-level alignment between modalities. (2) Constrained by the training corpus and limited context window, ProtTeX is typically trained on single-protein inputs, rendering it incompatible with in-context learning (ICL) and thus limiting its generalization capability. To address these issues, we propose ProtTeX-CC, a lightweight two-stage compression framework designed to enhance ProtTeX under few-shot settings. We first design a joint embedding compression mechanism that fuses sequence and structure representations at the residue level, effectively halving the protein input length without sacrificing performance. We then propose a self-compression module that aggregates each full demonstration into the latent space of its last few linguistic tokens, reducing the average demonstration length from 751 tokens to fewer than 16 tokens. Compared to the original ProtTeX, our self-compression approach achieves a compression ratio of approximately 93.68% in total prompt length under the 16-shot setting. Without modifying the backbone model, ProtTeX-CC introduces only a small number of additional parameters: PEFT-based tuning in the joint embedding compression stage and a single trainable projection layer in the self-compression stage. Extensive experiments on protein function prediction show that ProtTeX-CC improves performance on the in-domain benchmark by 2% and generalizes well to the out-of-domain dataset with a performance gain of 11%.
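The abstract describes two compression stages: residue-level fusion of sequence and structure embeddings, and a projection of each demonstration's last few hidden states into a handful of soft tokens. The PyTorch sketch below illustrates one plausible reading of that pipeline; the module names, the linear fusion layer, the choice of the last k hidden states, and all dimensions are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of the two-stage compression outlined in the abstract.
# All names, shapes, and the fusion/projection choices are assumptions.
import torch
import torch.nn as nn

class JointEmbeddingCompressor(nn.Module):
    """Stage 1: fuse per-residue sequence and structure token embeddings,
    so a protein of L residues yields L fused embeddings instead of 2L."""
    def __init__(self, d_model: int):
        super().__init__()
        self.fuse = nn.Linear(2 * d_model, d_model)  # assumed fusion layer

    def forward(self, seq_emb: torch.Tensor, struct_emb: torch.Tensor) -> torch.Tensor:
        # seq_emb, struct_emb: (L, d_model), aligned at the residue level
        return self.fuse(torch.cat([seq_emb, struct_emb], dim=-1))  # (L, d_model)

class SelfCompressor(nn.Module):
    """Stage 2: project the hidden states of the last k linguistic tokens of a
    demonstration into k soft tokens that stand in for the full demonstration."""
    def __init__(self, d_model: int, k: int = 16):
        super().__init__()
        self.k = k
        self.proj = nn.Linear(d_model, d_model)  # the single trainable projection

    def forward(self, demo_hidden: torch.Tensor) -> torch.Tensor:
        # demo_hidden: (T, d_model) hidden states of one full demonstration
        return self.proj(demo_hidden[-self.k:])  # (k, d_model) compressed demo

# Usage: compress 16 demonstrations to 16 soft tokens each, then prepend them
# to the fused embeddings of the query protein before the (frozen) backbone.
d_model = 512  # toy dimension for the sketch
stage1, stage2 = JointEmbeddingCompressor(d_model), SelfCompressor(d_model, k=16)
demos = [torch.randn(751, d_model) for _ in range(16)]   # cached demo hidden states
query = stage1(torch.randn(300, d_model), torch.randn(300, d_model))
prompt = torch.cat([stage2(d) for d in demos] + [query], dim=0)
print(prompt.shape)  # (16*16 + 300, d_model) soft inputs vs. ~16*751 + 600 raw tokens
```

The quantitative claims (751 tokens per demonstration compressed to fewer than 16, and roughly 93.68% total prompt compression at 16 shots) come from the abstract itself; the shapes in the sketch only mirror that reduction qualitatively.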
Similar Papers
ProtTeX: Structure-In-Context Reasoning and Editing of Proteins with Large Language Models
Biomolecules
Helps computers understand and design proteins by reading their shapes.
Protein as a Second Language for LLMs
Machine Learning (CS)
Helps computers understand how proteins work.
Learning to Compress: Unlocking the Potential of Large Language Models for Text Representation
Computation and Language
Makes computers understand writing better for searching.