Score: 1

Reflection Pretraining Enables Token-Level Self-Correction in Biological Sequence Models

Published: December 24, 2025 | arXiv ID: 2512.20954v1

By: Xiang Zhang, Jiaqi Wei, Yuejin Yang, and more

Potential Business Impact:

Helps computers "think" to solve biology problems.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Chain-of-Thought (CoT) prompting has significantly advanced task-solving capabilities in natural language processing with large language models. Unlike standard prompting, CoT encourages the model to generate intermediate reasoning steps (non-answer tokens) that help guide the model toward more accurate final outputs. These intermediate steps enable more complex reasoning processes such as error correction, memory management, future planning, and self-reflection. However, applying CoT to non-natural-language domains, such as protein and RNA language models, has not yet been possible, primarily due to the limited expressiveness of their token spaces (e.g., amino acid tokens). In this work, we propose and define the concept of language expressiveness: the ability of a given language, using its tokens and grammar, to encode information. We show that the limited expressiveness of protein language severely restricts the applicability of CoT-style reasoning. To overcome this, we introduce reflection pretraining, for the first time in a biological sequence model, which enables the model to engage in intermediate reasoning through the generation of auxiliary "thinking tokens" beyond simple answer tokens. Theoretically, we demonstrate that our augmented token set significantly enhances biological language expressiveness, thereby improving the overall reasoning capacity of the model. Experimentally, our pretraining approach teaches protein models to self-correct and leads to substantial performance gains compared to standard pretraining.
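
To make the idea of an augmented token set concrete, here is a minimal Python sketch of how a protein vocabulary might be extended with auxiliary "thinking" tokens and how a reflection-style training example could be built. The token names (`<reflect>`, `</reflect>`) and the draft-then-correct scheme are assumptions for illustration only, not the paper's actual vocabulary or pretraining recipe.

```python
# Illustrative sketch only: token names and the corruption/correction scheme
# are hypothetical, chosen to show the shape of reflection pretraining.

import random

# Standard 20 amino-acid "answer" tokens.
AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")

# Hypothetical auxiliary "thinking" tokens that expand expressiveness
# beyond the answer space: one flags a suspected error, one closes the fix.
THINK_TOKENS = ["<reflect>", "</reflect>"]

VOCAB = ["<bos>", "<eos>"] + AMINO_ACIDS + THINK_TOKENS


def make_reflection_example(sequence: str, error_rate: float = 0.1) -> list[str]:
    """Turn a protein sequence into a reflection-style training target.

    With probability `error_rate`, a residue is first emitted incorrectly,
    then wrapped in reflection tokens and re-emitted correctly, so a model
    trained on such targets learns to flag and fix mistakes at the token level.
    """
    tokens = ["<bos>"]
    for residue in sequence:
        if random.random() < error_rate:
            wrong = random.choice([a for a in AMINO_ACIDS if a != residue])
            # Draft (wrong) token, then a self-correction span.
            tokens += [wrong, "<reflect>", residue, "</reflect>"]
        else:
            tokens.append(residue)
    tokens.append("<eos>")
    return tokens


if __name__ == "__main__":
    random.seed(0)
    print(make_reflection_example("MKTAYIAKQR"))
```

In this reading, the extra tokens do not change the final answer space; they only give the model room to express intermediate "revise this" steps, which is the expressiveness gain the abstract argues for.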

Country of Origin
🇨🇳 🇨🇦 China, Canada

Page Count
23 pages

Category
Computer Science:
Computation and Language