Score: 3

End-to-End Test-Time Training for Long Context

Published: December 29, 2025 | arXiv ID: 2512.23675v1

By: Arnuv Tandon, Karan Dalal, Xinhao Li, and more

BigTech Affiliations: Stanford University; University of California, Berkeley

Potential Business Impact:

Lets computers remember long stories by learning as they read.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We formulate long-context language modeling as a problem in continual learning rather than architecture design. Under this formulation, we only use a standard architecture -- a Transformer with sliding-window attention. However, our model continues learning at test time via next-token prediction on the given context, compressing the context it reads into its weights. In addition, we improve the model's initialization for learning at test time via meta-learning at training time. Overall, our method, a form of Test-Time Training (TTT), is End-to-End (E2E) both at test time (via next-token prediction) and training time (via meta-learning), in contrast to previous forms. We conduct extensive experiments with a focus on scaling properties. In particular, for 3B models trained with 164B tokens, our method (TTT-E2E) scales with context length in the same way as Transformer with full attention, while others, such as Mamba 2 and Gated DeltaNet, do not. However, similar to RNNs, TTT-E2E has constant inference latency regardless of context length, making it 2.7 times faster than full attention for 128K context. Our code is publicly available.
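To make the core idea concrete, below is a minimal sketch of what "learning at test time via next-token prediction on the given context" could look like. Everything here is a hypothetical illustration: the TinyLM toy model, the chunk size, the SGD optimizer, and the learning rate are placeholders, not the paper's actual setup (which uses a Transformer with sliding-window attention and a meta-learned initialization).

```python
# Illustrative sketch of test-time training (TTT): before answering, the model
# takes gradient steps on next-token prediction over the given context,
# compressing that context into its weights.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyLM(nn.Module):
    """A toy causal language model used only to make the sketch runnable."""

    def __init__(self, vocab_size: int = 256, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)  # (batch, seq, vocab) logits


def test_time_train(model: nn.Module, context: torch.Tensor,
                    chunk_size: int = 128, lr: float = 1e-3,
                    steps_per_chunk: int = 1) -> nn.Module:
    """Update the model's weights on successive chunks of the context using a
    standard next-token prediction loss (hypothetical hyperparameters)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for start in range(0, context.size(1) - 1, chunk_size):
        chunk = context[:, start:start + chunk_size + 1]
        if chunk.size(1) < 2:
            break
        inputs, targets = chunk[:, :-1], chunk[:, 1:]
        for _ in range(steps_per_chunk):
            logits = model(inputs)
            loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                   targets.reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model


if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyLM()
    # A stand-in "long context" of random token ids; in practice this would be
    # the document the model is asked to read.
    context = torch.randint(0, 256, (1, 1024))
    test_time_train(model, context)
    # After TTT, inference uses the updated weights; here we just greedily
    # predict one next token from the tail of the context.
    with torch.no_grad():
        next_token = model(context[:, -64:])[:, -1].argmax(dim=-1)
    print("predicted next token id:", next_token.item())
```

Because the per-chunk update cost does not grow with how much context has already been read, this kind of loop keeps inference latency roughly constant in context length, which is the property the abstract contrasts with full attention.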

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
29 pages

Category
Computer Science:
Machine Learning (CS)