End-to-End Test-Time Training for Long Context
By: Arnuv Tandon, Karan Dalal, Xinhao Li, and more
Potential Business Impact:
Lets computers remember long stories by learning as they read.
We formulate long-context language modeling as a problem in continual learning rather than architecture design. Under this formulation, we use only a standard architecture: a Transformer with sliding-window attention. However, our model continues learning at test time via next-token prediction on the given context, compressing the context it reads into its weights. In addition, we improve the model's initialization for learning at test time via meta-learning at training time. Overall, our method, a form of Test-Time Training (TTT), is End-to-End (E2E) both at test time (via next-token prediction) and at training time (via meta-learning), in contrast to previous forms. We conduct extensive experiments with a focus on scaling properties. In particular, for 3B models trained with 164B tokens, our method (TTT-E2E) scales with context length in the same way as a Transformer with full attention, while others, such as Mamba 2 and Gated DeltaNet, do not. However, similar to RNNs, TTT-E2E has constant inference latency regardless of context length, making it 2.7 times faster than full attention at 128K context. Our code is publicly available.
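To make the test-time step concrete, below is a minimal PyTorch sketch of the idea the abstract describes: before answering, the model takes gradient steps of next-token prediction on chunks of the given context, so the context is compressed into its weights. This is an illustrative sketch, not the authors' released code; the function name, chunk size, optimizer choice, learning rate, number of inner steps, and the HuggingFace-style .logits interface are all assumptions.

    import torch
    import torch.nn.functional as F

    def test_time_train(model, context_ids, chunk_size=2048, lr=1e-4, inner_steps=1):
        """Compress a long context into the model's weights via next-token prediction.

        model       : a causal LM. The paper uses a Transformer with sliding-window
                      attention and a meta-learned initialization; any causal LM
                      works for this sketch. Updating all parameters here is a
                      simplifying assumption.
        context_ids : LongTensor of shape (1, T) holding the tokenized context.
        """
        model.train()
        opt = torch.optim.SGD(model.parameters(), lr=lr)  # optimizer choice is an assumption

        # Stream the context in chunks, taking gradient steps of next-token
        # prediction on each chunk -- the "learning as it reads" step.
        for start in range(0, context_ids.size(1) - 1, chunk_size):
            chunk = context_ids[:, start : start + chunk_size + 1]
            if chunk.size(1) < 2:  # need at least one (input, target) pair
                break
            inputs, targets = chunk[:, :-1], chunk[:, 1:]
            for _ in range(inner_steps):
                logits = model(inputs).logits  # assumes a HuggingFace-style output object
                loss = F.cross_entropy(
                    logits.reshape(-1, logits.size(-1)), targets.reshape(-1)
                )
                opt.zero_grad()
                loss.backward()
                opt.step()

        model.eval()
        return model  # weights now encode the context; generate from the model as usual

Because the context is stored in the weights rather than in a growing attention cache, generation after this loop runs at constant latency regardless of how long the context was, which is the source of the speedup over full attention noted above.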
Similar Papers
ETT: Expanding the Long Context Understanding Capability of LLMs at Test-Time
Computation and Language
Lets computers understand much longer stories.
Test-Time Training Done Right
Machine Learning (CS)
Lets computers remember more for better results.
Let's (not) just put things in Context: Test-Time Training for Long-Context LLMs
Machine Learning (CS)
Helps computers remember and use more information.