ETT: Expanding the Long Context Understanding Capability of LLMs at Test-Time
By: Kiarash Zahirnia, Zahra Golpayegani, Walid Ahmed and more
Potential Business Impact:
Lets computers understand much longer stories.
Transformer-based Language Models' computation and memory overheads increase quadratically with sequence length. This quadratic cost poses challenges when employing LLMs to process long sequences. In this work, we introduce ETT (Extend at Test-Time), a method for extending the context length of short-context Transformer-based LLMs with constant memory requirements and linear computation overhead. ETT enables context-length extension at test time by efficiently fine-tuning the model's parameters on the input context, chunked into overlapping small subsequences. We evaluate ETT on LongBench by extending the context length of GPT-Large and Phi-2 up to 32 times, from 1k to 32k tokens, which yields up to a 30 percent improvement in the models' accuracy. We also study how context can be stored in an LLM's weights effectively and efficiently. Through a detailed ablation study, we examine which Transformer modules are most beneficial to fine-tune at test time. Interestingly, we find that fine-tuning only the second layer of the FFNs is more effective than full fine-tuning, leading to a further improvement in the models' accuracy.
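For intuition, here is a minimal sketch of the test-time procedure described in the abstract, assuming a Hugging Face GPT-2-style causal LM in which `mlp.c_proj` is the second FFN linear layer; the chunk length, overlap, learning rate, and model choice are illustrative assumptions, not values taken from the paper.

```python
# Sketch: extend context at test time by fine-tuning only the second FFN layer
# of each block on overlapping chunks of the long input (assumptions labeled above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2-large"  # stand-in short-context model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

# Freeze everything except the second linear layer of each FFN,
# following the ablation finding reported in the abstract.
for name, p in model.named_parameters():
    p.requires_grad = name.endswith("mlp.c_proj.weight") or name.endswith("mlp.c_proj.bias")

opt = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)

def extend_at_test_time(long_context: str, chunk_len: int = 1024, overlap: int = 128):
    """Fine-tune the selected weights on overlapping chunks of the input context."""
    ids = tok(long_context, return_tensors="pt").input_ids[0]
    step = chunk_len - overlap
    for start in range(0, max(len(ids) - overlap, 1), step):
        chunk = ids[start:start + chunk_len].unsqueeze(0)
        out = model(chunk, labels=chunk)  # next-token prediction loss on the chunk
        out.loss.backward()
        opt.step()
        opt.zero_grad()
```

Because only one linear layer per block receives gradients, the memory footprint stays roughly constant regardless of the total context length, and the cost grows linearly with the number of chunks, matching the constant-memory, linear-compute claim in the abstract.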
Similar Papers
End-to-End Test-Time Training for Long Context
Machine Learning (CS)
Lets computers remember long stories by learning as they read.
Let's (not) just put things in Context: Test-Time Training for Long-Context LLMs
Machine Learning (CS)
Helps computers remember and use more information.
Test-Time Training Done Right
Machine Learning (CS)
Lets computers remember more for better results.