Systematic Evaluation of Optimization Techniques for Long-Context Language Models
By: Ammar Ahmed, Sheng Di, Franck Cappello, and others
Potential Business Impact:
Speeds up AI thinking without losing smarts.
Large language models (LLMs) excel across diverse natural language processing tasks but face steep resource demands and limited context windows. Although techniques such as pruning, quantization, and token dropping can mitigate these issues, their efficacy in long-context scenarios and their system-level behavior remain underexplored. This paper systematically benchmarks these optimizations, characterizing memory usage, latency, and throughput, and studies how each method affects text generation quality. We first analyze individual optimization methods on two LLM architectures that support long contexts, then systematically evaluate combinations of these techniques to assess their joint impact on performance metrics. We subsequently study the scalability of the individual methods on a larger, 70-billion-parameter model variant. Our analysis reveals that naively combining inference optimization algorithms can degrade larger models more than their smaller counterparts due to compounded approximation errors. Experiments also show that relying solely on F1 obscures these effects by hiding precision-recall trade-offs in question answering tasks. By integrating system-level profiling with task-specific insights, this study helps LLM practitioners and researchers balance efficiency, accuracy, and scalability across tasks and hardware configurations.
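To illustrate the F1 point above: F1 is the harmonic mean of precision and recall, so two systems with opposite precision-recall trade-offs can report an identical F1. The sketch below is a hypothetical illustration (the numbers are not from the paper):

```python
# Hypothetical illustration: F1 alone can hide precision-recall trade-offs.
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# System A: high precision, low recall. System B: the reverse.
a = f1(precision=0.8, recall=0.5)
b = f1(precision=0.5, recall=0.8)
print(round(a, 3), round(b, 3))  # both round to 0.615
```

Both systems score the same F1 even though they behave very differently, which is why the paper argues for reporting precision and recall separately when comparing optimized models.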
Similar Papers
Let's (not) just put things in Context: Test-Time Training for Long-Context LLMs
Machine Learning (CS)
Helps computers remember and use more information.
A Survey on Transformer Context Extension: Approaches and Evaluation
Computation and Language
Helps computers understand long stories better.
Sentence-Anchored Gist Compression for Long-Context LLMs
Computation and Language
Makes computers understand longer stories with less effort.