MoSKA: Mixture of Shared KV Attention for Efficient Long-Sequence LLM Inference
By: Myunghyun Rhee, Sookyung Choi, Euiseok Kim, and more
Potential Business Impact:
Lets AI handle much longer texts much faster.
The escalating context length in Large Language Models (LLMs) creates a severe performance bottleneck around the Key-Value (KV) cache, whose memory-bound nature leads to significant GPU under-utilization. This paper introduces Mixture of Shared KV Attention (MoSKA), an architecture that addresses this challenge by exploiting the heterogeneity of context data: it differentiates between per-request unique sequences and massively reused shared sequences. The core of MoSKA is a novel Shared KV Attention mechanism that transforms attention over shared data from a series of memory-bound GEMV operations into a single, compute-bound GEMM by batching concurrent requests. This is supported by an MoE-inspired sparse attention strategy that prunes the search space and a tailored Disaggregated Infrastructure that specializes hardware for unique and shared data. Together, these techniques demonstrate a throughput increase of up to 538.7x over baselines in workloads with high context sharing, offering a clear architectural path toward scalable LLM inference.
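The GEMV-to-GEMM batching idea in the abstract can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it assumes a single attention head, illustrative sizes (`d_head`, `shared_len`, `num_requests`), and standard scaled-dot-product attention, and only shows that per-request query products against one shared K/V cache (a series of GEMVs) give the same result as stacking the queries and issuing a single GEMM over that shared cache.

```python
# Minimal sketch (assumed shapes and names, not the authors' code) of batching
# attention over a SHARED KV cache: many concurrent requests reuse the same
# K/V, so their per-request GEMVs can be fused into one GEMM.
import numpy as np

d_head = 128          # head dimension (illustrative)
shared_len = 4096     # tokens in the shared, reused KV segment (illustrative)
num_requests = 32     # concurrent requests attending to the same cache

rng = np.random.default_rng(0)
K_shared = rng.standard_normal((shared_len, d_head)).astype(np.float32)
V_shared = rng.standard_normal((shared_len, d_head)).astype(np.float32)
queries = rng.standard_normal((num_requests, d_head)).astype(np.float32)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Memory-bound path: one GEMV per request against the same shared K/V.
out_gemv = np.stack([
    softmax(q @ K_shared.T / np.sqrt(d_head)) @ V_shared for q in queries
])

# Compute-bound path: stack all queries and run a single GEMM over the shared KV.
scores = queries @ K_shared.T / np.sqrt(d_head)   # (num_requests, shared_len)
out_gemm = softmax(scores, axis=-1) @ V_shared    # (num_requests, d_head)

assert np.allclose(out_gemv, out_gemm, atol=1e-4)  # same results, one fused GEMM
```

On real hardware the batched GEMM reads the shared K/V from memory once for all requests instead of once per request, which is what shifts the operation from memory-bound toward compute-bound, as the abstract describes.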
Similar Papers
Sparse Attention across Multiple-context KV Cache
Machine Learning (CS)
Makes AI understand long texts faster and cheaper.
Retrospective Sparse Attention for Efficient Long-Context Generation
Computation and Language
Fixes AI mistakes in long stories.
LLMs Know What to Drop: Self-Attention Guided KV Cache Eviction for Efficient Long-Context Inference
Computation and Language
Makes AI understand long texts faster and cheaper.