Score: 1

MoSKA: Mixture of Shared KV Attention for Efficient Long-Sequence LLM Inference

Published: November 8, 2025 | arXiv ID: 2511.06010v1

By: Myunghyun Rhee, Sookyung Choi, Euiseok Kim, and more

BigTech Affiliations: SK Hynix

Potential Business Impact:

Lets AI models handle much longer inputs while serving many more users at once, much faster.

Business Areas:
Knowledge Management, Administrative Services

The escalating context length in Large Language Models (LLMs) creates a severe performance bottleneck around the Key-Value (KV) cache, whose memory-bound nature leads to significant GPU under-utilization. This paper introduces Mixture of Shared KV Attention (MoSKA), an architecture that addresses this challenge by exploiting the heterogeneity of context data: it differentiates between per-request unique sequences and massively reused shared sequences. The core of MoSKA is a novel Shared KV Attention mechanism that transforms attention over shared data from a series of memory-bound GEMV operations into a single, compute-bound GEMM by batching concurrent requests. This is supported by an MoE-inspired sparse attention strategy that prunes the search space and a tailored Disaggregated Infrastructure that specializes hardware for unique and shared data. This comprehensive approach demonstrates a throughput increase of up to 538.7x over baselines in workloads with high context sharing, offering a clear architectural path toward scalable LLM inference.
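To make the GEMV-to-GEMM transformation concrete, below is a minimal NumPy sketch of the batching idea described in the abstract. It is not the paper's implementation; the dimensions, tensor names, and softmax helper are illustrative assumptions. It contrasts one attention GEMV per request over a shared KV cache with a single batched GEMM in which all concurrent queries reuse the same shared keys and values.

```python
# Minimal sketch (assumption: not the authors' code) of the GEMV -> GEMM
# idea behind Shared KV Attention: when many concurrent requests attend
# over the SAME shared KV cache, their per-request query GEMVs can be
# batched into one compute-bound GEMM.
import numpy as np

d = 128          # head dimension (illustrative)
n_shared = 4096  # tokens in the shared KV cache (illustrative)
n_req = 32       # concurrent requests sharing this context (illustrative)

rng = np.random.default_rng(0)
K_shared = rng.standard_normal((n_shared, d))  # shared keys
V_shared = rng.standard_normal((n_shared, d))  # shared values
queries = rng.standard_normal((n_req, d))      # one query vector per request

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Memory-bound baseline: one GEMV per request, re-reading K/V each time.
out_gemv = np.stack([
    softmax(q @ K_shared.T / np.sqrt(d)) @ V_shared
    for q in queries
])

# Compute-bound alternative: batch all queries into a single GEMM,
# so the shared K/V matrices are read once and reused across requests.
scores = queries @ K_shared.T / np.sqrt(d)   # (n_req, n_shared) GEMM
out_gemm = softmax(scores) @ V_shared        # second GEMM

assert np.allclose(out_gemv, out_gemm, atol=1e-6)
```

The outputs are identical; the gain is purely in arithmetic intensity, since the shared K/V tensors are streamed from memory once per batch rather than once per request. Per-request unique context would still be handled separately, which is where the paper's disaggregated infrastructure comes in.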

Country of Origin
🇰🇷 South Korea

Page Count
4 pages

Category
Computer Science:
Machine Learning (CS)