OmniInfer: System-Wide Acceleration Techniques for Optimizing LLM Serving Throughput and Latency
By: Jun Wang, Yunxiang Yao, Wenwei Kuang, and more
Potential Business Impact:
Makes AI answer questions much faster.
Large Language Models drive a wide range of modern AI applications but impose substantial challenges on large-scale serving systems due to intensive computation, strict latency constraints, and throughput bottlenecks. We introduce OmniInfer, a unified system-level acceleration framework designed to maximize end-to-end serving efficiency through fine-grained optimization of expert placement, cache compression, and scheduling. OmniInfer integrates three complementary components: OmniPlacement for load-aware Mixture-of-Experts scheduling, OmniAttn for sparse attention acceleration, and OmniProxy for disaggregation-aware request scheduling. Built atop vLLM, OmniInfer delivers system-wide performance gains through adaptive resource disaggregation, efficient sparsity exploitation, and global coordination across prefill and decode phases. Evaluated on DeepSeek-R1 within a 10-node Ascend 910C cluster, OmniInfer achieves 616 QPM (queries per minute); the unified framework reduces TPOT (time per output token) by 36%, and layering OmniProxy on top further reduces TTFT (time to first token) by 38%. The project is open-sourced at https://gitee.com/omniai/omniinfer.
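To make the "load-aware Mixture-of-Experts scheduling" idea concrete, here is a minimal illustrative sketch of one way expert placement could be balanced against observed per-expert load. The greedy heuristic and all names (`place_experts`, `expert_load`) are assumptions made for illustration; they do not reflect OmniPlacement's actual algorithm or API.

```python
# Illustrative sketch only: greedily assign MoE experts to devices so that
# observed load is balanced. Hypothetical names; not OmniPlacement's code.
from typing import Dict, List


def place_experts(expert_load: Dict[int, float], num_devices: int) -> List[List[int]]:
    """Assign each expert to the currently least-loaded device (greedy balancing)."""
    placement: List[List[int]] = [[] for _ in range(num_devices)]
    device_load = [0.0] * num_devices
    # Place the hottest experts first so they spread across devices.
    for expert_id, load in sorted(expert_load.items(), key=lambda kv: -kv[1]):
        target = min(range(num_devices), key=lambda d: device_load[d])
        placement[target].append(expert_id)
        device_load[target] += load
    return placement


if __name__ == "__main__":
    # Hypothetical per-expert request counts gathered from recent traffic.
    load = {0: 120.0, 1: 40.0, 2: 95.0, 3: 10.0, 4: 60.0, 5: 75.0}
    print(place_experts(load, num_devices=2))
```

In a real serving system the load statistics would be refreshed continuously and placement changes weighed against expert-migration cost; this sketch only shows the balancing objective.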
Similar Papers
Towards Efficient Agents: A Co-Design of Inference Architecture and System
Computation and Language
Makes AI agents think and act much faster.
OmniLearn: A Framework for Distributed Deep Learning over Heterogeneous Clusters
Machine Learning (CS)
Makes machine learning faster on clusters of mixed machines.
High-Throughput LLM inference on Heterogeneous Clusters
Distributed, Parallel, and Cluster Computing
Makes AI answer questions much faster on different computers.