Score: 1

Automated Dynamic AI Inference Scaling on HPC-Infrastructure: Integrating Kubernetes, Slurm and vLLM

Published: November 26, 2025 | arXiv ID: 2511.21413v1

By: Tim Trappen, Robert Keßler, Roland Pabel, and more

Potential Business Impact:

Lets shared supercomputers serve AI inference to many users at once with only minor added latency.

Business Areas:
Intelligent Systems, Artificial Intelligence, Data and Analytics, Science and Engineering

Due to rising demand for Artificial Intelligence (AI) inference, especially in higher education, novel solutions utilising existing infrastructure are emerging. Using High-Performance Computing (HPC) has become a prevalent approach for implementing such solutions. However, the classical HPC operating model does not adapt well to the requirements of synchronous, user-facing, dynamic AI application workloads. In this paper, we propose a solution that serves LLMs by integrating vLLM, Slurm and Kubernetes on the supercomputer RAMSES. The initial benchmark indicates that the proposed architecture scales efficiently to 100, 500 and 1000 concurrent requests, adding only approximately 500 ms of end-to-end latency overhead.
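The benchmark measures end-to-end latency at 100, 500 and 1000 concurrent requests. The paper's own harness is not reproduced here, but a minimal sketch of such a measurement against a vLLM server's OpenAI-compatible completions API could look like the following; the endpoint URL and model name are placeholders, not values from the paper.

```python
"""Minimal concurrency benchmark sketch (not the authors' harness)."""
import asyncio
import time

import aiohttp

ENDPOINT = "http://localhost:8000/v1/completions"  # hypothetical vLLM endpoint
MODEL = "example-org/example-model"                # placeholder model name


async def one_request(session: aiohttp.ClientSession) -> float:
    """Send one completion request; return its end-to-end latency in seconds."""
    payload = {"model": MODEL, "prompt": "Hello", "max_tokens": 32}
    start = time.perf_counter()
    async with session.post(ENDPOINT, json=payload) as resp:
        await resp.json()  # wait for the full response body
    return time.perf_counter() - start


async def run_benchmark(concurrency: int) -> None:
    """Fire `concurrency` requests at once and report mean end-to-end latency."""
    # Raise the connection limit so all requests are truly concurrent
    # (aiohttp defaults to 100 simultaneous connections).
    connector = aiohttp.TCPConnector(limit=concurrency)
    async with aiohttp.ClientSession(connector=connector) as session:
        latencies = await asyncio.gather(
            *(one_request(session) for _ in range(concurrency))
        )
    mean_ms = 1000 * sum(latencies) / len(latencies)
    print(f"{concurrency:>5} concurrent requests: mean latency {mean_ms:.0f} ms")


if __name__ == "__main__":
    # The paper reports results for 100, 500 and 1000 concurrent requests.
    for n in (100, 500, 1000):
        asyncio.run(run_benchmark(n))
```

Comparing the mean latency across the three concurrency levels against a single-request baseline would surface the roughly 500 ms overhead the authors report.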

Country of Origin
🇩🇪 Germany

Repos / Data Links

Page Count
6 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing