Automated Dynamic AI Inference Scaling on HPC-Infrastructure: Integrating Kubernetes, Slurm and vLLM
By: Tim Trappen, Robert Keßler, Roland Pabel and more
Potential Business Impact:
Makes supercomputers run AI faster for many people.
Due to rising demands for Artificial Intelligence (AI) inference, especially in higher education, novel solutions utilising existing infrastructure are emerging. High-Performance Computing (HPC) has become a prevalent platform for implementing such solutions. However, the classical operating model of HPC does not adapt well to the requirements of synchronous, user-facing dynamic AI application workloads. In this paper, we propose a solution that serves large language models (LLMs) by integrating vLLM, Slurm and Kubernetes on the supercomputer RAMSES. An initial benchmark indicates that the proposed architecture scales efficiently for 100, 500 and 1000 concurrent requests, incurring an end-to-end latency overhead of only approximately 500 ms.
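The load levels quoted in the abstract suggest a simple way to reproduce this kind of measurement against a vLLM OpenAI-compatible endpoint. The sketch below is illustrative only, not the authors' benchmark harness: the endpoint URL, model name, prompt and token budget are placeholder assumptions.

```python
# Minimal sketch: end-to-end latency under concurrent load against a vLLM
# OpenAI-compatible completions endpoint. URL, model and prompt are assumptions.
import asyncio
import time

import aiohttp

BASE_URL = "http://localhost:8000/v1/completions"  # assumed vLLM serve endpoint
MODEL = "meta-llama/Llama-3.1-8B-Instruct"          # assumed model name


async def one_request(session: aiohttp.ClientSession) -> float:
    """Send one completion request and return its end-to-end latency in seconds."""
    payload = {"model": MODEL, "prompt": "Hello", "max_tokens": 32}
    start = time.perf_counter()
    async with session.post(BASE_URL, json=payload) as resp:
        await resp.json()
    return time.perf_counter() - start


async def run_benchmark(concurrency: int) -> None:
    """Fire `concurrency` requests at once and report the mean latency."""
    # Raise the connection limit so all requests are actually in flight together.
    connector = aiohttp.TCPConnector(limit=concurrency)
    async with aiohttp.ClientSession(connector=connector) as session:
        latencies = await asyncio.gather(
            *(one_request(session) for _ in range(concurrency))
        )
    print(f"{concurrency} concurrent requests: "
          f"mean end-to-end latency {sum(latencies) / len(latencies):.3f} s")


if __name__ == "__main__":
    for n in (100, 500, 1000):  # load levels reported in the abstract
        asyncio.run(run_benchmark(n))
```

Note that a comparison against the roughly 500 ms overhead reported in the paper would also require a baseline run of the same model served outside the Slurm/Kubernetes integration, which this sketch does not include.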
Similar Papers
Scalable Engine and the Performance of Different LLM Models in a SLURM based HPC architecture
Distributed, Parallel, and Cluster Computing
Makes smart computer programs run much faster.
Experience Deploying Containerized GenAI Services at an HPC Center
Distributed, Parallel, and Cluster Computing
Makes supercomputers run AI programs easily.
AIvailable: A Software-Defined Architecture for LLM-as-a-Service on Heterogeneous and Legacy GPUs
Distributed, Parallel, and Cluster Computing
Lets old computers run smart AI programs.