Experience Deploying Containerized GenAI Services at an HPC Center
By: Angel M. Beltre, Jeff Ogden, Kevin Pedretti
Generative Artificial Intelligence (GenAI) applications are built from specialized components (inference servers, object storage, vector and graph databases, and user interfaces) interconnected via web-based APIs. While these components are often containerized and deployed in cloud environments, such capabilities are still emerging at High-Performance Computing (HPC) centers. In this paper, we share our experience deploying GenAI workloads within an established HPC center and discuss the integration of HPC and cloud computing environments. We describe a converged computing architecture that integrates HPC and Kubernetes platforms to run containerized GenAI workloads, which improves reproducibility. A case study illustrates the deployment of the Llama Large Language Model (LLM) using a containerized inference server (vLLM) on both Kubernetes and HPC platforms with multiple container runtimes. Our experience highlights practical considerations and opportunities for the HPC container community, guiding future research and tool development.
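To make the web-API-based composition concrete, below is a minimal sketch of a client querying a vLLM inference server through its OpenAI-compatible HTTP endpoint, the interface such containerized inference servers commonly expose. The endpoint URL (localhost:8000) and the Llama model identifier are illustrative placeholders, not values taken from the paper; adjust them to match the actual deployment.

```python
import json
import urllib.request

# Assumed setup: a vLLM server listening on localhost:8000, e.g. launched
# inside a container with `vllm serve <model>`. Host, port, and model name
# below are placeholders for whatever the site's deployment actually uses.
VLLM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # hypothetical model identifier

def ask(prompt: str) -> str:
    """Send one chat-completion request to the vLLM server and return the text."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
        "temperature": 0.7,
    }
    req = urllib.request.Request(
        VLLM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style responses nest the generated text under choices[0].
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Briefly explain what an inference server does."))
```

Because the server speaks the OpenAI wire protocol, the same client code works whether vLLM is scheduled as a Kubernetes pod or launched on an HPC node under a different container runtime; only the endpoint URL changes.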