Efficiently Executing High-throughput Lightweight LLM Inference Applications on Heterogeneous Opportunistic GPU Clusters with Pervasive Context Management

Published: October 15, 2025 | arXiv ID: 2510.14024v1

By: Thanh Son Phung, Douglas Thain

Potential Business Impact:

Lets scientific applications that embed lightweight LLMs run substantially faster on existing GPU clusters by avoiding repeated model startup costs, cutting both compute time and cost.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

The rise of Generative AI introduces a new class of HPC workloads that integrates lightweight LLMs with traditional high-throughput applications to accelerate scientific discovery. The current design of HPC clusters, however, is inadequate for this new class: jobs either incur long wait times in static batch queues or repeatedly pay expensive LLM startup costs when resources are preempted. To circumvent both the long queues and the high startup costs, we propose to "decouple" the LLM initialization context from the actual LLM inferences and retain the context in GPUs until it is no longer needed, a technique we term "Pervasive Context Management". We transform a fact-verification application to enable this technique, allowing it to reduce its execution time by 72.1% (from 3 hours to 48 minutes) using the same number of GPUs, and to scale opportunistically onto 32.8% of all GPUs in the cluster, further reducing the execution time to 13 minutes.
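To make the idea concrete, below is a minimal sketch of the decoupling the abstract describes: a persistent GPU worker loads the LLM once (the "context") and then serves many lightweight inference tasks against it, so individual tasks never pay the startup cost. This is an illustrative assumption, not the authors' implementation; the class name, the queue-based interface, and the use of Hugging Face transformers are hypothetical choices.

```python
# Hypothetical sketch of "Pervasive Context Management": the expensive LLM
# initialization (the context) is decoupled from individual inferences and
# kept resident on the GPU for the worker's lifetime.
import queue


class PersistentLLMWorker:
    """Holds the LLM initialization context resident on a GPU and serves a
    stream of lightweight inference tasks against it."""

    def __init__(self, model_name: str, device: str = "cuda:0"):
        # Expensive one-time startup: weight loading, GPU transfer, tokenizer
        # setup. Paid once per worker lifetime, not once per inference task.
        from transformers import AutoModelForCausalLM, AutoTokenizer
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
        self.device = device
        self.tasks: "queue.Queue[str]" = queue.Queue()
        self.results: "queue.Queue[str]" = queue.Queue()

    def submit(self, prompt: str) -> None:
        # Tasks arrive from the high-throughput application; no model load here.
        self.tasks.put(prompt)

    def serve(self) -> None:
        # Inference loop: every task reuses the already-resident context.
        while not self.tasks.empty():
            prompt = self.tasks.get()
            inputs = self.tokenizer(prompt, return_tensors="pt").to(self.device)
            output = self.model.generate(**inputs, max_new_tokens=64)
            self.results.put(
                self.tokenizer.decode(output[0], skip_special_tokens=True)
            )


if __name__ == "__main__":
    # Hypothetical usage: many fact-verification prompts share one loaded model.
    worker = PersistentLLMWorker("meta-llama/Llama-3.2-1B")  # model choice is illustrative
    worker.submit("Claim: water boils at 100 C at sea level. Supported or refuted?")
    worker.serve()
    print(worker.results.get())
```

In the paper's setting, such a worker would hold the context on an opportunistically acquired GPU until the workflow manager decides it is no longer needed, at which point the GPU can be released back to the cluster.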

Country of Origin
🇺🇸 United States

Page Count
10 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing