PPipe: Efficient Video Analytics Serving on Heterogeneous GPU Clusters via Pool-Based Pipeline Parallelism
By: Z. Jonny Kong, Qiang Xu, Y. Charlie Hu
Potential Business Impact:
Lets older, slower GPUs share inference work with newer ones, so mixed GPU clusters serve more video analytics requests at the same hardware cost.
With the rapid innovation of GPUs, heterogeneous GPU clusters in both public clouds and on-premises data centers have become increasingly commonplace. In this paper, we demonstrate how pipeline parallelism, a technique well-studied for throughput-oriented deep learning model training, can be used effectively for serving latency-bound model inference, e.g., in video analytics systems, on heterogeneous GPU clusters. Our work exploits the synergy between diversity in model layers and diversity in GPU architectures, which results in comparable inference latency for many layers when running on low-class and high-class GPUs. We explore how this overlooked capability of low-class GPUs can be exploited using pipeline parallelism and present a novel inference serving system, PPipe, that employs pool-based pipeline parallelism via an MILP-based control plane and a data plane that performs resource reservation-based adaptive batching. Evaluation results on diverse workloads (18 CNN models) show that PPipe achieves 41.1%–65.5% higher utilization of low-class GPUs while maintaining high utilization of high-class GPUs, leading to 32.2%–75.1% higher serving throughput compared to various baselines.
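The abstract credits PPipe's control plane with an MILP that decides how model layers are partitioned across GPU pools. As a rough illustration of what such a formulation can look like, the Python sketch below uses the PuLP library to choose a single cut point splitting a model's layers between one low-class and one high-class GPU pool so that the slower pipeline stage is as fast as possible. The two-stage structure, the per-layer latencies, and every name in the code are illustrative assumptions, not PPipe's actual formulation, which also handles batching, multiple GPU classes, and latency SLOs.

import pulp

# Hypothetical per-layer latencies (ms); not measurements from the paper.
lat_low  = [2.1, 2.3, 1.9, 4.0, 3.8, 5.5, 6.1, 7.0]   # low-class GPU
lat_high = [1.8, 2.0, 1.7, 2.2, 2.1, 2.4, 2.6, 2.9]   # high-class GPU
n = len(lat_low)

prob = pulp.LpProblem("toy_pipeline_partition", pulp.LpMinimize)
# x[i] = 1 if layer i runs in the low-class stage, 0 if in the high-class stage.
x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(n)]
T = pulp.LpVariable("bottleneck", lowBound=0)  # latency of the slowest stage

prob += T  # objective: minimize the pipeline's bottleneck stage latency

# Contiguity: once layers move to the high-class stage, they never move back,
# so the assignment describes a single cut point in the layer sequence.
for i in range(n - 1):
    prob += x[i] >= x[i + 1]

# The bottleneck must cover the total latency of each stage.
prob += T >= pulp.lpSum(x[i] * lat_low[i] for i in range(n))
prob += T >= pulp.lpSum((1 - x[i]) * lat_high[i] for i in range(n))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
cut = sum(int(v.value()) for v in x)
print(f"layers 0..{cut - 1} -> low-class pool, layers {cut}..{n - 1} -> high-class pool; "
      f"bottleneck = {pulp.value(T):.1f} ms")

With these made-up numbers the solver places the cheap early layers, whose latency differs little across GPU classes, on the low-class pool, mirroring the latency-parity observation the abstract exploits.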
Similar Papers
Optimizing video analytics inference pipelines: a case study
Distributed, Parallel, and Cluster Computing
Makes farm animal cameras watch faster.
DawnPiper: A Memory-scalable Pipeline Parallel Training Framework
Distributed, Parallel, and Cluster Computing
Trains bigger computer brains with less memory.
PIPO: Pipelined Offloading for Efficient Inference on Consumer Devices
Distributed, Parallel, and Cluster Computing
Makes big computer brains run on normal laptops.