Parallax: Efficient LLM Inference Service over Decentralized Environment
By: Chris Tong, Youhe Jiang, Gufeng Chen, and more
Potential Business Impact:
Shares computer power to run AI faster.
Deploying a large language model (LLM) inference service remains costly because centralized serving depends on specialized GPU clusters and high-bandwidth interconnects in datacenters. An appealing alternative is to leverage collaborative decentralized GPU pools. However, GPU heterogeneity and limited interconnect bandwidth, along with potentially dynamic availability, make efficient scheduling the central challenge in this scenario. In this paper, we present Parallax, a decentralized LLM serving system that turns a pool of heterogeneous GPUs into an efficient inference platform via a two-phase scheduler. Parallax decomposes planning into (i) model allocation, which places layers of each replica across diverse GPUs to jointly optimize latency and throughput under memory and link-bandwidth constraints, and (ii) request-time GPU pipeline selection, which stitches layers from different replicas into end-to-end execution chains that balance load and adapt to current conditions. We implement Parallax and evaluate it on open-source LLMs deployed over real volunteer nodes. Parallax consistently reduces latency and increases throughput relative to decentralized baselines, demonstrating that principled scheduling can make volunteer compute a practical, affordable substrate for LLM inference. GitHub repo: https://github.com/GradientHQ/parallax.
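To make the two-phase design concrete, here is a minimal sketch of the scheduling idea under stated assumptions; it is not the Parallax implementation. All names (GPU, allocate_layers, select_pipeline, LAYER_MEM_GB) and the greedy heuristics are hypothetical illustrations. Phase one sizes contiguous layer shards in proportion to each GPU's compute, capped by its memory; phase two builds an end-to-end chain at request time by picking, at each layer boundary, the least-loaded shard across replicas.

```python
"""Illustrative two-phase scheduling sketch (hypothetical, not the Parallax API)."""
from dataclasses import dataclass


@dataclass
class GPU:
    name: str
    mem_gb: float      # free memory for weights + KV cache (assumed)
    tflops: float      # rough compute capability (assumed)
    load: float = 0.0  # in-flight requests, used by phase two


LAYER_MEM_GB = 1.5     # assumed per-layer memory footprint


def allocate_layers(gpus, num_layers):
    """Phase 1 (model allocation): place contiguous layer ranges across
    heterogeneous GPUs, sized proportionally to compute and capped by
    memory, so no single device bottlenecks the pipeline."""
    plan, start = [], 0
    for i, g in enumerate(gpus):
        remaining = num_layers - start
        # Proportional share of the remaining layers among remaining GPUs.
        share = round(remaining * g.tflops / sum(x.tflops for x in gpus[i:]))
        share = min(max(share, 1), int(g.mem_gb // LAYER_MEM_GB), remaining)
        plan.append((g, start, start + share))
        start += share
    if start < num_layers:
        raise RuntimeError("pool lacks memory to host all layers")
    return plan  # list of (gpu, first_layer, last_layer_exclusive)


def select_pipeline(replicas):
    """Phase 2 (request-time pipeline selection): greedily stitch a chain.
    At each step, among all shards (from any replica) that host the next
    needed layer, pick the least loaded one and run it to its last layer."""
    num_layers = max(hi for rep in replicas for _, _, hi in rep)
    chain, next_layer = [], 0
    while next_layer < num_layers:
        candidates = [(g, lo, hi) for rep in replicas for g, lo, hi in rep
                      if lo <= next_layer < hi]
        gpu, lo, hi = min(candidates, key=lambda s: s[0].load)
        gpu.load += 1  # account for the new in-flight request
        chain.append((gpu.name, next_layer, hi))
        next_layer = hi
    return chain


if __name__ == "__main__":
    # Two replicas of a 32-layer model, each hosted on a different pool.
    pool_a = [GPU("a100", 40, 312), GPU("rtx4090", 24, 83)]
    pool_b = [GPU("a6000", 48, 155), GPU("rtx3090", 24, 36)]
    replicas = [allocate_layers(pool_a, 32), allocate_layers(pool_b, 32)]
    print(select_pipeline(replicas))  # e.g. [('a100', 0, 25), ('rtx4090', 25, 32)]
```

Because phase two only requires that some shard host the next layer, shard boundaries need not align across replicas, which is what lets a chain mix layers from different replicas to route around loaded or slow nodes.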
Similar Papers
Parallax: Runtime Parallelization for Operator Fallbacks in Heterogeneous Edge Systems
Distributed, Parallel, and Cluster Computing
Makes phone apps run faster and use less power.
Taming the Chaos: Coordinated Autoscaling for Heterogeneous and Disaggregated LLM Inference
Distributed, Parallel, and Cluster Computing
Makes AI models run faster and cheaper.
Chameleon: Adaptive Caching and Scheduling for Many-Adapter LLM Inference Environments
Distributed, Parallel, and Cluster Computing
Makes AI answer questions faster and cheaper.