Faster Distributed Inference-Only Recommender Systems via Bounded Lag Synchronous Collectives
By: Kiril Dichev, Filip Pawlowski, Albert-Jan Yzelman
Recommender systems are enablers of personalized content delivery, and therefore of revenue, for many large companies. Over the last decade, deep learning recommender models (DLRMs) have become the de-facto standard in this field. The main bottleneck in DLRM inference is the lookup of sparse features across huge embedding tables, which are usually partitioned across the aggregate RAM of many nodes. In state-of-the-art recommender systems, this distributed lookup is implemented via irregular all-to-all (alltoallv) communication, which often presents the main bottleneck. Most related work treats this operation as a given; moreover, every such collective is synchronous in nature. In this work, we propose a novel bounded lag synchronous (BLS) version of the alltoallv operation. The bound is a parameter that allows slower processes to lag behind by entire iterations before the fastest processes block. For applications such as inference-only DLRM, the accuracy of the application is fully preserved. We implement BLS alltoallv in a new PyTorch Distributed backend and evaluate it with a BLS version of the reference DLRM code. We show that for well-balanced, homogeneous-access DLRM runs our BLS technique does not offer notable advantages. For unbalanced runs, however, e.g. runs with strongly irregular embedding table accesses or with delays across different processes, our BLS technique improves both the latency and throughput of inference-only DLRM. In the best-case scenario, the proposed reduced synchronisation can mask the delays across processes altogether.
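To make the bounded-lag idea concrete, below is a minimal sketch of such an exchange pattern expressed with the public torch.distributed API, assuming an initialized process group whose backend supports all_to_all_single (e.g. NCCL). The paper instead implements BLS alltoallv inside a new PyTorch Distributed backend; the names BoundedLagAllToAll and lag_bound here are illustrative assumptions, not taken from the paper's code.

```python
# Illustrative sketch only: a bounded-lag pipeline of non-blocking alltoallv
# exchanges built on the public torch.distributed API. The paper's BLS
# alltoallv lives inside a custom PyTorch Distributed backend; the names
# BoundedLagAllToAll and lag_bound are assumptions made for this sketch.
from collections import deque

import torch
import torch.distributed as dist


class BoundedLagAllToAll:
    """Issues irregular all-to-all (alltoallv) exchanges asynchronously and
    only blocks once more than `lag_bound` exchanges are still in flight."""

    def __init__(self, lag_bound: int = 2):
        self.lag_bound = lag_bound
        self.in_flight = deque()  # pending torch.distributed Work handles

    def exchange(self, inp, in_splits, out_splits):
        # Receive buffer sized from the per-rank output split sizes.
        out = inp.new_empty(sum(out_splits))
        work = dist.all_to_all_single(
            out, inp,
            output_split_sizes=out_splits,
            input_split_sizes=in_splits,
            async_op=True,  # non-blocking: returns a Work handle immediately
        )
        self.in_flight.append(work)
        # A fast rank may keep posting exchanges, but never runs more than
        # `lag_bound` iterations ahead: beyond that it waits for the oldest.
        while len(self.in_flight) > self.lag_bound:
            self.in_flight.popleft().wait()
        # The caller must call work.wait() before reading `out`.
        return work, out

    def drain(self):
        # Complete every outstanding exchange, e.g. at the end of inference.
        while self.in_flight:
            self.in_flight.popleft().wait()
```

In this sketch each output buffer is only read after its Work handle completes, so results are unchanged; the bound merely limits how far the fastest rank may run ahead before blocking, which mirrors the accuracy-preserving property described in the abstract for inference-only DLRM.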
Similar Papers
LLM Inference Beyond a Single Node: From Bottlenecks to Mitigations with Fast All-Reduce Communication
Distributed, Parallel, and Cluster Computing
Makes giant AI models run much faster.
Near-Zero-Overhead Freshness for Recommendation Systems via Inference-Side Model Updates
Distributed, Parallel, and Cluster Computing
Keeps online suggestions fresh and accurate.