Deep Recommender Models Inference: Automatic Asymmetric Data Flow Optimization
By: Giuseppe Ruggeri, Renzo Andri, Daniele Jahier Pagliari, et al.
Potential Business Impact:
Makes AI-powered recommendations much faster and cheaper to serve.
Inference for Deep Recommender Models (DLRMs) is a fundamental AI workload, accounting for more than 79% of the total AI workload in Meta's data centers. The performance bottleneck of DLRMs lies in the embedding layers, which perform many random memory accesses to retrieve small embedding vectors from tables of various sizes. We propose the design of tailored data flows to speed up embedding look-ups: four strategies to look up an embedding table effectively on one core, and a framework to automatically map the tables asymmetrically to the multiple cores of a SoC. We assess the effectiveness of our method on Huawei Ascend AI accelerators, comparing it with the default Ascend compiler, and we perform high-level comparisons with the Nvidia A100. Results show a speed-up ranging from 1.5x to 6.5x for real workload distributions, and more than 20x for extremely unbalanced distributions. Furthermore, the method proves far less sensitive to the query distribution than the baseline.
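Two ideas in the abstract lend themselves to a quick illustration: an embedding look-up is a latency-bound gather of short rows, and the table-to-core mapping is asymmetric because tables differ widely in size and access frequency. The NumPy sketch below is only illustrative: the table shapes, pooling factors, per-table cost model, and the greedy longest-processing-time heuristic are assumptions, not the paper's four look-up strategies or its actual mapping framework.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical DLRM-style embedding tables of very different sizes
    # (rows x embedding dim); real deployments mix huge and tiny tables.
    table_shapes = [(100_000, 64), (50_000, 64), (200, 64), (10_000, 64)]
    tables = [rng.standard_normal(s, dtype=np.float32) for s in table_shapes]

    def lookup(table, indices):
        # An embedding look-up is a gather of short rows at random indices,
        # so it is bound by random-access memory latency, not compute.
        return table[indices]

    # One query batch: a different number of look-ups (pooling factor) per table.
    pooling = [80, 20, 4, 16]
    batch = [rng.integers(0, t.shape[0], size=p) for t, p in zip(tables, pooling)]
    pooled = [lookup(t, idx).sum(axis=0) for t, idx in zip(tables, batch)]

    def assign_tables_to_cores(costs, num_cores):
        # Hypothetical greedy mapping (longest-processing-time first): hand
        # each table to the currently least-loaded core, so cores end up
        # owning different numbers of tables -- an asymmetric work split.
        loads = [0.0] * num_cores
        mapping = [[] for _ in range(num_cores)]
        for t in sorted(range(len(costs)), key=lambda i: -costs[i]):
            core = min(range(num_cores), key=loads.__getitem__)
            mapping[core].append(t)
            loads[core] += costs[t]
        return mapping

    # Stand-in cost model: expected look-ups per table per batch.
    print(assign_tables_to_cores([float(p) for p in pooling], num_cores=2))
    # -> [[0], [1, 3, 2]]: the costly table 0 gets a core to itself.

Under this toy cost model the most frequently accessed table ends up alone on one core while the cheaper tables share the other, which is the kind of unbalanced, per-table placement the abstract's "asymmetric" mapping refers to.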
Similar Papers
Near-Zero-Overhead Freshness for Recommendation Systems via Inference-Side Model Updates
Distributed, Parallel, and Cluster Computing
Keeps online suggestions fresh and accurate.
Two-dimensional Sparse Parallelism for Large Scale Deep Learning Recommendation Model Training
Distributed, Parallel, and Cluster Computing
Trains big AI models much faster on many computers.