Dissecting Embedding Bag Performance in DLRM Inference
By: Chandrish Ambati, Jing Ding, Trung Diep
As DLRMs grow larger, the models must be partitioned across multiple GPUs, or nodes of GPUs, because the total HBM capacity that can be packaged in a single GPU is limited. This partitioning adds the communication and synchronization overhead of sending and receiving data across GPUs. We use the NCCL and NVSHMEM libraries to measure the performance of an Embedding Bag kernel implemented on H100 GPUs, and we compare its performance across different batch sizes, numbers of tables, table sizes, pooling factors, and embedding dimensions. For a large embedding table that exceeds the memory of a single GPU, we project the performance slowdown from distributing the table across multiple GPUs.
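The operator under study is the embedding bag: for each sample in a batch, pooling_factor rows are gathered from an embedding table and reduced (here, summed) into one dim-wide vector. The single-GPU CUDA sketch below is our own illustration, not the authors' kernel; the kernel name, launch configuration, and sizes are assumptions chosen only to make the swept quantities (batch size, pooling factor, embedding dimension, table size) concrete.

```cuda
// Minimal single-GPU embedding bag with sum pooling (illustrative sketch only).
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// One thread per (sample, embedding element): each thread sums pooling_factor
// rows of the table for its sample and writes one element of the pooled output.
__global__ void embedding_bag_sum(const float* table,   // [num_rows, dim]
                                  const int*   indices, // [batch, pooling_factor]
                                  float*       output,  // [batch, dim]
                                  int batch, int dim, int pooling_factor) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= batch * dim) return;
    int sample = tid / dim;
    int d      = tid % dim;
    float acc = 0.0f;
    for (int p = 0; p < pooling_factor; ++p) {
        int row = indices[sample * pooling_factor + p];  // irregular gather from HBM
        acc += table[row * dim + d];
    }
    output[sample * dim + d] = acc;
}

int main() {
    // Illustrative sizes only; the paper sweeps these parameters over much larger tables.
    const int num_rows = 1 << 20, dim = 128, batch = 2048, pooling_factor = 32;

    std::vector<float> h_table(static_cast<size_t>(num_rows) * dim, 0.01f);
    std::vector<int>   h_indices(static_cast<size_t>(batch) * pooling_factor);
    for (size_t i = 0; i < h_indices.size(); ++i)
        h_indices[i] = static_cast<int>((i * 2654435761u) % num_rows);  // arbitrary index spread

    float *d_table, *d_out; int *d_idx;
    cudaMalloc(&d_table, h_table.size() * sizeof(float));
    cudaMalloc(&d_idx,   h_indices.size() * sizeof(int));
    cudaMalloc(&d_out,   static_cast<size_t>(batch) * dim * sizeof(float));
    cudaMemcpy(d_table, h_table.data(),   h_table.size() * sizeof(float),  cudaMemcpyHostToDevice);
    cudaMemcpy(d_idx,   h_indices.data(), h_indices.size() * sizeof(int),  cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks  = (batch * dim + threads - 1) / threads;
    embedding_bag_sum<<<blocks, threads>>>(d_table, d_idx, d_out, batch, dim, pooling_factor);
    cudaDeviceSynchronize();

    std::vector<float> h_out(static_cast<size_t>(batch) * dim);
    cudaMemcpy(h_out.data(), d_out, h_out.size() * sizeof(float), cudaMemcpyDeviceToHost);
    printf("output[0][0] = %f (expected %f)\n", h_out[0], 0.01f * pooling_factor);

    cudaFree(d_table); cudaFree(d_idx); cudaFree(d_out);
    return 0;
}
```

In the multi-GPU setting the abstract describes, a table too large for one GPU's HBM would be sharded, so the per-sample gathers above would also require exchanging indices and partial pooled sums across GPUs via NCCL or NVSHMEM; that added communication and synchronization is the slowdown the paper projects.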