Dissecting Embedding Bag Performance in DLRM Inference

Published: December 5, 2025 | arXiv ID: 2512.05831v1

By: Chandrish Ambati, Jing Ding, Trung Diep

As DLRMs grow larger, the models must be partitioned across multiple GPUs or nodes of GPUs because a single GPU cannot package enough HBM to hold them. This partitioning adds the communication and synchronization overhead of sending and receiving data across GPUs. We use the NCCL and NVSHMEM libraries to measure the performance of an Embedding Bag kernel implemented on H100 GPUs, and we compare its performance across different batch sizes, numbers of tables, table sizes, pooling factors, and embedding dimensions. For a large embedding table that spans multiple GPUs, we project the performance slowdown from distributing the table across multiple GPUs.
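For readers unfamiliar with the operation being benchmarked, the sketch below shows a single-GPU Embedding Bag lookup using PyTorch's torch.nn.EmbeddingBag, with the four parameters the paper sweeps (table size, embedding dimension, batch size, pooling factor). The specific sizes are illustrative assumptions, not values from the paper.

```python
import torch

# Illustrative sizes only; the paper sweeps these parameters.
num_embeddings = 1_000_000   # table size (rows in the embedding table)
embedding_dim  = 128         # width of each embedding vector
batch_size     = 4096        # number of pooled lookups per batch
pooling_factor = 32          # indices gathered and pooled per lookup

# One embedding table with sum pooling: each "bag" of indices is
# gathered from the table and reduced to a single vector.
table = torch.nn.EmbeddingBag(num_embeddings, embedding_dim, mode="sum")

# A flat stream of indices plus offsets marking where each bag begins.
indices = torch.randint(0, num_embeddings, (batch_size * pooling_factor,))
offsets = torch.arange(0, batch_size * pooling_factor, pooling_factor)

out = table(indices, offsets)
print(out.shape)  # (batch_size, embedding_dim)
```

When a table is too large for one GPU's HBM, its rows are sharded across devices, so each pooled lookup may require fetching rows from remote GPUs (via NCCL collectives or NVSHMEM one-sided accesses), which is the overhead the paper measures and projects.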

Category: Computer Science, Performance