Score: 1

Machine Learning-Guided Memory Optimization for DLRM Inference on Tiered Memory

Published: November 11, 2025 | arXiv ID: 2511.08568v1

By: Jie Ren, Bin Ma, Shuangyan Yang, and more

BigTech Affiliations: Meta

Potential Business Impact:

Makes recommendation systems faster and cheaper to serve.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Deep learning recommendation models (DLRMs) are widely used in industry, and their memory capacity requirements reach the terabyte scale. Tiered memory architectures provide a cost-effective solution but introduce challenges in embedding-vector placement due to complex embedding-access patterns. We propose RecMG, a machine learning (ML)-guided system for vector caching and prefetching on tiered memory. RecMG accurately predicts accesses to embedding vectors with long reuse distances or few reuses. The design of RecMG focuses on making ML feasible in the context of DLRM inference by addressing unique challenges in data labeling and navigating the search space for embedding-vector placement. By employing separate ML models for caching and prefetching, plus a novel differentiable loss function, RecMG narrows the prefetching search space and minimizes on-demand fetches. Compared to state-of-the-art temporal, spatial, and ML-based prefetchers, RecMG reduces on-demand fetches by 2.2x, 2.8x, and 1.5x, respectively. In industrial-scale DLRM inference scenarios, RecMG effectively reduces end-to-end DLRM inference time by up to 43%.
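The abstract describes a two-part mechanism: a fast-tier cache for embedding vectors backed by a larger slow tier, and a learned prefetcher that pulls predicted-hot vectors into fast memory before lookups arrive, so that fewer lookups stall on on-demand fetches. The sketch below illustrates that general pattern in Python. It is a toy model under assumed names (TieredEmbeddingStore, predict_next_ids, and so on), with a simple frequency heuristic standing in for RecMG's ML models and differentiable loss; it is not the paper's implementation.

```python
# Hypothetical sketch of predictor-guided embedding caching on two memory
# tiers, loosely inspired by the RecMG description above. All class and
# function names are illustrative assumptions, not the paper's API.

from collections import OrderedDict
import numpy as np

class TieredEmbeddingStore:
    """Fast-tier LRU cache in front of a slow-tier embedding table."""

    def __init__(self, slow_tier: np.ndarray, fast_capacity: int):
        self.slow_tier = slow_tier              # e.g., CXL/NVM-backed table
        self.fast_capacity = fast_capacity      # vectors that fit in DRAM
        self.fast_tier = OrderedDict()          # id -> vector, in LRU order
        self.on_demand_fetches = 0

    def _admit(self, vec_id: int) -> np.ndarray:
        # Evict the least-recently-used vector if the fast tier is full.
        if len(self.fast_tier) >= self.fast_capacity:
            self.fast_tier.popitem(last=False)
        vec = self.slow_tier[vec_id]
        self.fast_tier[vec_id] = vec
        return vec

    def prefetch(self, predicted_ids):
        """Pull predicted-hot vectors into the fast tier ahead of time."""
        for vec_id in predicted_ids:
            if vec_id not in self.fast_tier:
                self._admit(vec_id)

    def lookup(self, vec_id: int) -> np.ndarray:
        if vec_id in self.fast_tier:
            self.fast_tier.move_to_end(vec_id)  # refresh LRU position
            return self.fast_tier[vec_id]
        self.on_demand_fetches += 1             # miss: inference stalls here
        return self._admit(vec_id)


def predict_next_ids(history, top_k=8):
    """Stand-in for a learned prefetch model: most-frequent recent ids."""
    counts = {}
    for vec_id in history[-64:]:
        counts[vec_id] = counts.get(vec_id, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)[:top_k]


# Usage: prefetch before each batch, then serve lookups and count misses.
table = np.random.rand(10_000, 64).astype(np.float32)
store = TieredEmbeddingStore(table, fast_capacity=256)
history = []
for batch in (np.random.zipf(1.2, 32) % 10_000 for _ in range(100)):
    store.prefetch(predict_next_ids(history))
    for vec_id in batch:
        store.lookup(int(vec_id))
        history.append(int(vec_id))
print("on-demand fetches:", store.on_demand_fetches)
```

In RecMG's terms, replacing predict_next_ids with an accurate learned model is what narrows the prefetching search space; every correct prediction converts an on-demand fetch (the expensive stall the paper measures) into a background transfer.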

Country of Origin
🇺🇸 United States

Page Count
17 pages

Category
Computer Science: Performance