A Limits Study of Memory-side Tiering Telemetry
By: Vinicius Petrucci, Felippe Zacarias, David Roberts
Potential Business Impact:
Makes computers faster by smarter memory use.
Increasing workload demands and emerging technologies necessitate the use of multiple memory and storage tiers in computing systems. This paper presents results from a CXL-based Experimental Memory Request Logger that reveals precise memory access patterns at runtime without interfering with the running workloads; we use it as a software emulation of future memory telemetry hardware. By combining reactive placement based on data-address monitoring, proactive data movement, and compiler hints, a Hotness Monitoring Unit (HMU) within memory modules can greatly improve memory tiering solutions. Analysis of page placement driven by profiled access counts on a Deep Learning Recommendation Model (DLRM) indicates a potential 1.94x speedup over Linux NUMA balancing tiering and only a 3% slowdown relative to Host-DRAM allocation while offloading over 90% of pages to CXL memory. The study underscores the limitations of existing tiering strategies in coverage and accuracy, and makes a strong case for programmable, device-level telemetry as a scalable and efficient solution for future memory systems.
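To make the profile-guided placement idea concrete, the sketch below shows one way hotness-ranked page placement could be expressed in user space on Linux, using the standard move_pages(2) interface (link with -lnuma). This is a minimal illustration, not the paper's mechanism: the NUMA node IDs (node 0 as the host-DRAM tier, node 1 as the CXL tier) are system-specific assumptions, and the page_stat structure, the hot_budget parameter, and the source of per-page access counts (e.g., a telemetry dump such as the logger above would provide) are hypothetical.

    /* Sketch: hotness-driven page placement via Linux move_pages(2).
     * Assumptions: node 0 = host-DRAM tier, node 1 = CXL tier (system-specific);
     * per-page access counts are supplied by some telemetry source.
     * Link with -lnuma.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <numaif.h>   /* move_pages, MPOL_MF_MOVE */

    #define DRAM_NODE 0   /* assumed fast-tier NUMA node */
    #define CXL_NODE  1   /* assumed CXL-tier NUMA node  */

    struct page_stat {
        void *addr;              /* page-aligned virtual address   */
        unsigned long accesses;  /* profiled access count (telemetry) */
    };

    static int by_hotness_desc(const void *a, const void *b)
    {
        const struct page_stat *pa = a, *pb = b;
        if (pa->accesses == pb->accesses) return 0;
        return pa->accesses < pb->accesses ? 1 : -1;
    }

    /* Keep the hottest hot_budget pages in DRAM; offload the rest to CXL. */
    static int place_pages(struct page_stat *stats, size_t n, size_t hot_budget)
    {
        void **pages  = malloc(n * sizeof(*pages));
        int   *nodes  = malloc(n * sizeof(*nodes));
        int   *status = malloc(n * sizeof(*status));
        if (!pages || !nodes || !status) {
            free(pages); free(nodes); free(status);
            return -1;
        }

        qsort(stats, n, sizeof(*stats), by_hotness_desc);
        for (size_t i = 0; i < n; i++) {
            pages[i] = stats[i].addr;
            nodes[i] = (i < hot_budget) ? DRAM_NODE : CXL_NODE;
        }

        /* pid 0 = calling process; MPOL_MF_MOVE migrates only pages
         * mapped exclusively by this process. */
        long rc = move_pages(0, n, pages, nodes, status, MPOL_MF_MOVE);
        if (rc < 0)
            perror("move_pages");

        free(pages); free(nodes); free(status);
        return (int)rc;
    }

In this reading, the hot_budget knob corresponds to how much host DRAM the tiering policy is willing to spend; the paper's result that over 90% of pages can sit in CXL memory at a 3% slowdown suggests a small budget covering only the hottest pages.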
Similar Papers
From Good to Great: Improving Memory Tiering Performance Through Parameter Tuning
Operating Systems
Makes computer memory faster by learning what data is used.
ARMS: Adaptive and Robust Memory Tiering System
Operating Systems
Makes computers faster by smartly moving data.
Tidying Up the Address Space
Operating Systems
Cleans up computer memory, saving space and improving speed.