Pickle Prefetcher: Programmable and Scalable Last-Level Cache Prefetcher
By: Hoa Nguyen, Pongstorn Maidee, Jason Lowe-Power, and more
Potential Business Impact:
Helps computers fetch data faster for tricky tasks.
Modern high-performance architectures employ large last-level caches (LLCs). While large LLCs can reduce average memory access latency for workloads with a high degree of locality, they can also increase latency for workloads with irregular memory access patterns. Prefetchers are widely used to reduce memory latency by prefetching data into the cache hierarchy before it is accessed by the core. However, existing prediction-based prefetchers often struggle with irregular memory access patterns, which are especially prevalent in modern applications. This paper introduces the Pickle Prefetcher, a programmable and scalable LLC prefetcher designed to handle independent irregular memory access patterns effectively. Instead of relying on static heuristics or complex prediction algorithms, Pickle Prefetcher allows software to define its own prefetching strategies using a simple programming interface without expanding the instruction set architecture (ISA). By trading the logic complexity of hardware prediction for software programmability, Pickle Prefetcher can adapt to a wide range of access patterns without requiring extensive hardware resources for prediction. This allows the prefetcher to dedicate its resources to scheduling and issuing timely prefetch requests. Graph applications are an example where the memory access pattern is irregular but easily predictable by software. Through extensive evaluations of the Pickle Prefetcher on gem5 full-system simulations, we demonstrate that Pickle Prefetcher significantly outperforms traditional prefetching techniques. Our results show that Pickle Prefetcher achieves speedups of up to 1.74x on the GAPBS breadth-first search (BFS) implementation over a baseline system. When combined with private cache prefetchers, Pickle Prefetcher provides up to a 1.40x speedup over systems using only private cache prefetchers.
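The abstract's point that graph workloads are "irregular but easily predictable by software" can be illustrated with a minimal sketch. The following C example (an assumption for illustration, not the Pickle Prefetcher's actual interface) runs BFS over a compressed sparse row (CSR) graph: the addresses of the next vertex's edge list are data-dependent, so a hardware stride predictor cannot anticipate them, but software can read them straight from the frontier queue and issue a prefetch hint ahead of use.

```c
#include <assert.h>

/* Hypothetical sketch (not the Pickle API): BFS over a CSR graph.
 * Neighbor accesses are irregular (data-dependent), yet the frontier
 * queue tells software exactly which edge lists the next iteration
 * will touch -- so software can prefetch them in advance. */

#define N 6

/* CSR adjacency for a small example graph:
 * 0 -> {1,2}, 1 -> {3,4}, 2 -> {5}, 3 -> {5}, 4 -> {5}, 5 -> {} */
static const int row_ptr[N + 1] = {0, 2, 4, 5, 6, 7, 7};
static const int col_idx[7]     = {1, 2, 3, 4, 5, 5, 5};

/* Returns the number of vertices reachable from src. */
int bfs_count(int src) {
    int visited[N] = {0};
    int queue[N], head = 0, tail = 0, reached = 0;

    visited[src] = 1;
    queue[tail++] = src;
    while (head < tail) {
        int u = queue[head++];
        reached++;
        /* Software already knows the next irregular address:
         * hint the upcoming vertex's edge list before it is needed. */
        if (head < tail)
            __builtin_prefetch(&col_idx[row_ptr[queue[head]]]);
        for (int e = row_ptr[u]; e < row_ptr[u + 1]; e++) {
            int v = col_idx[e];
            if (!visited[v]) {
                visited[v] = 1;
                queue[tail++] = v;
            }
        }
    }
    return reached;
}
```

Here the prefetch hint (`__builtin_prefetch`, a GCC/Clang builtin) stands in for the kind of software-defined strategy the paper describes: the irregular pattern is trivial to express in a few lines of software, even though it is hard for prediction-based hardware to learn.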
Similar Papers
Coordinated Reinforcement Learning Prefetching Architecture for Multicore Systems
Distributed, Parallel, and Cluster Computing
Makes computer memory faster on many cores.
SLOFetch: Compressed-Hierarchical Instruction Prefetching for Cloud Microservices
Machine Learning (CS)
Makes computer programs run faster and use less power.