DCO: Dynamic Cache Orchestration for LLM Accelerators through Predictive Management
By: Zhongchun Zhou, Chengtao Lai, Yuhang Gu, et al.
Potential Business Impact:
Makes AI accelerators faster by sharing cache memory across cores.
The rapid adoption of large language models (LLMs) is pushing AI accelerators toward increasingly powerful and specialized designs. Instead of further complicating software development with deeply hierarchical scratchpad memories (SPMs) and their asynchronous management, we investigate the opposite end of the design spectrum: a multi-core AI accelerator equipped with a shared system-level cache and application-aware management policies, which keeps the programming effort modest. Our approach exploits dataflow information available in the software stack to guide cache replacement (including dead-block prediction), in concert with bypass decisions and mechanisms that alleviate cache thrashing. We assess the proposal using a cycle-accurate simulator and observe substantial performance gains (up to 1.80x speedup) over conventional cache architectures. In addition, we build and validate an analytical model that accounts for the actual overlapping behaviors, allowing us to extend the measured results of our policies to larger, real-world workloads. Experimental results show that, working together, our bypassing and thrashing-mitigation strategies handle scenarios both with and without inter-core data sharing and achieve notable speedups. Finally, we implement the design in RTL; it occupies $0.064\,\mathrm{mm^2}$ in a 15 nm process and runs at a 2 GHz clock frequency. Our findings highlight the potential of shared cache designs to support the development of future AI accelerator systems.
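As a loose illustration of the kind of policy the abstract describes, the sketch below shows how a shared-cache set might combine compiler-provided reuse hints with replacement and bypass decisions. This is not the paper's implementation: all names (Line, AccessHint, CacheSet) and the hint format (a remaining-use count, with zero marking a dead block) are assumptions made for illustration only.

```cpp
// Minimal sketch (not the authors' design) of a dataflow-guided
// replacement/bypass policy for one set of a shared last-level cache.
// Assumes the software stack tags each access with the expected number
// of remaining uses of the tile; zero marks a predicted-dead block.
#include <cstdint>
#include <vector>

struct Line {
    uint64_t tag = 0;
    bool     valid = false;
    uint32_t remaining_uses = 0;   // dataflow hint: expected future reuses
    uint64_t last_touch = 0;       // fallback LRU timestamp
};

struct AccessHint {
    uint32_t remaining_uses;       // supplied by the compiler/runtime (assumed)
};

class CacheSet {
public:
    explicit CacheSet(size_t ways) : lines_(ways) {}

    // Returns true on hit; the hint refreshes the line's reuse estimate.
    bool access(uint64_t tag, const AccessHint& hint, uint64_t now) {
        for (auto& l : lines_) {
            if (l.valid && l.tag == tag) {
                l.remaining_uses = hint.remaining_uses;
                l.last_touch = now;
                return true;
            }
        }
        return false;  // miss: caller consults should_bypass() before fill()
    }

    // Bypass fills whose data will not be reused: streaming (single-use)
    // tiles go straight to the core instead of polluting the shared cache.
    bool should_bypass(const AccessHint& hint) const {
        return hint.remaining_uses == 0;
    }

    void fill(uint64_t tag, const AccessHint& hint, uint64_t now) {
        Line* victim = pick_victim();
        victim->tag = tag;
        victim->valid = true;
        victim->remaining_uses = hint.remaining_uses;
        victim->last_touch = now;
    }

private:
    // Evict predicted-dead blocks first (remaining_uses == 0); otherwise
    // the line with the fewest remaining uses, breaking ties by LRU.
    Line* pick_victim() {
        Line* best = &lines_[0];
        for (auto& l : lines_) {
            if (!l.valid) return &l;
            if (l.remaining_uses < best->remaining_uses ||
                (l.remaining_uses == best->remaining_uses &&
                 l.last_touch < best->last_touch)) {
                best = &l;
            }
        }
        return best;
    }

    std::vector<Line> lines_;
};
```

In this sketch, single-use tiles bypass the shared cache entirely while reusable tiles preferentially evict predicted-dead lines, loosely mirroring the dead-block prediction and bypass decisions mentioned in the abstract; the thrashing-mitigation mechanisms would require additional set-level logic not shown here.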
Similar Papers
LLaMCAT: Optimizing Large Language Model Inference with Cache Arbitration and Throttling
Hardware Architecture
Makes AI models run much faster on computers.
Accelerating LLM Inference via Dynamic KV Cache Placement in Heterogeneous Memory System
Hardware Architecture
Makes AI remember more by using faster memory.