Amplifying Effective CXL Memory Bandwidth for LLM Inference via Transparent Near-Data Processing
By: Rui Xie, Asad Ul Haq, Linsen Ma, and more
Potential Business Impact:
Makes AI models run faster and use less memory.
Large language model (LLM) inference is bottlenecked by the limited bandwidth of CXL-based memory used for capacity expansion. We introduce CXL-NDP, a transparent near-data processing architecture that amplifies effective CXL bandwidth without requiring changes to the CXL.mem interface or AI models. CXL-NDP integrates a precision-scalable bit-plane layout for dynamic quantization with transparent lossless compression of weights and KV caches directly within the CXL device. In end-to-end serving, CXL-NDP improves throughput by 43%, extends the maximum context length by 87%, and reduces the KV cache footprint by 46.9% without accuracy loss. Hardware synthesis confirms its practicality with a modest silicon footprint, lowering the barrier for adopting efficient, scalable CXL-based memory in generative AI infrastructure.
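The abstract's key mechanism is a precision-scalable bit-plane layout: quantized weights are stored plane by plane so the device can serve fewer planes when lower precision suffices, moving fewer bytes over the CXL link. The sketch below is a hypothetical host-side illustration of that idea only, not the paper's on-device implementation; the function names and the uint8 quantization assumption are our own.

```python
import numpy as np

def to_bit_planes(q: np.ndarray) -> np.ndarray:
    """Split uint8-quantized values into 8 bit planes, MSB plane first.

    Storing planes contiguously lets a reader fetch only the top-k planes
    for a lower-precision, cheaper-to-transfer view of the same weights.
    """
    q = q.astype(np.uint8)
    return np.stack([(q >> (7 - p)) & 1 for p in range(8)])

def from_bit_planes(planes: np.ndarray, num_planes: int) -> np.ndarray:
    """Reconstruct an approximation from only the top `num_planes` planes."""
    q = np.zeros(planes.shape[1:], dtype=np.uint8)
    for p in range(num_planes):
        q |= (planes[p].astype(np.uint8) & 1) << (7 - p)
    return q

# Example: a full-precision read uses all 8 planes; a reduced-precision read
# uses 4, roughly halving the bytes moved at the cost of coarser values.
weights = np.random.randint(0, 256, size=16, dtype=np.uint8)
planes = to_bit_planes(weights)
approx = from_bit_planes(planes, num_planes=4)
```

In the paper's setting this selection happens inside the CXL device, combined with transparent lossless compression, so the host sees a standard CXL.mem interface while effective bandwidth is amplified.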
Similar Papers
Scalable Processing-Near-Memory for 1M-Token LLM Inference: CXL-Enabled KV-Cache Management Beyond GPU Limits
Hardware Architecture
Lets AI understand much longer stories faster.
Sangam: Chiplet-Based DRAM-PIM Accelerator with CXL Integration for LLM Inferencing
Hardware Architecture
Makes AI models run much faster and cheaper.