Amplifying Effective CXL Memory Bandwidth for LLM Inference via Transparent Near-Data Processing

Published: September 3, 2025 | arXiv ID: 2509.03377v1

By: Rui Xie, Asad Ul Haq, Linsen Ma, and more

Potential Business Impact:

Makes AI models run faster and use less memory.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language model (LLM) inference is bottlenecked by the limited bandwidth of CXL-based memory used for capacity expansion. We introduce CXL-NDP, a transparent near-data processing architecture that amplifies effective CXL bandwidth without requiring changes to the CXL.mem interface or AI models. CXL-NDP integrates a precision-scalable bit-plane layout for dynamic quantization with transparent lossless compression of weights and KV caches directly within the CXL device. In end-to-end serving, CXL-NDP improves throughput by 43%, extends the maximum context length by 87%, and reduces the KV cache footprint by 46.9% without accuracy loss. Hardware synthesis confirms its practicality with a modest silicon footprint, lowering the barrier for adopting efficient, scalable CXL-based memory in generative AI infrastructure.
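The precision-scalable bit-plane layout mentioned in the abstract can be illustrated with a short sketch. The Python snippet below is a hedged illustration rather than the paper's implementation: it assumes 8-bit quantized weights and shows how storing each bit position contiguously lets a reader fetch only the top-k planes to recover a k-bit approximation, trading precision for bytes moved over the link. The function names and plane ordering are assumptions made for clarity.

```python
import numpy as np

# Illustrative sketch of a precision-scalable bit-plane layout (assumed
# details, not the paper's actual design). Quantized 8-bit weights are split
# into bit-planes so that reading only the k most significant planes yields
# a k-bit approximation, letting precision scale with the bytes transferred.

def to_bit_planes(q: np.ndarray) -> np.ndarray:
    """Decompose uint8 values into 8 bit-planes, most significant first."""
    assert q.dtype == np.uint8
    # planes[b][i] holds bit (7 - b) of q[i]
    return np.stack([(q >> (7 - b)) & 1 for b in range(8)]).astype(np.uint8)

def from_bit_planes(planes: np.ndarray, k: int) -> np.ndarray:
    """Reconstruct approximate values from only the k most significant planes."""
    out = np.zeros(planes.shape[1], dtype=np.uint8)
    for b in range(k):
        out |= (planes[b] << (7 - b)).astype(np.uint8)
    return out

weights = np.array([200, 37, 129, 64], dtype=np.uint8)
planes = to_bit_planes(weights)
approx4 = from_bit_planes(planes, k=4)  # 4-bit precision: half the planes read
print(weights)   # [200  37 129  64]
print(approx4)   # [192  32 128  64]
```

In this toy layout, fetching four of the eight planes halves the bytes read for that tensor while preserving a coarse 4-bit view; the transparent lossless compression the abstract describes would presumably operate on top of such a layout inside the device.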

Page Count
13 pages

Category
Computer Science:
Hardware Architecture