JSPIM: A Skew-Aware PIM Accelerator for High-Performance Database Join and Select Operations
By: Sabiha Tajdari, Anastasia Ailamaki, Sandhya Dwarkadas
Potential Business Impact:
Speeds up searching data in computers by 400x or more.
Database applications are increasingly bottlenecked by memory bandwidth and latency due to the memory wall and the limited scalability of DRAM. Join queries, central to analytical workloads, require intensive memory access and are particularly vulnerable to inefficiencies in data movement. While Processing-in-Memory (PIM) offers a promising solution, existing designs typically reuse CPU-oriented join algorithms, limiting parallelism and incurring costly inter-chip communication. Additionally, data skew, a major challenge for CPU-based joins, remains unresolved in current PIM architectures. We introduce JSPIM, a PIM module that accelerates hash joins and, by extension, the corresponding select queries through algorithm-hardware co-design. JSPIM deploys parallel search engines within each subarray and redesigns the hash table to achieve O(1) lookups, fully exploiting PIM's fine-grained parallelism. To mitigate skew, our design integrates subarray-level parallelism with rank-level processing, eliminating redundant off-chip transfers. Evaluations show that JSPIM delivers a 400x to 1000x speedup on join queries versus DuckDB. When paired with DuckDB on the full SSB benchmark, JSPIM achieves an overall 2.5x throughput improvement (individual query gains of 1.1x to 28x) at just 7% data overhead and a 2.1% per-rank area increase for PIM-enabled chips.
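For readers less familiar with the operation being accelerated, below is a minimal sketch of classic build/probe hash-join semantics, the workload JSPIM targets. It is an illustrative software reference only: the function names, table contents, and column names are hypothetical, and nothing here reflects the paper's actual in-DRAM hardware design.

```python
# Minimal sketch of build/probe hash-join semantics (illustrative only).
# JSPIM's contribution is performing the probe lookups with parallel search
# engines inside DRAM subarrays rather than on the CPU; this sketch only
# shows the O(1)-expected-lookup structure such hardware exploits.

from collections import defaultdict

def hash_join(build_rows, probe_rows, build_key, probe_key):
    """Classic hash join: hash the build side, then probe it row by row."""
    # Build phase: bucket the (typically smaller) build-side relation by key.
    table = defaultdict(list)
    for row in build_rows:
        table[row[build_key]].append(row)

    # Probe phase: each probe row performs a constant-time expected lookup.
    # Skewed keys (many probe rows hitting one bucket) are the case the
    # paper's combined subarray- and rank-level parallelism is meant to absorb.
    for row in probe_rows:
        for match in table.get(row[probe_key], ()):
            yield {**match, **row}

# Hypothetical usage with toy relations:
customers = [{"custkey": 1, "name": "A"}, {"custkey": 2, "name": "B"}]
orders = [{"custkey": 1, "price": 10}, {"custkey": 2, "price": 20}]
joined = list(hash_join(customers, orders, "custkey", "custkey"))
```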
Similar Papers
Membrane: Accelerating Database Analytics with Bank-Level DRAM-PIM Filtering
Hardware Architecture
Makes computers faster by doing work inside memory.
PIMDAL: Mitigating the Memory Bottleneck in Data Analytics using a Real Processing-in-Memory System
Hardware Architecture
Makes computers find data much faster.
DL-PIM: Improving Data Locality in Processing-in-Memory Systems
Hardware Architecture
Moves computer data closer for faster work.