Fantasy: Efficient Large-scale Vector Search on GPU Clusters with GPUDirect Async

Published: December 1, 2025 | arXiv ID: 2512.02278v1

By: Yi Liu, Chen Qian

Potential Business Impact:

Speeds up AI applications by searching huge vector datasets faster.

Business Areas:
GPU Hardware

Vector similarity search has become a critical component of AI-driven applications such as large language models (LLMs). To achieve high recall and low latency, GPUs are used to exploit massive parallelism for faster query processing. However, as the number of vectors continues to grow, the graph-based index quickly exceeds the memory capacity of a single GPU, making it infeasible to store and process the entire index on one device. Recent work adopts CPU-GPU architectures that keep vectors in CPU memory or on SSDs, but the loading step stalls GPU computation. We present Fantasy, an efficient system that pipelines vector search and data transfer across a GPU cluster with GPUDirect Async. By overlapping computation with network communication, Fantasy significantly improves search throughput on large graphs and supports large query batch sizes.
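The abstract does not give implementation details, but the core pipelining idea it describes, overlapping search computation with data movement so the GPU never stalls waiting for the next portion of the index, can be illustrated with a minimal CUDA sketch. Everything below is an assumption for illustration: the kernel name `search_batch`, the double-buffered staging scheme, and the use of plain `cudaMemcpyAsync` on separate streams stand in for Fantasy's actual GPUDirect Async path, in which transfers arrive over the network under GPU control rather than from host memory.

```cuda
// Minimal sketch of the compute/transfer overlap behind a pipelined search.
// Standard CUDA streams and host-to-device copies are used for illustration;
// Fantasy's GPUDirect Async path (GPU-triggered network transfers) is not
// shown. All names (search_batch, buffer sizes, chunk count) are placeholders.
#include <cuda_runtime.h>
#include <cstdio>

// Stand-in for per-chunk graph traversal / distance computation.
__global__ void search_batch(const float* vectors, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // ... distance evaluation and candidate-list updates would go here ...
    }
}

int main() {
    const int    batch = 1 << 20;              // floats per pipeline chunk
    const size_t bytes = batch * sizeof(float);

    float *h_buf = nullptr, *d_buf[2] = {nullptr, nullptr};
    cudaMallocHost((void**)&h_buf, 2 * bytes); // pinned staging buffer
    cudaMalloc((void**)&d_buf[0], bytes);
    cudaMalloc((void**)&d_buf[1], bytes);

    cudaStream_t copy_stream, compute_stream;
    cudaStreamCreate(&copy_stream);
    cudaStreamCreate(&compute_stream);

    cudaEvent_t copied[2], consumed[2];
    for (int b = 0; b < 2; ++b) {
        cudaEventCreate(&copied[b]);
        cudaEventCreate(&consumed[b]);
    }

    const int num_chunks = 8;                  // portions of the remote index
    for (int c = 0; c < num_chunks; ++c) {
        int buf = c % 2;                       // double buffering
        // Do not overwrite a device buffer before the kernel reading it is done.
        if (c >= 2) cudaStreamWaitEvent(copy_stream, consumed[buf], 0);

        // Stage 1: bring the next chunk of vectors onto the GPU. In the real
        // system this data would arrive over the network, not from host memory.
        cudaMemcpyAsync(d_buf[buf], h_buf + (size_t)buf * batch, bytes,
                        cudaMemcpyHostToDevice, copy_stream);
        cudaEventRecord(copied[buf], copy_stream);

        // Stage 2: search this chunk as soon as its transfer completes, while
        // the copy stream is already moving the following chunk.
        cudaStreamWaitEvent(compute_stream, copied[buf], 0);
        search_batch<<<(batch + 255) / 256, 256, 0, compute_stream>>>(
            d_buf[buf], batch);
        cudaEventRecord(consumed[buf], compute_stream);
    }

    cudaDeviceSynchronize();
    printf("pipeline finished\n");

    cudaFree(d_buf[0]);
    cudaFree(d_buf[1]);
    cudaFreeHost(h_buf);
    return 0;
}
```

The double-buffering and per-chunk events keep the transfer of chunk c+1 in flight while chunk c is being searched, which is the same throughput argument the paper makes for overlapping computation with communication; GPUDirect Async additionally removes the CPU from the transfer critical path, which a stream-based sketch like this cannot capture.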

Page Count
5 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing