Fantasy: Efficient Large-scale Vector Search on GPU Clusters with GPUDirect Async
By: Yi Liu, Chen Qian
Potential Business Impact:
Speeds up AI applications by searching huge datasets faster.
Vector similarity search has become a critical component in AI-driven applications such as large language models (LLMs). To achieve high recall and low latency, GPUs are used to exploit massive parallelism for faster query processing. However, as the number of vectors grows, the graph index quickly exceeds the memory capacity of a single GPU, making it infeasible to store and process the entire index on one device. Recent work uses CPU-GPU architectures to keep vectors in CPU memory or on SSDs, but the loading step stalls GPU computation. We present Fantasy, an efficient system that pipelines vector search and data transfer across a GPU cluster with GPUDirect Async. Fantasy overlaps computation and network communication to significantly improve search throughput on large graphs and to support large query batch sizes.
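The core pipelining idea, fetching the next batch of graph data while the GPU searches the current one, can be illustrated with a simple double-buffered loop. This is a minimal, hedged sketch in plain Python (not Fantasy's implementation, which uses GPUDirect Async on real GPU clusters); `fetch` and `search` are hypothetical stand-ins for the transfer and compute stages.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(batch_id):
    # Stand-in for an asynchronous transfer (e.g., pulling a graph
    # partition into GPU memory). Returns a toy batch of values.
    return [batch_id * 10 + i for i in range(4)]

def search(batch):
    # Stand-in for GPU graph traversal over the resident batch.
    return sum(batch)

def pipelined_search(num_batches):
    """Overlap the fetch of batch i+1 with the search of batch i
    (double buffering), so transfers never stall computation."""
    results = []
    with ThreadPoolExecutor(max_workers=1) as io:
        pending = io.submit(fetch, 0)              # prefetch the first batch
        for b in range(num_batches):
            current = pending.result()             # wait for its transfer
            if b + 1 < num_batches:
                pending = io.submit(fetch, b + 1)  # kick off next transfer
            results.append(search(current))        # compute overlaps transfer
    return results
```

In a real system the same structure appears as CUDA streams plus asynchronous RDMA transfers: the search kernel for batch i runs concurrently with the network fetch of batch i+1, hiding transfer latency behind computation.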
Similar Papers
Cloud-Native Vector Search: A Comprehensive Performance Analysis
Databases
Finds information faster in the cloud.
Gorgeous: Revisiting the Data Layout for Disk-Resident High-Dimensional Vector Search
Databases
Finds similar things faster on big data.