ViFusion: In-Network Tensor Fusion for Scalable Video Feature Indexing
By: Yisu Wang, Yixiang Zhu, Xinjiao Li, and more
Potential Business Impact:
Speeds up finding videos by 8 to 22 times.
Large-scale video feature indexing in datacenters depends critically on efficient data transfer. Although in-network computation has emerged as a compelling strategy for accelerating feature extraction and reducing overhead in distributed multimedia systems, harnessing advanced networking resources at both the switch and host levels remains a formidable challenge, compounded by heterogeneous hardware, diverse application requirements, and complex multipath topologies. Existing methods focus primarily on optimizing inference for large neural network models using specialized collective communication libraries, which often degrade under network congestion. To overcome these limitations, we present ViFusion, a communication-aware tensor fusion framework that streamlines distributed video indexing by merging numerous small feature tensors into consolidated, more manageable units. By integrating an in-network computation module and a dedicated tensor fusion mechanism within datacenter environments, ViFusion substantially improves the efficiency of video feature indexing workflows. Deployment results show that ViFusion improves the throughput of the video retrieval system by 8-22x while matching the latency of state-of-the-art systems.
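The core idea, packing many small feature tensors into one large transfer unit so the network moves a few big messages instead of thousands of tiny ones, can be illustrated with a minimal host-side sketch. This is an assumption-laden illustration, not the paper's implementation: the function names fuse_tensors and split_fused are hypothetical, and ViFusion's actual mechanism additionally involves in-network computation on switches, which this sketch omits.

```python
import numpy as np

def fuse_tensors(tensors):
    """Flatten and concatenate many small feature tensors into one
    contiguous buffer, recording shapes so a receiver can split it.
    (Hypothetical helper; not ViFusion's actual API.)"""
    shapes = [t.shape for t in tensors]
    fused = np.concatenate([t.ravel() for t in tensors])
    return fused, shapes

def split_fused(fused, shapes):
    """Invert fuse_tensors: slice the fused buffer back into the
    original tensors using the recorded shapes."""
    out, offset = [], 0
    for shape in shapes:
        size = int(np.prod(shape))
        out.append(fused[offset:offset + size].reshape(shape))
        offset += size
    return out

# Example: 1,000 per-frame feature vectors become one transfer unit
# instead of 1,000 separate network messages.
features = [np.random.rand(512).astype(np.float32) for _ in range(1000)]
fused, shapes = fuse_tensors(features)
restored = split_fused(fused, shapes)
assert all(np.array_equal(a, b) for a, b in zip(features, restored))
```

The benefit of this pattern is that per-message overhead (headers, syscalls, congestion-control ramp-up) is amortized over one consolidated payload, which is what makes fusion attractive for the many small tensors produced by video feature extraction.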
Similar Papers
Generating, Fast and Slow: Scalable Parallel Video Generation with Video Interface Networks
CV and Pattern Recognition
Makes computers create longer, smoother videos faster.
INFNet: A Task-aware Information Flow Network for Large-Scale Recommendation Systems
Information Retrieval
Helps online ads show better things you'll like.
MultiTaskVIF: Segmentation-oriented visible and infrared image fusion via multi-task learning
CV and Pattern Recognition
Combines two pictures to show more detail.