Hyperion: Low-Latency Ultra-HD Video Analytics via Collaborative Vision Transformer Inference
By: Linyi Jiang, Yifei Zhu, Hao Yin, and more
Recent advancements in array-camera videography enable real-time capture of ultra-high-definition (Ultra-HD) video, providing rich visual information over a large field of view. However, promptly processing such data with state-of-the-art transformer-based vision foundation models incurs significant computational overhead for on-device computing or transmission overhead for cloud computing. In this paper, we present Hyperion, the first cloud-device collaborative framework that enables low-latency inference on Ultra-HD vision data using off-the-shelf vision transformers over dynamic networks. Hyperion addresses the computational and transmission bottlenecks of Ultra-HD inference by exploiting intrinsic properties of vision transformer models. Specifically, Hyperion integrates a collaboration-aware importance scorer that identifies critical regions at the patch level, a dynamic scheduler that adaptively adjusts patch transmission quality to balance latency and accuracy under dynamic network conditions, and a weighted ensembler that fuses edge and cloud results to improve accuracy. Experimental results demonstrate that Hyperion improves the frame processing rate by up to 1.61x and accuracy by up to 20.2% compared with state-of-the-art baselines under various network environments.
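To make the three components concrete, the sketch below shows one way they could fit together. It is not the authors' implementation: the abstract does not specify the importance signal, bit costs, or fusion rule, so this assumes a [CLS]-attention proxy for patch importance, hypothetical per-patch bit budgets, and a fixed fusion weight, with illustrative names (`patch_importance`, `schedule_patches`, `ensemble`).

```python
# Minimal sketch (not the paper's implementation) of the three Hyperion-style
# components described above; all constants and function names are assumptions.
import numpy as np

def patch_importance(cls_attention: np.ndarray) -> np.ndarray:
    """Score each patch by the attention the [CLS] token pays to it,
    averaged over heads. Shape: (num_heads, num_patches) -> (num_patches,)."""
    scores = cls_attention.mean(axis=0)
    return scores / scores.sum()              # normalize to a distribution

def schedule_patches(scores: np.ndarray, bandwidth_bps: float, deadline_s: float,
                     hq_bits: int = 6000, lq_bits: int = 800) -> list[int]:
    """Select patches to upload at high quality so that the estimated
    transmission time fits the per-frame deadline; everything else is sent
    at low quality. Bit costs per patch are placeholder values."""
    budget_bits = bandwidth_bps * deadline_s
    order = np.argsort(-scores)               # most important patches first
    hq, used = [], lq_bits * len(scores)      # baseline: all patches low quality
    for idx in order:
        if used + (hq_bits - lq_bits) > budget_bits:
            break                             # no budget left for more upgrades
        hq.append(int(idx))
        used += hq_bits - lq_bits
    return hq                                 # indices upgraded to high quality

def ensemble(edge_probs: np.ndarray, cloud_probs: np.ndarray,
             cloud_weight: float = 0.7) -> np.ndarray:
    """Weighted fusion of on-device and cloud predictions; in practice the
    weight would reflect each side's confidence or historical accuracy."""
    return cloud_weight * cloud_probs + (1.0 - cloud_weight) * edge_probs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    attn = rng.random((12, 196))              # 12 heads, 14x14 patch grid
    scores = patch_importance(attn)
    hq_idx = schedule_patches(scores, bandwidth_bps=20e6, deadline_s=0.033)
    fused = ensemble(rng.dirichlet(np.ones(10)), rng.dirichlet(np.ones(10)))
    print(f"{len(hq_idx)} high-quality patches, predicted class {fused.argmax()}")
```

In the full system the scheduler would react to measured network dynamics frame by frame; the fixed bandwidth and deadline arguments here simply stand in for those measurements.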