TAP-ViTs: Task-Adaptive Pruning for On-Device Deployment of Vision Transformers
By: Zhibo Wang, Zuoyuan Zhang, Xiaoyi Pang, and more
Potential Business Impact:
Lets powerful vision AI run on small phones and other low-power devices.
Vision Transformers (ViTs) have demonstrated strong performance across a wide range of vision tasks, yet their substantial computational and memory demands hinder efficient deployment on resource-constrained mobile and edge devices. Pruning has emerged as a promising direction for reducing ViT complexity. However, existing approaches either (i) produce a single pruned model shared across all devices, ignoring device heterogeneity, or (ii) rely on fine-tuning with device-local data, which is often infeasible given limited on-device resources and strict privacy constraints. As a result, current methods fall short of enabling task-customized ViT pruning in privacy-preserving mobile computing settings. This paper introduces TAP-ViTs, a novel task-adaptive pruning framework that generates device-specific pruned ViT models without requiring access to any raw local data. To infer device-level task characteristics under privacy constraints, we propose a Gaussian Mixture Model (GMM)-based metric dataset construction mechanism: each device fits a lightweight GMM to approximate its private data distribution and uploads only the GMM parameters, from which the cloud selects distribution-consistent samples from public data to construct a task-representative metric dataset for that device. Based on this proxy dataset, we further develop a pruning strategy built on dual-granularity importance evaluation, which jointly measures composite neuron importance and adaptive layer importance, enabling fine-grained, task-aware pruning tailored to each device's computational budget. Extensive experiments across multiple ViT backbones and datasets demonstrate that TAP-ViTs consistently outperforms state-of-the-art pruning methods under comparable compression ratios.
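To make the metric dataset construction concrete, here is a minimal sketch of how the device-cloud exchange could look. The function names, the use of scikit-learn's GaussianMixture, the feature dimensionality, and the top-k log-likelihood selection rule are all illustrative assumptions; the abstract does not specify the paper's exact procedure.

```python
# Hypothetical sketch of GMM-based metric dataset construction (not the
# authors' reference implementation). Assumes private and public data are
# already embedded as fixed-length feature vectors.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_device_gmm(private_features: np.ndarray, n_components: int = 8) -> GaussianMixture:
    # On-device step: fit a lightweight GMM to the private feature
    # distribution. Only the fitted parameters (weights, means,
    # covariances) would be uploaded, never the raw samples.
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag", random_state=0)
    gmm.fit(private_features)
    return gmm

def select_metric_dataset(gmm: GaussianMixture,
                          public_features: np.ndarray,
                          k: int = 1000) -> np.ndarray:
    # Cloud step: score every public sample by its log-likelihood under the
    # uploaded GMM and keep the k most distribution-consistent ones as the
    # device's task-representative metric dataset.
    log_lik = gmm.score_samples(public_features)
    return np.argsort(log_lik)[-k:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    private = rng.normal(loc=2.0, size=(500, 16))   # stand-in private features
    public = rng.normal(loc=0.0, size=(5000, 16))   # stand-in public pool
    gmm = fit_device_gmm(private)
    idx = select_metric_dataset(gmm, public, k=200)
    print(f"Selected {idx.size} public samples for the metric dataset")
```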
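The dual-granularity importance evaluation can be sketched in the same spirit. The composite neuron score used here (weight magnitude scaled by mean activation on the metric dataset) and the conversion of layer importance into per-layer keep ratios are plausible stand-ins, not the paper's actual formulas.

```python
# Hypothetical sketch of dual-granularity importance evaluation: composite
# per-neuron scores plus adaptive per-layer keep ratios under a global budget.
import numpy as np

def composite_neuron_importance(weights: np.ndarray,
                                activations: np.ndarray) -> np.ndarray:
    # weights: (n_neurons, in_dim); activations: (n_samples, n_neurons),
    # collected by running the metric dataset through the layer.
    # Illustrative composite: weight magnitude scaled by mean activation.
    w_mag = np.abs(weights).sum(axis=1)
    a_mean = np.abs(activations).mean(axis=0)
    return w_mag * a_mean

def allocate_keep_ratios(layer_importances: np.ndarray,
                         global_keep_ratio: float) -> np.ndarray:
    # Adaptive layer importance: distribute the global budget so more
    # important layers keep proportionally more neurons, with a floor so
    # no layer is pruned away entirely.
    imp = layer_importances / layer_importances.sum()
    ratios = global_keep_ratio * imp.size * imp
    return np.clip(ratios, 0.05, 1.0)

def neurons_to_keep(importance: np.ndarray, keep_ratio: float) -> np.ndarray:
    # Keep the top-scoring neurons of one layer under its allocated ratio.
    k = max(1, int(round(keep_ratio * importance.size)))
    return np.argsort(importance)[-k:]
```

Under this reading, a uniform layer-importance vector reduces to uniform pruning at the global ratio, while skewed importance shifts the budget toward the layers the metric dataset identifies as task-critical.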
Similar Papers
HEART-VIT: Hessian-Guided Efficient Dynamic Attention and Token Pruning in Vision Transformer
CV and Pattern Recognition
Makes AI image tools faster and less power-hungry.
A Distributed Framework for Privacy-Enhanced Vision Transformers on the Edge
Distributed, Parallel, and Cluster Computing
Keeps your pictures private when using smart apps.
Back to Fundamentals: Low-Level Visual Features Guided Progressive Token Pruning
CV and Pattern Recognition
Makes AI see details with less computer power.