Adaptive Pareto-Optimal Token Merging for Edge Transformer Models in Semantic Communication

Published: September 11, 2025 | arXiv ID: 2509.09168v1

By: Omar Erak, Omar Alhussein, Hatem Abou-Zeid, and more

Potential Business Impact:

Lets vision AI models run faster and use less wireless bandwidth on phones and other edge devices.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large-scale transformer models have emerged as a powerful tool for semantic communication systems, enabling edge devices to extract rich representations for robust inference across noisy wireless channels. However, their substantial computational demands remain a major barrier to practical deployment in resource-constrained 6G networks. In this paper, we present a training-free framework for adaptive token merging in pretrained vision transformers to jointly reduce inference time and transmission resource usage. We formulate the selection of per-layer merging proportions as a multi-objective optimization problem to balance accuracy and computational cost. We employ Gaussian process-based Bayesian optimization to construct a Pareto frontier of optimal configurations, enabling flexible runtime adaptation to dynamic application requirements and channel conditions. Extensive experiments demonstrate that our method consistently outperforms other baselines and achieves significant reductions in floating-point operations while maintaining competitive accuracy across a wide range of signal-to-noise ratio (SNR) conditions. Additional results highlight the effectiveness of adaptive policies that adjust merging aggressiveness in response to channel quality, providing a practical mechanism to trade off latency and semantic fidelity on demand. These findings establish a scalable and efficient approach for deploying transformer-based semantic communication in future edge intelligence systems.
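The core of the approach is treating each candidate set of per-layer merging proportions as a point in an accuracy-versus-compute trade-off space and keeping only the non-dominated (Pareto-optimal) configurations. The sketch below illustrates just that dominance filter in plain Python; the accuracy and GFLOPs numbers are hypothetical, and the Gaussian-process Bayesian optimization the paper uses to propose candidates is omitted.

```python
def pareto_frontier(configs):
    """Return the configs not dominated by any other.

    A config dominates another if it has accuracy >= and FLOPs <=,
    with at least one of the two strictly better.
    """
    frontier = []
    for c in configs:
        dominated = any(
            o["acc"] >= c["acc"] and o["flops"] <= c["flops"]
            and (o["acc"] > c["acc"] or o["flops"] < c["flops"])
            for o in configs
        )
        if not dominated:
            frontier.append(c)
    return frontier

# Hypothetical candidates: per-layer merge ratios mapped to
# measured accuracy and GFLOPs (illustrative numbers only).
candidates = [
    {"ratios": [0.0, 0.0, 0.0], "acc": 0.810, "flops": 17.6},
    {"ratios": [0.3, 0.3, 0.3], "acc": 0.790, "flops": 11.2},
    {"ratios": [0.5, 0.5, 0.5], "acc": 0.740, "flops": 7.9},
    {"ratios": [0.5, 0.3, 0.1], "acc": 0.795, "flops": 10.4},
    {"ratios": [0.7, 0.7, 0.7], "acc": 0.610, "flops": 5.0},
]

frontier = pareto_frontier(candidates)
# The uniform-0.3 policy is dominated by the non-uniform [0.5, 0.3, 0.1]
# policy (higher accuracy at lower cost), so it drops off the frontier.
```

At runtime, a controller can then pick the frontier point matching the current latency budget or channel quality, which is the adaptive behavior the abstract describes.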

Country of Origin
🇫🇮 🇨🇦 🇦🇪 Finland, Canada, United Arab Emirates

Page Count
6 pages

Category
Computer Science:
Machine Learning (CS)