Score: 2

UniFormer: Unified and Efficient Transformer for Reasoning Across General and Custom Computing

Published: November 11, 2025 | arXiv ID: 2511.08135v1

By: Zhuoheng Ran, Chong Wu, Renjie Xu and more

Potential Business Impact:

Enables Transformer models to run efficiently on both general-purpose hardware (GPUs) and customised hardware (FPGAs and ASICs).

Business Areas:
Application Specific Integrated Circuit (ASIC) Hardware

The success of neural networks such as convolutional neural networks (CNNs) has been largely attributed to their effective and widespread deployment on customised computing platforms, including field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). In the current era, Transformer-based architectures underpin the majority of state-of-the-art (SOTA) large models, which are also increasingly deployed on customised computing hardware for low-power and real-time applications. However, the fundamentally different parallel computation paradigms of general-purpose and customised computing often force compromises in model transfer and deployability, typically at the cost of complexity, efficiency or accuracy. Moreover, many cross-platform optimisation principles remain underexplored in existing studies. This paper introduces UniFormer, a unified and efficient Transformer architecture for both general-purpose and customised computing platforms. By enabling higher parallelism and compute-storage fusion, UniFormer achieves SOTA accuracy and latency on GPUs while exhibiting strong adaptability on FPGAs. To the best of our knowledge, this is the first efficient Transformer work that jointly considers both general-purpose and customised computing architectures.

Country of Origin
🇨🇳 🇭🇰 China, Hong Kong

Page Count
14 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing