ParaFormer: Shallow Parallel Transformers with Progressive Approximation
By: Wei Wang, Xiao-Yong Wei, Qing Li
Potential Business Impact:
Makes AI models faster and smaller.
The widespread 'deeper is better' philosophy has driven the creation of architectures like ResNet and Transformer, which achieve high performance by stacking numerous layers. However, increasing model depth comes with challenges such as longer training times, higher inference latency, and impracticality on resource-constrained devices. To address these issues, we propose ParaFormer, a shallow Transformer architecture designed for true parallelism in both structure and computation. By formulating standard Transformers as function approximators in closed form, our theoretical analysis shows that their performance relies on inter-layer collaboration for progressive approximation, rather than on depth itself. While deep Transformers enforce this collaboration through sequential designs, we demonstrate that such collaboration is not inherently tied to sequential structures. ParaFormer removes the sequential constraint by organizing layers into parallel branches and enforcing inter-layer collaboration algorithmically. Specifically, we implement progressive approximation, ensuring that each new branch further reduces the loss left by preceding branches, enabling faster convergence. Extensive experiments validate ParaFormer's effectiveness, outperforming standard Transformers like ViT. Moreover, ParaFormer supports up to 15.07x model compression and facilitates model expansion for adaptive continuous learning. Experimental results on multi-GPU deployment demonstrate that ParaFormer is 3.30x faster than widely used parallelism solutions such as FairScale. These advancements stem from our closed-form formulation of Transformers based on the Universal Approximation Theorem, which not only explains the "depth belief" but also opens new avenues for designing efficient Transformer architectures. Source code: https://(open-upon-acceptance)
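To make the idea of parallel branches with progressive approximation concrete, the following is a minimal sketch of how such a model and training objective could be organized. It is an illustrative assumption based only on the abstract, not the authors' implementation; the class `ParallelBranches`, the `upto` argument, and the `progressive_step` helper are hypothetical names, and the use of `nn.TransformerEncoderLayer` as the per-branch unit is a simplification.

```python
# Hypothetical sketch: shallow parallel Transformer branches trained with a
# residual-style "progressive approximation" objective. Not the ParaFormer
# source code; names and structure are assumptions for illustration only.
import torch
import torch.nn as nn


class ParallelBranches(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_branches=4, n_classes=10):
        super().__init__()
        # Each branch is a single shallow encoder layer; no branch consumes
        # another branch's output, so branches can run in parallel.
        self.branches = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_branches)
        )
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x, upto=None):
        # Sum the outputs of the first `upto` branches (all by default).
        active = self.branches if upto is None else self.branches[:upto]
        out = sum(branch(x) for branch in active)
        return self.head(out.mean(dim=1))


def progressive_step(model, x, y, criterion):
    # Progressive approximation (assumed form): branch k is trained so that
    # the partial ensemble of branches 1..k further reduces the loss left by
    # branches 1..k-1, analogous to fitting residuals.
    losses = []
    for k in range(1, len(model.branches) + 1):
        logits = model(x, upto=k)
        losses.append(criterion(logits, y))
    return torch.stack(losses).mean()


if __name__ == "__main__":
    model = ParallelBranches()
    x = torch.randn(8, 16, 64)            # (batch, tokens, d_model)
    y = torch.randint(0, 10, (8,))
    loss = progressive_step(model, x, y, nn.CrossEntropyLoss())
    loss.backward()
    print(float(loss))
```

Because the branches are structurally independent, each one could in principle be placed on a separate GPU and evaluated concurrently, which is the kind of deployment the multi-GPU speedup claim in the abstract refers to; the collaboration between branches comes from the training objective rather than from a sequential forward pass.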
Similar Papers
CoFormer: Collaborating with Heterogeneous Edge Devices for Scalable Transformer Inference
Distributed, Parallel, and Cluster Computing
Lets big AI models run on small devices.
FlatFormer: A Flat Transformer Knowledge Tracing Model Based on Cognitive Bias Injection
Artificial Intelligence
Helps computers track student learning faster.
UniFormer: Unified and Efficient Transformer for Reasoning Across General and Custom Computing
Distributed, Parallel, and Cluster Computing
Makes AI models work fast on any computer.