Flash Multi-Head Feed-Forward Network
By: Minshen Zhang, Xiang Hu, Jianguo Li, and more
Potential Business Impact:
Makes AI smarter and faster using less memory.
We explore the Multi-Head FFN (MH-FFN) as a replacement for the FFN in the Transformer architecture, motivated by the structural similarity between single-head attention and the FFN. While multi-head mechanisms enhance expressivity in attention, naively applying them to FFNs faces two challenges: memory consumption that scales with the head count, and an increasingly imbalanced ratio between the growing intermediate size and the fixed head dimension as models scale, which degrades scalability and expressive power. To address these challenges, we propose Flash Multi-Head FFN (FlashMHF), built on two key innovations: an I/O-aware fused kernel that computes outputs online in SRAM, akin to FlashAttention, and a design based on dynamically weighted parallel sub-networks that maintains a balanced ratio between the intermediate and head dimensions. Validated on models from 128M to 1.3B parameters, FlashMHF consistently improves perplexity and downstream task accuracy over SwiGLU FFNs, while reducing peak memory usage by 3-5x and accelerating inference by up to 1.08x. Our work establishes the multi-head design as a superior architectural principle for FFNs and presents FlashMHF as a powerful, efficient, and scalable alternative to the FFN in Transformers.
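To make the second idea concrete, below is a minimal PyTorch-style sketch of a multi-head FFN in which parallel per-head SwiGLU sub-networks are combined with input-dependent weights. This is an illustration only, not the authors' FlashMHF: the class and parameter names (MultiHeadFFN, num_branches, d_inter, the router projection) are hypothetical, and this plain implementation does not reflect the paper's I/O-aware fused SRAM kernel or its exact weighting scheme.

```python
# Hypothetical sketch of a multi-head FFN with dynamically weighted parallel
# sub-networks. Not the authors' FlashMHF kernel; names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHeadFFN(nn.Module):
    def __init__(self, d_model: int, num_heads: int, num_branches: int, d_inter: int):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = d_model // num_heads
        self.num_branches = num_branches
        # Parallel sub-networks: each branch applies a small SwiGLU FFN per head.
        self.w_gate = nn.Parameter(torch.randn(num_branches, num_heads, self.head_dim, d_inter) * 0.02)
        self.w_up = nn.Parameter(torch.randn(num_branches, num_heads, self.head_dim, d_inter) * 0.02)
        self.w_down = nn.Parameter(torch.randn(num_branches, num_heads, d_inter, self.head_dim) * 0.02)
        # Router producing input-dependent weights over the parallel branches.
        self.router = nn.Linear(d_model, num_branches)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> split into heads: (batch, seq, H, head_dim)
        b, s, _ = x.shape
        xh = x.view(b, s, self.num_heads, self.head_dim)
        # Per-branch, per-head SwiGLU.
        gate = torch.einsum("bshd,rhdi->bsrhi", xh, self.w_gate)
        up = torch.einsum("bshd,rhdi->bsrhi", xh, self.w_up)
        hidden = F.silu(gate) * up
        out = torch.einsum("bsrhi,rhid->bsrhd", hidden, self.w_down)
        # Dynamic weights over the parallel branches (shared across heads here).
        weights = F.softmax(self.router(x), dim=-1)  # (batch, seq, num_branches)
        out = torch.einsum("bsrhd,bsr->bshd", out, weights)
        return out.reshape(b, s, -1)


# Usage: with d_model=512, num_heads=8, num_branches=4, d_inter=256,
# ffn(torch.randn(2, 16, 512)) returns a tensor of shape (2, 16, 512).
ffn = MultiHeadFFN(d_model=512, num_heads=8, num_branches=4, d_inter=256)
y = ffn(torch.randn(2, 16, 512))
```

In this naive form the per-branch, per-head intermediates are materialized in full; the paper's fused kernel presumably avoids that by computing outputs online in SRAM, analogous to how FlashAttention avoids materializing the attention matrix.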
Similar Papers
Mixture-of-Channels: Exploiting Sparse FFNs for Efficient LLMs Pre-Training and Inference
Machine Learning (CS)
Makes AI models use less computer memory.
Attention Is Not All You Need: The Importance of Feedforward Networks in Transformer Models
Computation and Language
Makes AI learn better with fewer parts.
Fractional neural attention for efficient multiscale sequence processing
Machine Learning (CS)
Makes AI smarter by copying how brains pay attention.