Rethinking Dense Linear Transformations: Stagewise Pairwise Mixing (SPM) for Near-Linear Training in Neural Networks
By: Peter Farag
Dense linear layers are a dominant source of computational and parametric cost in modern machine learning models, owing to their quadratic complexity, and they are often misaligned with the compositional structure of learned representations. We introduce Stagewise Pairwise Mixers (SPM), a structured linear operator that replaces dense matrices with a composition of sparse pairwise-mixing stages. An SPM layer implements a global linear transformation in $O(nL)$ time with $O(nL)$ parameters, where $L$ is typically constant or $\log_2 n$, and admits exact closed-form forward and backward computations. SPM is designed as a drop-in replacement for dense linear layers in feedforward networks, recurrent architectures, and attention mechanisms. We derive complete forward and backward expressions for two parameterizations: an orthogonal, norm-preserving rotation-based variant and a fully general $2 \times 2$ mixing variant. Beyond computational savings, the stagewise structure of SPM induces an explicit compositional inductive bias that constrains model capacity and improves generalization when aligned with task structure. We present proof-of-concept experiments demonstrating substantial reductions in wall-clock cost and improved accuracy on structured learning problems, while retaining competitive performance on real-world benchmarks.
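To make the $O(nL)$ cost concrete, the following is a minimal sketch of the forward pass of the rotation-based (orthogonal) SPM variant. The abstract does not specify the pairing scheme or parameter layout, so this assumes a butterfly-style XOR pairing with stride $2^s$ at stage $s$; the function name `spm_forward_rotation` and the `thetas` parameter array are hypothetical, introduced here only for illustration. The general $2 \times 2$ variant would replace each rotation with an arbitrary learned 2x2 matrix per pair.

```python
# Illustrative sketch only (assumed pairing and parameter layout, not the
# paper's exact formulation).
import numpy as np

def spm_forward_rotation(x, thetas):
    """Rotation-based SPM forward pass.

    x      : (n,) input vector, n a power of two.
    thetas : (L, n // 2) rotation angles, one per pair per stage.
    Cost is O(n * L): each stage applies n/2 independent 2x2 rotations.
    """
    n = x.shape[0]
    L = thetas.shape[0]
    y = x.copy()
    for s in range(L):
        stride = 2 ** (s % int(np.log2(n)))   # butterfly-style stride (assumed)
        idx = np.arange(n)
        partner = idx ^ stride                # XOR pairing, as in FFT butterflies
        lo = idx[idx < partner]               # first element of each pair
        hi = partner[idx < partner]           # second element of each pair
        c, sn = np.cos(thetas[s]), np.sin(thetas[s])
        a, b = y[lo], y[hi]
        y[lo] = c * a - sn * b                # 2x2 rotation applied to each pair
        y[hi] = sn * a + c * b
    return y

# Toy usage: a 3-stage SPM on an 8-dimensional input.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
thetas = rng.standard_normal((3, 4))
y = spm_forward_rotation(x, thetas)
print(np.allclose(np.linalg.norm(x), np.linalg.norm(y)))  # True: rotations preserve norm
```

With $L = \log_2 n$ stages under this pairing, every output coordinate depends on every input coordinate, so the composition realizes a global linear map despite each stage being sparse.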