Subjective Depth and Timescale Transformers: Learning Where and When to Compute
By: Frederico Wieser, Martin Benfeghoul, Haitham Bou Ammar, and more
Potential Business Impact:
Makes AI faster and cheaper by letting it skip computation on the predictable parts of its input.
The rigid, uniform allocation of computation in standard Transformer (TF) architectures can limit their efficiency and scalability, particularly for large-scale models and long sequences. Addressing this, we introduce Subjective Depth Transformers (SDT) and Subjective Timescale Transformers (STT), two distinct architectures that leverage Bayesian surprise signals to dynamically route computation, learning where and when to compute within decoder-only TFs. SDT augments a decoder-only stack with alternating Decision and Dynamic layers: a Decision layer computes a full-block 'posterior' and a lightweight 'prior,' while a Dynamic layer employs fixed-capacity Top-K routing based on Bayesian surprise (Expected and Unexpected Change), maintaining a static compute graph. STT extends this conditional computation to the temporal domain: a transition network predicts residual updates, forming a temporal 'change hypothesis' that informs a router to dynamically execute or bypass TF blocks for each token, managing KV-cache contributions. Both architectures exhibit the predicted shift from novelty-driven to prediction-driven gating over training, suggesting alignment with surprise-based principles. While operating at reduced capacity, they offer preliminary insights into the compute-accuracy trade-offs of conditional computation. The proposed architectures establish a flexible framework for efficiency, reducing self-attention computation by 75% and KV-cache requirements by 50% within each compute-skipping layer, charting a pathway toward more efficient models.
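To make the routing idea concrete, below is a minimal PyTorch sketch of surprise-gated, fixed-capacity Top-K routing in the spirit of SDT's Dynamic layers. It is not the authors' implementation: the names (`DynamicLayer`, `posterior`, `prior`, `k`) are hypothetical, a standard encoder-style block stands in for the paper's decoder block, and the two surprise channels (Expected and Unexpected Change) are collapsed into a single disagreement score between the Decision layer's posterior and prior.

```python
import torch
import torch.nn as nn


class DynamicLayer(nn.Module):
    """Illustrative surprise-gated Dynamic layer (sketch, not the paper's code)."""

    def __init__(self, d_model: int, n_heads: int, k: int):
        super().__init__()
        self.k = k  # fixed capacity: how many tokens are routed through the block
        self.block = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)

    def forward(self, x, posterior, prior):
        # Surprise proxy: per-token disagreement between the Decision layer's
        # full-block 'posterior' and its lightweight 'prior'.
        surprise = (posterior - prior).pow(2).mean(dim=-1)          # (batch, seq)

        # Fixed-capacity Top-K selection keeps the compute graph static.
        top_idx = surprise.topk(self.k, dim=-1).indices             # (batch, k)
        gather_idx = top_idx.unsqueeze(-1).expand(-1, -1, x.size(-1))

        # Only the K most surprising tokens pass through the expensive block;
        # the rest are carried forward unchanged (bypass).
        selected = torch.gather(x, 1, gather_idx)                   # (batch, k, d)
        processed = self.block(selected)
        return x.scatter(1, gather_idx, processed)


# Tiny usage example: route 4 of 16 tokens (25%) through the block.
x = torch.randn(2, 16, 64)
layer = DynamicLayer(d_model=64, n_heads=4, k=4)
out = layer(x, posterior=torch.randn_like(x), prior=torch.randn_like(x))
```

With the capacity set to a quarter of the sequence, roughly 75% of the self-attention work in the gated layer is avoided, which is the kind of per-layer reduction the abstract quotes; the analogous STT mechanism applies the same execute-or-bypass decision along the temporal axis, additionally deciding which tokens contribute to the KV-cache.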
Similar Papers
Learning Spatial Decay for Vision Transformers
CV and Pattern Recognition
Makes computer vision better at understanding pictures.
Three-dimensional attention Transformer for state evaluation in real-time strategy games
Machine Learning (CS)
Helps game players understand battles faster and better.
STAS: Spatio-Temporal Adaptive Computation Time for Spiking Transformers
Machine Learning (CS)
Makes AI see faster and use less power.