Score: 2

Reversible Deep Equilibrium Models

Published: September 16, 2025 | arXiv ID: 2509.12917v1

By: Sam McCallum, Kamran Arora, James Foster

Potential Business Impact:

Enables more stable and efficient training of implicit AI models by computing exact gradients with far fewer computation steps.

Business Areas:
Artificial Intelligence, Science and Engineering

Deep Equilibrium Models (DEQs) are an interesting class of implicit models in which the model output is implicitly defined as the fixed point of a learned function. These models have been shown to outperform explicit (fixed-depth) models on large-scale tasks by trading many deep layers for a single layer that is iterated many times. However, gradient calculation through DEQs is approximate. This approximation often leads to unstable training dynamics, which must be mitigated with regularisation or many additional function evaluations. Here, we introduce Reversible Deep Equilibrium Models (RevDEQs), which allow exact gradient calculation, require no regularisation, and use far fewer function evaluations than DEQs. We show that RevDEQs achieve state-of-the-art performance on language modelling and image classification tasks compared with comparable implicit and explicit models.
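To make the fixed-point idea concrete, below is a minimal sketch of a standard DEQ forward pass: the layer output z* satisfies z* = f(z*, x), and is found here by simple iteration until convergence. The tanh layer, the weight shapes, and the plain Picard iteration are illustrative assumptions; this is not the authors' RevDEQ construction or their exact-gradient scheme.

```python
import numpy as np

# Minimal sketch of a Deep Equilibrium (DEQ) forward pass: a single learned
# layer f(z, x) is iterated until its output stops changing, and that fixed
# point is taken as the model output. All choices below (tanh layer, random
# weights, plain fixed-point iteration) are illustrative assumptions only.

rng = np.random.default_rng(0)
d_in, d_hidden = 8, 16

# Stand-ins for trained parameters of the implicit layer.
W = 0.1 * rng.normal(size=(d_hidden, d_hidden))
U = 0.5 * rng.normal(size=(d_hidden, d_in))
b = np.zeros(d_hidden)

def f(z, x):
    """One application of the implicit layer: z_next = tanh(W z + U x + b)."""
    return np.tanh(W @ z + U @ x + b)

def deq_forward(x, max_iters=100, tol=1e-6):
    """Iterate f until z approximates its fixed point z* = f(z*, x)."""
    z = np.zeros(d_hidden)
    for i in range(max_iters):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next, i + 1
        z = z_next
    return z, max_iters

x = rng.normal(size=d_in)
z_star, n_evals = deq_forward(x)
print(f"converged in {n_evals} function evaluations")
print("fixed-point residual:", np.linalg.norm(f(z_star, x) - z_star))
```

In practice, the number of function evaluations needed for convergence (and for the backward pass, which in standard DEQs relies on an approximate implicit-function-theorem gradient) is the main cost the paper targets; RevDEQs are reported to need far fewer evaluations while giving exact gradients.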


Page Count
16 pages

Category
Computer Science: Machine Learning (cs.LG)