Equivariant Deep Equilibrium Models for Imaging Inverse Problems
By: Alexander Mehta, Ruangrawee Kitichotkul, Vivek K. Goyal, and more
Potential Business Impact:
Trains AI to fix images without perfect examples.
Equivariant imaging (EI) enables training signal reconstruction models without requiring ground truth data by leveraging signal symmetries. Deep equilibrium models (DEQs) are a powerful class of neural networks where the output is a fixed point of a learned operator. However, training DEQs with complex EI losses requires implicit differentiation through fixed-point computations, whose implementation can be challenging. We show that backpropagation can be implemented modularly, simplifying training. Experiments demonstrate that DEQs trained with implicit differentiation outperform those trained with Jacobian-free backpropagation and other baseline methods. Additionally, we find evidence that EI-trained DEQs approximate the proximal map of an invariant prior.
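To make the abstract's two key ingredients concrete, here is a minimal sketch (not the authors' code) of a DEQ forward pass and implicit differentiation through its fixed point. The toy layer `f(z) = tanh(W z + x)` and the quadratic loss are illustrative assumptions; the point is the mechanism: iterate to the fixed point z* = f(z*), then obtain the gradient by solving a linear system with (I - ∂f/∂z)ᵀ via the implicit function theorem, rather than backpropagating through every iteration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.standard_normal((n, n))
W *= 0.4 / np.linalg.norm(W, 2)  # spectral norm < 1, so f is a contraction in z
x = rng.standard_normal(n)

def f(z):
    """Toy DEQ layer: z -> tanh(W z + x)."""
    return np.tanh(W @ z + x)

# Forward pass: fixed-point iteration to z* = f(z*).
z = np.zeros(n)
for _ in range(200):
    z = f(z)

# Implicit differentiation for the loss L = 0.5 ||z*||^2.
# With D = diag(1 - tanh^2(W z* + x)), the Jacobians are
#   J_z = D W  (w.r.t. z) and J_x = D (w.r.t. x),
# and the implicit function theorem gives
#   dL/dx = J_x^T (I - J_z)^{-T} (dL/dz*).
d = 1.0 - np.tanh(W @ z + x) ** 2
Jz = d[:, None] * W
v = np.linalg.solve((np.eye(n) - Jz).T, z)  # (I - J_z)^{-T} z*
grad = d * v                                # J_x^T v, since J_x = diag(d)

# Finite-difference check of the implicit gradient.
eps = 1e-6
fd = np.empty(n)
for i in range(n):
    xp = x.copy()
    xp[i] += eps
    zp = np.zeros(n)
    for _ in range(200):
        zp = np.tanh(W @ zp + xp)
    fd[i] = (0.5 * zp @ zp - 0.5 * z @ z) / eps

print("max |implicit - finite diff| =", np.max(np.abs(grad - fd)))
```

Jacobian-free backpropagation, which the paper compares against, would replace the linear solve with the cruder approximation (I - J_z)⁻ᵀ ≈ I; the experiments cited above indicate that the exact implicit gradient trains better under EI losses.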
Similar Papers
Reversible Deep Equilibrium Models
Machine Learning (CS)
Makes AI learn better with fewer steps.
DDEQs: Distributional Deep Equilibrium Models through Wasserstein Gradient Flows
Machine Learning (CS)
Helps computers understand shapes and groups of dots.
Gradient flow for deep equilibrium single-index models
Machine Learning (CS)
Makes super-deep computer brains learn faster.