Drawback of Enforcing Equivariance and its Compensation via the Lens of Expressive Power
By: Yuzhu Chen, Tian Qin, Xinmei Tian, and more
Potential Business Impact:
Makes smart computer programs learn better with less data.
Equivariant neural networks encode symmetry as an inductive bias and have achieved strong empirical performance across a wide range of domains. However, their expressive power is not yet well understood. Focusing on 2-layer ReLU networks, this paper investigates how equivariance constraints affect the expressivity of equivariant and layer-wise equivariant networks. By examining the boundary hyperplanes and the channel vectors of ReLU networks, we construct an example showing that equivariance constraints can strictly limit expressive power. We then demonstrate that this drawback can be compensated for by enlarging the model size. Furthermore, we show that despite the larger model size, the resulting architecture can still correspond to a hypothesis space of lower complexity, implying superior generalizability for equivariant networks.
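To make the setup concrete, here is a minimal sketch (my own illustration, not code from the paper) of a 2-layer ReLU network and a symmetrized version built by group averaging. It uses the sign-flip group acting on the input and a scalar output, i.e. the invariant special case of equivariance; the names `relu_net` and `invariant_net` and the choice of group are assumptions for illustration only. The point it mirrors from the abstract: the constrained width-m network behaves like an unconstrained network of width 2m whose boundary hyperplanes are tied in pairs along the group orbit, so the constraint trades expressive freedom for a larger but lower-complexity architecture.

```python
# Minimal sketch, assuming a sign-flip group G = {I, -I} and a scalar (invariant) output.
# Not the paper's construction; just an illustration of weight tying under symmetrization.
import numpy as np

rng = np.random.default_rng(0)
d, m = 3, 4                       # input dimension, number of hidden channels

W = rng.standard_normal((m, d))   # hidden weights: one boundary-hyperplane normal per channel
b = rng.standard_normal(m)        # hidden biases
a = rng.standard_normal(m)        # output-layer channel coefficients


def relu_net(x, W, b, a):
    """Plain 2-layer ReLU network: f(x) = sum_c a_c * relu(<w_c, x> + b_c)."""
    return a @ np.maximum(W @ x + b, 0.0)


group = [np.eye(d), -np.eye(d)]   # sign-flip group acting on the input


def invariant_net(x, W, b, a):
    """G-invariant network obtained by averaging f over the group orbit of x."""
    return np.mean([relu_net(g @ x, W, b, a) for g in group])


# The averaged width-m network equals an unconstrained width-2m network whose
# parameters are tied along the group orbit: channels (W, b, a/2) and (-W, b, a/2).
W_big = np.concatenate([W, -W])            # each hyperplane duplicated along the orbit
b_big = np.concatenate([b, b])
a_big = np.concatenate([a, a]) / len(group)

x = rng.standard_normal(d)
print(invariant_net(x, W, b, a))           # constrained model with m free channels
print(relu_net(x, W_big, b_big, a_big))    # same function, written as a width-2m network
```

The two printed values agree: enlarging the width restores the function the constraint would otherwise rule out, while the number of free parameters stays that of the width-m model, which is the intuition behind the lower-complexity hypothesis space claimed in the abstract.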
Similar Papers
A Tale of Two Symmetries: Exploring the Loss Landscape of Equivariant Models
Machine Learning (CS)
Makes smart computers learn better by fixing their rules.
On Universality Classes of Equivariant Networks
Machine Learning (CS)
Makes AI learn better by understanding shapes.
On Universality of Deep Equivariant Networks
Machine Learning (Stat)
Makes AI learn more things with less data.