A Single Architecture for Representing Invariance Under Any Space Group
By: Cindy Y. Zhang, Elif Ertekin, Peter Orbanz, and more
Incorporating known symmetries in data into machine learning models has consistently improved predictive accuracy, robustness, and generalization. However, achieving exact invariance to a specific symmetry typically requires designing a bespoke architecture for each group of symmetries, limiting scalability and preventing knowledge transfer across related symmetries. For the space groups, the symmetries critical to modeling crystalline solids in materials science and condensed matter physics, this challenge is particularly salient: there are 230 such groups in three dimensions. In this work, we present a new approach to these crystallographic symmetries by developing a single machine learning architecture that adapts its weights automatically to enforce invariance to any input space group. Our approach is based on constructing symmetry-adapted Fourier bases through an explicit characterization of the constraints that group operations impose on Fourier coefficients. Encoding these constraints into a neural network layer enables weight sharing across different space groups, allowing the model to leverage structural similarities between groups and overcome data sparsity when limited measurements are available for specific groups. We demonstrate the effectiveness of this approach in achieving competitive performance on material property prediction tasks and in performing zero-shot generalization to unseen groups.
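The core constraint the abstract describes can be illustrated in a toy setting. If a function f(x) = Σ_k c_k e^{ik·x} is invariant under an orthogonal operation R, its Fourier coefficients must satisfy c_{Rk} = c_k, so coefficients are constant on group orbits of frequencies. The sketch below, using only the rotational part of a small point group (C4, 90° rotations) as a stand-in, projects random coefficients onto the invariant subspace by orbit averaging; the paper's full construction for space groups also handles translational components (glides and screws), which this minimal example omits.

```python
import numpy as np

# Cyclic group C4 of 90-degree rotations in 2D: a toy stand-in for the
# rotational part of a space group (translational parts are omitted here).
rot90 = np.array([[0, -1], [1, 0]])
GROUP = [np.linalg.matrix_power(rot90, n) for n in range(4)]

K = 3  # frequency cutoff; 90-degree rotations permute this square grid of frequencies
freqs = [(kx, ky) for kx in range(-K, K + 1) for ky in range(-K, K + 1)]
idx = {k: i for i, k in enumerate(freqs)}

rng = np.random.default_rng(0)
c = rng.normal(size=len(freqs)) + 1j * rng.normal(size=len(freqs))

# Invariance f(Rx) = f(x) forces c[Rk] = c[k]: project onto that constraint
# set by averaging each coefficient over its group orbit (Reynolds operator).
c_sym = np.zeros_like(c)
for k, i in idx.items():
    orbit = [tuple(g @ np.array(k)) for g in GROUP]
    c_sym[i] = np.mean([c[idx[o]] for o in orbit])

def f(x, coef):
    """Evaluate the (real part of the) Fourier series at point x."""
    return sum(coef[i] * np.exp(1j * np.dot(k, x)) for k, i in idx.items()).real

# Check: the symmetrized function takes the same value on the whole orbit of x.
x = np.array([0.37, -1.2])
vals = [f(g @ x, c_sym) for g in GROUP]
print(np.allclose(vals, vals[0]))
```

In the architecture described above, such orbit constraints are not applied as a post-hoc projection but are encoded directly into a network layer, so that one set of shared weights can realize the invariant subspace of any given space group.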