Representing spherical tensors with scalar-based machine-learning models
By: Michelangelo Domina, Filippo Bigi, Paolo Pegolo, and more
Potential Business Impact:
Lets simpler, faster machine-learning models predict direction-dependent 3D properties.
Rotational symmetry plays a central role in physics, providing an elegant framework to describe how the properties of 3D objects -- from atoms to the macroscopic scale -- transform under rigid rotations. Equivariant models of 3D point clouds can approximate structure-property relations in a way that is fully consistent with the structure of the rotation group, by combining intermediate representations that are themselves spherical tensors. The symmetry constraints, however, make this approach computationally demanding and cumbersome to implement, which motivates increasingly popular unconstrained architectures that learn approximate symmetries as part of the training process. In this work, we explore a third route to this learning problem, in which equivariant functions are expressed as the product of a scalar function of the point-cloud coordinates and a small basis of tensors with the appropriate symmetry. We also propose approximations of the general expressions that, while lacking universal approximation properties, are fast, simple to implement, and accurate in practical settings.
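The core idea described in the abstract -- writing an equivariant output as invariant scalar coefficients multiplying a small symmetry-carrying tensor basis -- can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; the toy `scalar_model`, its weights, and the choice of unit displacement vectors as a rank-1 basis are assumptions made purely for illustration. It predicts a vector attached to one point of a point cloud and checks numerically that the prediction rotates exactly with the input.

```python
# Minimal sketch (assumed, not the paper's method): an equivariant rank-1 output
# built as a scalar function of rotation-invariant features times basis vectors.
import numpy as np

def scalar_model(distances, weights):
    """Toy invariant 'model': a smooth scalar function of pairwise distances."""
    return np.tanh(weights[0] * distances + weights[1])

def equivariant_vector_output(positions, center_index, weights):
    """Predict a vector on one point as sum_j f(|r_ij|) * r_ij / |r_ij|.

    The unit displacement vectors carry the symmetry; the coefficients depend
    only on invariants, so the output transforms exactly as a vector.
    """
    displacements = positions - positions[center_index]
    displacements = np.delete(displacements, center_index, axis=0)
    distances = np.linalg.norm(displacements, axis=1)
    coefficients = scalar_model(distances, weights)  # invariant scalars
    return (coefficients[:, None] * displacements / distances[:, None]).sum(axis=0)

# Numerical check of equivariance under a random rotation.
rng = np.random.default_rng(0)
points = rng.normal(size=(5, 3))
w = np.array([0.7, -0.2])

# Random proper rotation from a QR decomposition, projected onto SO(3).
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
Q *= np.sign(np.linalg.det(Q))

out = equivariant_vector_output(points, 0, w)
out_rotated_input = equivariant_vector_output(points @ Q.T, 0, w)
print(np.allclose(out_rotated_input, Q @ out))  # True: output rotates with the input
```

In a realistic setting the scalar coefficients would come from a trained invariant model (the "scalar-based machine-learning model" of the title) rather than a hand-written function, and the basis would cover higher-rank spherical tensors, but the decomposition into invariant coefficients times symmetry-adapted basis tensors is the same.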
Similar Papers
Training Dynamics of Learning 3D-Rotational Equivariance
Machine Learning (CS)
Teaches computers to see 3D shapes perfectly.
SO(3)-Equivariant Neural Networks for Learning Vector Fields on Spheres
Machine Learning (CS)
Helps computers understand weather patterns on Earth.
Permutation Equivariant Neural Networks for Symmetric Tensors
Machine Learning (CS)
Teaches computers to understand patterns in nature.