Scalable Non-Equivariant 3D Molecule Generation via Rotational Alignment
By: Yuhui Ding, Thomas Hofmann
Potential Business Impact:
Speeds up computer-aided design of new molecules by making 3D molecule-generating models cheaper to train and faster to sample from.
Equivariant diffusion models have achieved impressive performance in 3D molecule generation. These models incorporate Euclidean symmetries of 3D molecules by utilizing an SE(3)-equivariant denoising network. However, specialized equivariant architectures limit the scalability and efficiency of diffusion models. In this paper, we propose an approach that relaxes such equivariance constraints. Specifically, our approach learns a sample-dependent SO(3) transformation for each molecule to construct an aligned latent space. A non-equivariant diffusion model is then trained over the aligned representations. Experimental results demonstrate that our approach performs significantly better than previously reported non-equivariant models. It yields sample quality comparable to state-of-the-art equivariant diffusion models and offers improved training and sampling efficiency. Our code is available at https://github.com/skeletondyh/RADM.
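The core idea in the abstract, mapping each molecule into a rotation-aligned space and training an ordinary non-equivariant diffusion model there, can be sketched with a hand-crafted alignment. The snippet below uses a PCA-style canonical frame as a simple stand-in for the learned, sample-dependent SO(3) transformation described in the paper; the `canonical_rotation` and `align` helpers are illustrative assumptions and are not part of the RADM codebase.

```python
import numpy as np

def canonical_rotation(coords: np.ndarray) -> np.ndarray:
    """Return a rotation in SO(3) that maps a zero-centered molecule onto a
    canonical frame defined by its principal axes.

    Hand-crafted illustration only: the paper instead *learns* a
    sample-dependent SO(3) transformation for each molecule.
    """
    # Principal axes of the atom positions (columns of `eigvecs`).
    cov = coords.T @ coords
    _, eigvecs = np.linalg.eigh(cov)
    axes = eigvecs[:, ::-1].copy()            # largest-variance axis first
    # Resolve the sign ambiguity of the first two axes via the skewness of
    # the projected coordinates, then complete a right-handed frame so the
    # result is a proper rotation (det = +1).
    for i in range(2):
        if np.sum((coords @ axes[:, i]) ** 3) < 0:
            axes[:, i] *= -1
    axes[:, 2] = np.cross(axes[:, 0], axes[:, 1])
    return axes

def align(coords: np.ndarray) -> np.ndarray:
    """Map atom positions into the aligned space in which a plain,
    non-equivariant denoising network could be trained."""
    centered = coords - coords.mean(axis=0, keepdims=True)
    return centered @ canonical_rotation(centered)

# Rotating a molecule before alignment leaves its aligned representation
# unchanged, which is the property the aligned latent space relies on.
rng = np.random.default_rng(0)
mol = rng.normal(size=(12, 3))                # toy 12-atom point cloud
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                             # make it a proper rotation
print(np.allclose(align(mol), align(mol @ Q), atol=1e-6))  # True
```

The final check illustrates why alignment can relax the equivariance constraint: rotated copies of the same molecule collapse to (approximately) the same aligned representation, so the downstream denoising network never has to handle arbitrary orientations itself.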
Similar Papers
Equivariant Neural Diffusion for Molecule Generation
Machine Learning (CS)
Builds new molecules that fit perfectly.
Frame-based Equivariant Diffusion Models for 3D Molecular Generation
Machine Learning (CS)
Designs new molecules faster and more accurately.
Straight-Line Diffusion Model for Efficient 3D Molecular Generation
Machine Learning (CS)
Makes computers design new molecules 100 times faster.