Lost in Latent Space: An Empirical Study of Latent Diffusion Models for Physics Emulation
By: François Rozet, Ruben Ohana, Michael McCabe, and more
Potential Business Impact:
Speeds up physics simulations by compressing their states up to 1000x.
The steep computational cost of diffusion models at inference hinders their use as fast physics emulators. In the context of image and video generation, this computational drawback has been addressed by generating in the latent space of an autoencoder instead of the pixel space. In this work, we investigate whether a similar strategy can be effectively applied to the emulation of dynamical systems and at what cost. We find that the accuracy of latent-space emulation is surprisingly robust to a wide range of compression rates (up to 1000x). We also show that diffusion-based emulators are consistently more accurate than non-generative counterparts and compensate for uncertainty in their predictions with greater diversity. Finally, we cover practical design choices, spanning from architectures to optimizers, that we found critical to train latent-space emulators.
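The core idea in the abstract can be sketched in a few lines: compress the physical state with an autoencoder, roll the dynamics forward in the small latent space, and decode back to pixel space only when a full state is needed. The snippet below is a hypothetical illustration, not the authors' code; the linear "autoencoder" and latent dynamics are toy stand-ins (a real system would use a trained autoencoder and a diffusion model sampling the next latent state).

```python
# Hypothetical sketch of latent-space emulation (not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 1024   # size of the pixel-space state
LATENT_DIM = 16    # 64x compression; the paper reports robustness up to 1000x

# Toy linear "autoencoder": a random projection and its transpose.
W = rng.standard_normal((STATE_DIM, LATENT_DIM)) / np.sqrt(STATE_DIM)

def encode(x: np.ndarray) -> np.ndarray:
    """Compress a state vector into the latent space."""
    return x @ W

def decode(z: np.ndarray) -> np.ndarray:
    """Map a latent vector back to the full state (approximate inverse)."""
    return z @ W.T

# Toy deterministic latent dynamics standing in for the diffusion-based
# emulator, which would instead *sample* the next latent state.
A = np.eye(LATENT_DIM) * 0.99

def emulate(x0: np.ndarray, steps: int) -> np.ndarray:
    """Roll the system forward entirely in latent space, decode at the end."""
    z = encode(x0)
    for _ in range(steps):
        z = z @ A  # a diffusion emulator would sample z_{t+1} | z_t here
    return decode(z)  # decode only the frame we actually need

x0 = rng.standard_normal(STATE_DIM)
x_final = emulate(x0, steps=10)
```

Because every rollout step operates on 16 numbers instead of 1024, the per-step cost of the emulator shrinks with the compression rate; the paper's finding is that accuracy degrades surprisingly little as that rate grows.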
Similar Papers
Generative Latent Diffusion for Efficient Spatiotemporal Data Reduction
Machine Learning (CS)
Saves space by smartly guessing missing video parts.
Improving the Diffusability of Autoencoders
CV and Pattern Recognition
Makes AI create clearer, better pictures and videos.
Latent Diffusion Inversion Requires Understanding the Latent Space
Machine Learning (CS)
Finds hidden personal data in AI art.