SimDiff: Simulator-constrained Diffusion Model for Physically Plausible Motion Generation
By: Akihisa Watanabe, Jiawei Ren, Li Siyao, and more
Potential Business Impact:
Makes computer characters move realistically in games.
Generating physically plausible human motion is crucial for applications such as character animation and virtual reality. Existing approaches often incorporate a simulator-based motion projection layer into the diffusion process to enforce physical plausibility. However, such methods are computationally expensive because the simulator runs sequentially, which prevents parallelization. We show that simulator-based motion projection can be interpreted as a form of guidance, either classifier-based or classifier-free, within the diffusion process. Building on this insight, we propose SimDiff, a Simulator-constrained Diffusion Model that integrates environment parameters (e.g., gravity, wind) directly into the denoising process. By conditioning on these parameters, SimDiff generates physically plausible motions efficiently, without repeated simulator calls at inference, and also provides fine-grained control over individual physical coefficients. Moreover, SimDiff generalizes to unseen combinations of environmental parameters, demonstrating compositional generalization.
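The abstract frames simulator-based projection as guidance and conditions the denoiser on environment parameters. The sketch below illustrates that idea with a classifier-free guidance step conditioned on gravity and wind coefficients; it is a minimal illustration under assumed names, shapes, and a zero-vector null conditioning, not the authors' implementation.

import torch
import torch.nn as nn

# Hypothetical sketch: a denoiser eps_theta(x_t, t, c) conditioned on
# environment parameters c = [gravity, wind]. All module names and
# dimensions are assumptions for illustration.
class EnvConditionedDenoiser(nn.Module):
    def __init__(self, motion_dim: int, env_dim: int, hidden: int = 256):
        super().__init__()
        self.env_embed = nn.Linear(env_dim, hidden)   # embed physics coefficients
        self.time_embed = nn.Linear(1, hidden)        # embed diffusion timestep
        self.net = nn.Sequential(
            nn.Linear(motion_dim + 2 * hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, motion_dim),            # predict noise for the motion frame
        )

    def forward(self, x_t, t, env):
        h = torch.cat([x_t, self.time_embed(t), self.env_embed(env)], dim=-1)
        return self.net(h)

def guided_eps(model, x_t, t, env, guidance_scale=2.0):
    # Classifier-free guidance: blend conditional and unconditional predictions.
    # The unconditional branch uses a zeroed environment vector as the null
    # condition (a common choice; the paper may use a different null token).
    eps_cond = model(x_t, t, env)
    eps_uncond = model(x_t, t, torch.zeros_like(env))
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Usage: one denoising step conditioned on gravity and wind.
model = EnvConditionedDenoiser(motion_dim=69, env_dim=2)
x_t = torch.randn(1, 69)                    # noisy motion frame
t = torch.full((1, 1), 0.5)                 # normalized diffusion timestep
env = torch.tensor([[9.81, 0.3]])           # [gravity, wind] coefficients
eps_hat = guided_eps(model, x_t, t, env)

Because the physics conditioning lives inside the denoiser, sampling needs no simulator calls at inference, which is the efficiency argument the abstract makes.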
Similar Papers
SimDiff: Simpler Yet Better Diffusion Model for Time Series Point Forecasting
Artificial Intelligence
Predicts future numbers more accurately than before.
MotionDiff: Training-free Zero-shot Interactive Motion Editing via Flow-assisted Multi-view Diffusion
CV and Pattern Recognition
Lets users edit how objects move in videos without extra training.
ConDiSim: Conditional Diffusion Models for Simulation Based Inference
Machine Learning (CS)
Helps computers guess hidden answers from messy data.