Unconditional Human Motion and Shape Generation via Balanced Score-Based Diffusion
By: David Björkstrand, Tiesheng Wang, Lars Bretzner, and more
Potential Business Impact:
Makes computer-made people move more realistically.
Recent work has explored a range of model families for human motion generation, including Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and diffusion-based models. Despite their differences, many methods rely on over-parameterized input features and auxiliary losses to improve empirical results. These strategies should not be strictly necessary for diffusion models to match the human motion distribution. We show that results on par with the state of the art in unconditional human motion generation are achievable with a score-based diffusion model using only careful feature-space normalization and analytically derived weightings for the standard L2 score-matching loss, while generating both motion and shape directly, thereby avoiding slow post hoc shape recovery from joints. We build the method step by step, with a clear theoretical motivation for each component, and provide targeted ablations demonstrating the effectiveness of each proposed addition in isolation.
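To give a rough sense of the kind of objective the abstract refers to, below is a minimal PyTorch sketch of a denoising score-matching loss with an analytically derived per-noise-level weighting. The `score_model` interface, the log-uniform noise schedule, and the sigma^2 weighting are generic assumptions from the score-based diffusion literature, not details taken from the paper.

```python
import math
import torch

def dsm_loss(score_model, x0, sigma_min=0.01, sigma_max=10.0):
    """Weighted denoising score-matching loss (generic sketch).

    `score_model(x, sigma)` is assumed to return an estimate of the
    score of the perturbed data distribution at noise level `sigma`.
    """
    # Sample one noise level per example, log-uniform in [sigma_min, sigma_max]
    # (a common generic choice; the paper's schedule may differ).
    u = torch.rand(x0.shape[0], device=x0.device)
    sigma = (math.log(sigma_min)
             + u * (math.log(sigma_max) - math.log(sigma_min))).exp()
    sigma = sigma.view(-1, *([1] * (x0.dim() - 1)))

    eps = torch.randn_like(x0)
    x_noisy = x0 + sigma * eps

    # Target score of N(x; x0, sigma^2 I) is -(x_noisy - x0)/sigma^2 = -eps/sigma.
    score = score_model(x_noisy, sigma.flatten())

    # The analytic weighting lambda(sigma) = sigma^2 balances the loss across
    # noise levels: sigma^2 * ||score + eps/sigma||^2 = ||sigma*score + eps||^2.
    return ((sigma * score + eps) ** 2).mean()
```

In this form the weighting cancels the 1/sigma^2 scale of the target score, so no noise level dominates training, which is the sort of balancing the abstract attributes to analytically derived loss weightings.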
Similar Papers
Back to Basics: Motion Representation Matters for Human Motion Generation Using Diffusion Model
CV and Pattern Recognition
Makes computer-generated dancing look more real.
Object-Aware 4D Human Motion Generation
CV and Pattern Recognition
Makes people in videos move realistically with objects.
Biomechanics-Guided Residual Approach to Generalizable Human Motion Generation and Estimation
CV and Pattern Recognition
Makes computer characters move like real people.