Towards Arbitrary Motion Completing via Hierarchical Continuous Representation
By: Chenghao Xu, Guangtao Lyu, Qi Liu, and more
Physical motions are inherently continuous, and higher camera frame rates typically yield smoother, more temporally coherent sequences. For the first time, we explore continuous representations of human motion sequences, with the ability to interpolate, in-between, and even extrapolate any input motion sequence at arbitrary frame rates. To achieve this, we propose a novel parametric activation-induced hierarchical implicit representation framework, referred to as NAME, built on Implicit Neural Representations (INRs). Our method introduces a hierarchical temporal encoding mechanism that extracts features from motion sequences at multiple temporal scales, enabling it to capture intricate temporal patterns effectively. Additionally, we integrate a custom parametric activation function, powered by Fourier transformations, into the MLP-based decoder to enhance the expressiveness of the continuous representation. This parametric formulation significantly augments the model's ability to represent complex motion behaviors with high accuracy. Extensive evaluations across several benchmark datasets demonstrate the effectiveness and robustness of the proposed approach.
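To make the two ideas in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of an INR-style motion model: a set of temporal encoders at different strides stands in for the hierarchical temporal encoding, and a small learnable Fourier series stands in for the parametric activation in the MLP decoder. The abstract does not specify any of these details, so the class names (FourierParametricActivation, HierarchicalMotionINR), the choice of scales, the pose dimensionality, and the exact activation form are all assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class FourierParametricActivation(nn.Module):
    """Hypothetical parametric activation: a learnable Fourier series applied
    elementwise, sigma(x) = sum_k a_k * sin(w_k * x + phi_k)."""
    def __init__(self, num_terms: int = 4):
        super().__init__()
        self.amplitude = nn.Parameter(torch.ones(num_terms) / num_terms)
        self.frequency = nn.Parameter(2.0 ** torch.arange(num_terms).float())
        self.phase = nn.Parameter(torch.zeros(num_terms))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast over an extra Fourier-term dimension, then sum it out.
        x = x.unsqueeze(-1)  # (..., D, 1)
        return (self.amplitude * torch.sin(self.frequency * x + self.phase)).sum(-1)


class HierarchicalMotionINR(nn.Module):
    """Sketch of an INR that maps a continuous query time t (plus multi-scale
    temporal features of the observed sequence) to a pose vector."""
    def __init__(self, pose_dim: int = 66, feat_dim: int = 128, scales=(1, 2, 4)):
        super().__init__()
        # Hierarchical temporal encoders: one 1D-conv branch per temporal scale,
        # each summarizing the observed motion at a different stride.
        self.encoders = nn.ModuleList([
            nn.Conv1d(pose_dim, feat_dim, kernel_size=3, stride=s, padding=1)
            for s in scales
        ])
        in_dim = 1 + feat_dim * len(scales)  # time coordinate + multi-scale features
        self.decoder = nn.Sequential(
            nn.Linear(in_dim, 256), FourierParametricActivation(),
            nn.Linear(256, 256), FourierParametricActivation(),
            nn.Linear(256, pose_dim),
        )

    def forward(self, motion: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # motion: (B, T, pose_dim) observed frames; t: (B, Q) query times.
        x = motion.transpose(1, 2)                              # (B, pose_dim, T)
        feats = [enc(x).mean(dim=-1) for enc in self.encoders]  # one feature per scale
        cond = torch.cat(feats, dim=-1)                         # (B, feat_dim * num_scales)
        cond = cond.unsqueeze(1).expand(-1, t.shape[1], -1)
        query = torch.cat([t.unsqueeze(-1), cond], dim=-1)
        return self.decoder(query)                              # (B, Q, pose_dim)


# Usage: because time is a continuous input, interpolation, in-betweening, and
# extrapolation reduce to choosing the query times freely.
model = HierarchicalMotionINR()
observed = torch.randn(2, 30, 66)                    # 30 observed frames
query_t = torch.linspace(0, 1.2, 90).repeat(2, 1)    # densify and extrapolate past the input
poses = model(observed, query_t)                     # (2, 90, 66)
```

The key design point this sketch illustrates is that the decoder is queried per timestamp rather than per frame index, which is what allows output at arbitrary frame rates; how the actual method conditions the decoder and parameterizes its activation may differ from this assumption.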
Similar Papers
HEIR: Learning Graph-Based Motion Hierarchies
CV and Pattern Recognition
Learns how things move by breaking motion into a graph-based hierarchy of parts.
Flow-Guided Implicit Neural Representation for Motion-Aware Dynamic MRI Reconstruction
CV and Pattern Recognition
Sharpens blurry dynamic MRI scans by modeling motion with flow-guided implicit neural representations.
Mem-MLP: Real-Time 3D Human Motion Generation from Sparse Inputs
CV and Pattern Recognition
Makes virtual bodies move realistically in real time from only sparse inputs.