Improving Facial Rig Semantics for Tracking and Retargeting
By: Dalton Omens, Allise Thurman, Jihun Yu, and more
Potential Business Impact:
Makes video game faces move like real people.
In this paper, we consider retargeting a tracked facial performance either to another person or to a virtual character in a game or virtual reality (VR) environment. We remove the difficulties associated with identifying and retargeting the semantics of one rig framework to another by utilizing the same framework (3DMM, FLAME, MetaHuman, etc.) for both subjects. Although this does not constrain the choice of framework when retargeting from one person to another, it does force the tracker to use the game/VR character rig when retargeting to a game/VR character. We utilize volumetric morphing to fit facial rigs to both performers and targets; in addition, a carefully chosen set of Simon-Says expressions is used to calibrate each rig to the motion signatures of the relevant performer or target. Although a uniform set of Simon-Says expressions can likely be used for all person-to-person retargeting, we argue that retargeting from a person to a game/VR character benefits from Simon-Says expressions that capture the distinct motion signature of the game/VR character rig. The Simon-Says-calibrated rigs tend to produce the desired expressions when exercising animation controls (as expected). Unfortunately, these well-calibrated rigs still lead to undesirable controls when tracking a performance (a well-behaved function can have an arbitrarily ill-conditioned inverse), even though they typically produce acceptable geometry reconstructions. Thus, we propose a fine-tuning approach that modifies the rig used by the tracker to promote the output of more semantically meaningful animation controls, facilitating high-efficacy retargeting. To better address real-world scenarios, the fine-tuning relies on implicit differentiation so that the tracker can be treated as a (potentially non-differentiable) black box.
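To make the implicit-differentiation idea concrete, here is a minimal sketch. The paper publishes no code, so everything below is our own illustration: the toy linear blendshape rig, the names `rig`, `energy`, and `implicit_theta_grad`, and the assumption that the black-box tracker minimizes a known least-squares tracking energy. Under that assumption, the solver's output satisfies a first-order optimality condition, and the implicit function theorem yields gradients of the tracked controls with respect to rig parameters without ever differentiating through the solver itself.

```python
import jax
import jax.numpy as jnp

M, K = 12, 4  # toy sizes: geometry dimension, number of animation controls

def rig(c, theta):
    """Toy linear blendshape rig: neutral shape plus basis @ controls.
    (Illustrative stand-in; the paper's rigs are 3DMM/FLAME/MetaHuman.)"""
    neutral, basis = theta[:M], theta[M:].reshape(M, K)
    return neutral + basis @ c

def energy(c, theta, x):
    """Tracking energy the black-box tracker is assumed to minimize."""
    return 0.5 * jnp.sum((rig(c, theta) - x) ** 2)

grad_c = jax.grad(energy, argnums=0)  # g(c, theta, x) = dE/dc

def implicit_theta_grad(c_star, theta, x, dL_dc):
    """Backpropagate a loss gradient dL/dc* through the tracker's
    solution without differentiating the solver: at the optimum
    g(c*, theta, x) = 0, so the implicit function theorem gives
    dc*/dtheta = -H^{-1} B with H = dg/dc and B = dg/dtheta."""
    H = jax.jacobian(grad_c, argnums=0)(c_star, theta, x)  # (K, K)
    B = jax.jacobian(grad_c, argnums=1)(c_star, theta, x)  # (K, |theta|)
    H = H + 1e-6 * jnp.eye(K)       # damping: the inverse can be ill-conditioned
    v = jnp.linalg.solve(H, dL_dc)  # solve rather than form H^{-1}
    return -v @ B                   # dL/dtheta, same length as theta
```

In an outer fine-tuning loop, one would run the black-box tracker on each frame to obtain `c_star`, form `dL_dc = c_star - c_ref` against reference (e.g., Simon-Says) controls, and take a gradient step on `theta` via `implicit_theta_grad`. Only the energy's form is differentiated, never the solver, which is what lets the tracker remain a potentially non-differentiable black box.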
Similar Papers
Retargeting Matters: General Motion Retargeting for Humanoid Motion Tracking
Robotics
Makes robot movements look more like human movements.
MoReFlow: Motion Retargeting Learning through Unsupervised Flow Matching
Graphics
Moves one character's dance to another.
MaskSem: Semantic-Guided Masking for Learning 3D Hybrid High-Order Motion Representation
CV and Pattern Recognition
Helps robots understand human movements better.