Monocular Person Localization under Camera Ego-motion
By: Yu Zhan, Hanjing Ye, Hong Zhang
Potential Business Impact:
Helps robots locate and follow a person accurately even while the robot itself is moving fast.
Localizing a person from a moving monocular camera is critical for Human-Robot Interaction (HRI). To estimate a person's 3D position from a 2D image, existing methods either rely on the geometric assumption of a fixed camera or use a position regression model trained on datasets containing little camera ego-motion. Both are vulnerable to severe camera ego-motion and yield inaccurate person localization. We instead treat person localization as part of a pose estimation problem: by representing a human with a four-point model, our method jointly estimates the camera's 2D attitude and the person's 3D location through optimization. Evaluations on public datasets and in real robot experiments demonstrate that our method outperforms baselines in person localization accuracy. We further integrate the method into a person-following system and deploy it on an agile quadruped robot.
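The joint estimation described above reads naturally as a small nonlinear least-squares problem: project a parametric human model into the image and solve for the camera attitude and the person's ground-plane position that best explain the observed 2D keypoints. The sketch below illustrates that idea only, not the authors' implementation; the four point heights and their head/neck/hip/feet interpretation, the camera intrinsics `K`, the camera height, the y-down axis convention, and the use of `scipy.optimize.least_squares` are all assumptions made for this example.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical four-point model: points along the person's vertical axis at
# assumed heights above the ground (metres). Illustrative values only.
MODEL_HEIGHTS = np.array([1.70, 1.45, 0.95, 0.0])  # head, neck, hip, feet

K = np.array([[600.0,   0.0, 320.0],   # assumed pinhole intrinsics
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

def rot_rp(roll, pitch):
    """World-to-camera rotation for a 2D attitude (roll, pitch)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])  # pitch about x
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])  # roll about optical axis
    return Rz @ Rx

def residuals(params, uv_obs, cam_height=0.5):
    """Reprojection error of the four model points for the current estimate."""
    roll, pitch, x, z = params          # person at (x, z) on the ground plane
    R = rot_rp(roll, pitch)
    res = []
    for h, uv in zip(MODEL_HEIGHTS, uv_obs):
        # y-down world frame centred at the camera: a point h metres above
        # the ground sits at y = cam_height - h.
        p_cam = R @ np.array([x, cam_height - h, z])
        uv_hat = (K @ p_cam)[:2] / p_cam[2]   # perspective projection
        res.extend(uv_hat - uv)
    return np.asarray(res)

# uv_obs: four observed 2D keypoints (pixels), one per model point, e.g.
# taken from an off-the-shelf 2D human pose detector.
uv_obs = np.array([[330.0, 150.0], [331.0, 205.0],
                   [333.0, 315.0], [335.0, 520.0]])
x0 = np.array([0.0, 0.0, 0.1, 4.0])    # roll, pitch, x, z initial guess
sol = least_squares(residuals, x0, args=(uv_obs,))
roll, pitch, x, z = sol.x
print(f"attitude = ({np.degrees(roll):.1f} deg, {np.degrees(pitch):.1f} deg), "
      f"person at ({x:.2f} m, {z:.2f} m)")
```

With four keypoints the problem has eight residuals and four unknowns, so the attitude and position are jointly observable; in practice one would also handle detector noise and unknown person height, e.g. by adding it as a regularized fifth unknown.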
Similar Papers
Bring Your Rear Cameras for Egocentric 3D Human Pose Estimation
CV and Pattern Recognition
Lets virtual characters copy your full body movements.
Self-localization on a 3D map by fusing global and local features from a monocular camera
Robotics
Helps a robot or vehicle figure out where it is on a map using a single camera.
Ego4o: Egocentric Human Motion Capture and Understanding from Multi-Modal Input
CV and Pattern Recognition
Tracks body movements using everyday gadgets.