GraphEnet: Event-driven Human Pose Estimation with a Graph Neural Network
By: Gaurvi Goyal, Pham Cong Thuong, Arren Glover, and more
Potential Business Impact:
Lets robots see people's movements quickly.
Human Pose Estimation is a crucial module in human-machine interaction applications and, especially since the rise of deep learning, robust methods are available to consumers using RGB cameras and commercial GPUs. Event-based cameras, on the other hand, have gained popularity in the vision research community for their low latency and low energy consumption, advantages that make them ideal for applications where those resources are constrained, such as portable electronics and mobile robots. In this work we propose a Graph Neural Network, GraphEnet, that leverages the sparse nature of event-camera output, with an intermediate line-based event representation, to estimate the 2D Human Pose of a single person at high frequency. The architecture incorporates a novel offset-vector learning paradigm with confidence-based pooling to estimate the human pose. This is the first work to apply Graph Neural Networks to event data for Human Pose Estimation. The code is open-source at https://github.com/event-driven-robotics/GraphEnet-NeVi-ICCV2025.
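To illustrate the offset-vector-with-confidence-pooling idea, here is a minimal NumPy sketch of one plausible reading: each graph node predicts an offset from its own 2D position toward a joint, plus a confidence score, and the per-node votes are pooled with softmax-normalized confidence weights. The function name, shapes, and softmax pooling are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def pool_joint_estimate(node_positions, offsets, confidences):
    """Confidence-weighted pooling of per-node joint votes (illustrative).

    Each graph node votes for a joint location as its own 2D position
    plus a predicted offset vector; votes are pooled with
    softmax-normalized confidence weights, so low-confidence outliers
    contribute little to the final estimate.
    """
    votes = node_positions + offsets             # (N, 2): per-node joint votes
    w = np.exp(confidences - confidences.max())  # numerically stable softmax
    w /= w.sum()
    return (w[:, None] * votes).sum(axis=0)      # (2,): pooled joint estimate

# Toy example: three event-graph nodes voting for one joint.
nodes = np.array([[10.0, 20.0], [12.0, 22.0], [50.0, 60.0]])
offs  = np.array([[ 2.0,  1.0], [ 0.0, -1.0], [-38.0, -39.0]])
conf  = np.array([3.0, 3.0, 0.1])
print(pool_joint_estimate(nodes, offs, conf))  # → [12. 21.]
```

Because every node's position-plus-offset here points at the same location (12, 21), the weighted average recovers it exactly; in practice the confidence weights down-weight nodes whose votes are unreliable.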
Similar Papers
Graph-based 3D Human Pose Estimation using WiFi Signals
CV and Pattern Recognition
Lets WiFi see your body's shape.
NanoHTNet: Nano Human Topology Network for Efficient 3D Human Pose Estimation
CV and Pattern Recognition
Makes 3D body tracking work on small devices.
HGFreNet: Hop-hybrid GraphFomer for 3D Human Pose Estimation with Trajectory Consistency in Frequency Domain
CV and Pattern Recognition
Makes 2D videos show people's real 3D movements.