Real-Time Manipulation Action Recognition with a Factorized Graph Sequence Encoder
By: Enes Erdogan, Eren Erdal Aksoy, Sanem Sariel
Potential Business Impact:
Helps robots understand and copy human actions.
Recognition of human manipulation actions in real time is essential for safe and effective human-robot interaction and collaboration. The challenge lies in developing a model that is both lightweight enough for real-time execution and capable of generalization. While some existing methods in the literature can run in real time, they struggle with temporal scalability, i.e., they fail to adapt effectively to long-duration manipulations. To address this, we leverage generalizable scene graph representations and propose a new Factorized Graph Sequence Encoder network that not only runs in real time but also scales effectively in the temporal dimension, thanks to its factorized encoder architecture. Additionally, we introduce the Hand Pooling operation, a simple pooling operation for more focused extraction of graph-level embeddings. Our model outperforms the previous state-of-the-art real-time approach, achieving a 14.3% and 5.6% improvement in F1-macro score on the KIT Bimanual Action (Bimacs) Dataset and the Collaborative Action (CoAx) Dataset, respectively. Moreover, we conduct an extensive ablation study to validate our network design choices. Finally, we compare our model with an architecturally similar RGB-based model on the Bimacs dataset and show that model's limitations, in contrast to ours, on such an object-centric manipulation dataset.
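To make the factorized design concrete, below is a minimal sketch of the idea described in the abstract: a per-frame spatial graph encoder, a hand-focused pooling readout, and a separate temporal encoder over the resulting frame embeddings. This is not the authors' implementation; the layer sizes, the use of simple normalized-adjacency message passing, the Transformer for the temporal stage, and the assumption that hand nodes are known by index are all illustrative choices.

```python
# Sketch (not the authors' code) of a factorized graph-sequence encoder with a
# hand-pooling readout. Spatial and temporal stages are factorized, so cost
# grows with sequence length rather than jointly over space-time.
import torch
import torch.nn as nn


class SpatialGraphEncoder(nn.Module):
    """Per-frame encoder: simple normalized-adjacency message passing (assumed)."""

    def __init__(self, in_dim: int, hid_dim: int, layers: int = 2):
        super().__init__()
        dims = [in_dim] + [hid_dim] * layers
        self.lins = nn.ModuleList(
            [nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:])]
        )

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (T, N, in_dim)  node features per frame
        # adj: (T, N, N)       scene-graph adjacency per frame (with self-loops)
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        norm_adj = adj / deg
        for lin in self.lins:
            x = torch.relu(lin(norm_adj @ x))  # aggregate neighbors, then transform
        return x  # (T, N, hid_dim)


def hand_pooling(node_emb: torch.Tensor, hand_idx: torch.Tensor) -> torch.Tensor:
    """Graph-level readout that pools only the hand nodes (indices assumed known)."""
    # node_emb: (T, N, D); hand_idx: (H,) indices of hand nodes in the scene graph
    return node_emb[:, hand_idx, :].mean(dim=1)  # (T, D)


class FactorizedGraphSequenceEncoder(nn.Module):
    """Spatial (per-frame) and temporal (across-frame) encoders applied in sequence."""

    def __init__(self, in_dim: int, hid_dim: int, num_classes: int):
        super().__init__()
        self.spatial = SpatialGraphEncoder(in_dim, hid_dim)
        layer = nn.TransformerEncoderLayer(d_model=hid_dim, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(hid_dim, num_classes)

    def forward(self, x, adj, hand_idx):
        frame_emb = hand_pooling(self.spatial(x, adj), hand_idx)    # (T, D)
        seq_emb = self.temporal(frame_emb.unsqueeze(0)).squeeze(0)  # (T, D)
        return self.head(seq_emb)  # per-frame action logits


# Toy usage: 8 frames, 6 scene-graph nodes, nodes 0 and 1 assumed to be the hands.
model = FactorizedGraphSequenceEncoder(in_dim=16, hid_dim=64, num_classes=14)
x = torch.randn(8, 6, 16)
adj = torch.eye(6).expand(8, 6, 6).clone()
logits = model(x, adj, torch.tensor([0, 1]))
print(logits.shape)  # torch.Size([8, 14])
```

Pooling only the hand nodes, rather than averaging over every node, keeps the frame embedding focused on the parts of the scene graph most informative for manipulation actions; the two-hand index set here is simply a placeholder for however the hand nodes are identified in practice.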
Similar Papers
Biomechanically consistent real-time action recognition for human-robot interaction
Robotics
Helps robots understand what people are doing.
Beyond Sequences: A Benchmark for Atomic Hand-Object Interaction Using a Static RNN Encoder
CV and Pattern Recognition
Helps robots understand how hands grab things.
Gaze-Guided 3D Hand Motion Prediction for Detecting Intent in Egocentric Grasping Tasks
CV and Pattern Recognition
Helps robots guess what hand you'll move next.