Human Action Recognition from Point Clouds over Time
By: James Dickens
Potential Business Impact:
Lets computers understand actions from 3D camera data.
Recent research on human action recognition (HAR) has focused predominantly on skeletal action recognition and video-based methods. With the increasing availability of consumer-grade depth sensors and LiDAR instruments, there is a growing opportunity to leverage dense 3D data for action recognition as a third approach. This paper presents a novel method for recognizing actions from 3D videos, introducing a pipeline that segments human point clouds from the scene background, tracks individuals over time, and performs body part segmentation. The method supports point clouds from both depth sensors and monocular depth estimation. At the core of the proposed HAR framework is a novel backbone for 3D action recognition that combines point-based techniques with sparse convolutional networks applied to voxel-mapped point cloud sequences. Experiments incorporate auxiliary point features, including surface normals, color, infrared intensity, and body part parsing labels, to enhance recognition accuracy. Evaluation on the NTU RGB+D 120 dataset demonstrates that the method is competitive with existing skeletal action recognition algorithms. Moreover, by combining sensor-based and estimated depth inputs in an ensemble, the approach achieves 89.3% accuracy on the cross-subject split, where training and testing use different human subjects, outperforming previous point cloud action recognition methods.
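To make the voxel-mapping step concrete, below is a minimal sketch, not the authors' implementation, of how a single point cloud frame might be quantized into sparse voxels with mean-pooled per-point features (e.g., surface normals and color), the kind of representation a sparse convolutional backbone consumes. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def voxelize_frame(points, features, voxel_size=0.05):
    """Quantize one point cloud frame into sparse voxels.

    points:   (N, 3) float array of XYZ coordinates (meters).
    features: (N, C) float array of per-point features, e.g.
              surface normals, RGB color, infrared intensity.
    Returns integer voxel coordinates and mean-pooled features,
    one row per occupied voxel.
    """
    # Map each point to its integer voxel index.
    coords = np.floor(points / voxel_size).astype(np.int32)

    # Group points that fall into the same voxel.
    uniq, inverse = np.unique(coords, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)  # guard against NumPy version quirks

    # Average the features of all points within each voxel.
    pooled = np.zeros((len(uniq), features.shape[1]), dtype=np.float64)
    np.add.at(pooled, inverse, features)
    counts = np.bincount(inverse, minlength=len(uniq))
    pooled /= counts[:, None]
    return uniq, pooled

# Example: one frame with 1000 random points and 6-D features
# (3 normal components + RGB).
pts = np.random.rand(1000, 3)
feats = np.random.rand(1000, 6)
voxels, voxel_feats = voxelize_frame(pts, feats)
```

Stacking such voxelized frames over time, with the frame index appended as an extra coordinate, would yield the sparse 4D tensors that sparse convolution libraries operate on; the specifics of the paper's backbone may differ.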
Similar Papers
A Real-Time Human Action Recognition Model for Assisted Living
CV and Pattern Recognition
Spots elderly falls and pain using cameras.
LiDAR-based Human Activity Recognition through Laplacian Spectral Analysis
CV and Pattern Recognition
Lets computers see people's actions without cameras.