Detecting Informative Channels: ActionFormer
By: Kunpeng Zhao, Asahi Miyazaki, Tsuyoshi Okita
Potential Business Impact:
Helps computers understand body movements from sensor data.
Human Activity Recognition (HAR) has recently seen advances with Transformer-based models. In particular, ActionFormer offers a new perspective for HAR: in addition to activity labels, it also detects the temporal boundaries of activities. ActionFormer was originally proposed for image/video input, but it has since been adapted to take sensor signals as input. We analyze this adaptation extensively in terms of its deep learning architecture. Prior reports indicate that high temporal dynamics limit the model's ability to capture subtle changes effectively, and that interdependencies exist between the spatial and temporal features. We propose a modified ActionFormer that mitigates these defects for sensor signals. The key to our approach is to follow the Squeeze-and-Excitation strategy, which minimizes the increase in additional parameters, and to opt for the Swish activation function, which retains directional information in the negative range. Experiments on the WEAR dataset show that our method achieves a substantial improvement of 16.01% in average mAP for inertial data.
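To make the channel-attention idea concrete, below is a minimal sketch of a Squeeze-and-Excitation-style block over sensor channels with a Swish (SiLU) activation. This is an illustrative PyTorch-style implementation under assumed shapes and names (module name, reduction ratio, and tensor layout are not taken from the paper), not the authors' code.

```python
# Minimal sketch (not the authors' implementation): a Squeeze-and-Excitation
# style channel-attention block for 1D sensor features, using Swish (SiLU)
# instead of ReLU so negative pre-activations keep a directional signal.
import torch
import torch.nn as nn

class SEChannelAttention1D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.squeeze = nn.AdaptiveAvgPool1d(1)   # global pooling over the time axis
        self.excite = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.SiLU(),                           # Swish activation
            nn.Linear(hidden, channels),
            nn.Sigmoid(),                        # per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time), e.g. inertial sensor axes over a window
        b, c, _ = x.shape
        w = self.squeeze(x).view(b, c)           # (batch, channels) channel descriptors
        w = self.excite(w).view(b, c, 1)         # learned channel weights
        return x * w                             # re-weight informative channels

if __name__ == "__main__":
    feats = torch.randn(4, 12, 128)              # 4 windows, 12 sensor channels, 128 steps
    print(SEChannelAttention1D(channels=12)(feats).shape)  # torch.Size([4, 12, 128])
```

Because the excitation path only adds two small linear layers per block, the parameter overhead stays low, which is the motivation the abstract cites for choosing this strategy.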
Similar Papers
A Real-Time Human Action Recognition Model for Assisted Living
CV and Pattern Recognition
Spots elderly falls and pain using cameras.
SETransformer: A Hybrid Attention-Based Architecture for Robust Human Activity Recognition
Machine Learning (CS)
Helps computers understand what you're doing from movement.
MoPFormer: Motion-Primitive Transformer for Wearable-Sensor Activity Recognition
CV and Pattern Recognition
Helps computers understand body movements better.