Vision Language Models for Dynamic Human Activity Recognition in Healthcare Settings
By: Abderrazek Abid, Thanh-Cong Ho, Fakhri Karray
Potential Business Impact:
Helps doctors monitor patients remotely.
As generative AI continues to evolve, Vision Language Models (VLMs) have emerged as promising tools for a range of healthcare applications. One area that remains relatively underexplored is their use in human activity recognition (HAR) for remote health monitoring. VLMs offer notable strengths, including greater flexibility and the ability to overcome some of the constraints of traditional deep learning models. However, a key challenge in applying VLMs to HAR lies in evaluating their dynamic and often non-deterministic outputs. To address this gap, we introduce a descriptive caption dataset and propose comprehensive methods for evaluating VLMs on HAR. Through comparative experiments with state-of-the-art deep learning models, our findings demonstrate that VLMs achieve comparable performance and, in some cases, even surpass conventional approaches in accuracy. This work contributes a strong benchmark and opens new possibilities for integrating VLMs into intelligent healthcare systems.
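To make the evaluation challenge concrete, one plausible way to score non-deterministic VLM captions against a closed set of activity labels is to map each caption to its nearest label by embedding similarity and then compute label-level accuracy, making the results comparable to a conventional classifier. The sketch below illustrates this idea under stated assumptions; the model name, label set, and example captions are illustrative and are not the paper's actual evaluation protocol.

```python
# Hypothetical sketch: turning free-text VLM activity descriptions into
# label-level accuracy via sentence-embedding similarity.
# The label set, model choice, and sample captions are assumptions for
# illustration only, not the method described in the paper.
from sentence_transformers import SentenceTransformer, util

ACTIVITY_LABELS = ["walking", "sitting down", "standing up", "falling", "lying in bed"]

model = SentenceTransformer("all-MiniLM-L6-v2")
label_emb = model.encode(ACTIVITY_LABELS, convert_to_tensor=True)

def predict_activity(vlm_caption: str) -> str:
    """Map a non-deterministic VLM caption to its closest activity label."""
    cap_emb = model.encode(vlm_caption, convert_to_tensor=True)
    scores = util.cos_sim(cap_emb, label_emb)[0]  # similarity to each label
    return ACTIVITY_LABELS[int(scores.argmax())]

def accuracy(captions: list[str], ground_truth: list[str]) -> float:
    """Fraction of captions whose nearest label matches the annotation."""
    hits = sum(predict_activity(c) == g for c, g in zip(captions, ground_truth))
    return hits / len(ground_truth)

if __name__ == "__main__":
    captions = [
        "An elderly person slowly lowers themselves into a chair.",
        "The patient appears to have collapsed onto the floor.",
    ]
    truth = ["sitting down", "falling"]
    print(f"Accuracy: {accuracy(captions, truth):.2f}")
```

A scheme like this tolerates variation in the VLM's wording while still producing a single accuracy number that can be compared directly against traditional deep learning HAR models.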
Similar Papers
Vision Language Models in Medicine
CV and Pattern Recognition
Helps doctors understand medical images and notes.
Large VLM-based Vision-Language-Action Models for Robotic Manipulation: A Survey
Robotics
Robots learn to do tasks by watching and listening.
Exploration of VLMs for Driver Monitoring Systems Applications
CV and Pattern Recognition
Helps cars watch drivers to prevent accidents.