Score: 3

Vision Language Models for Dynamic Human Activity Recognition in Healthcare Settings

Published: October 24, 2025 | arXiv ID: 2510.21424v1

By: Abderrazek Abid, Thanh-Cong Ho, Fakhri Karray

Potential Business Impact:

Enables clinicians to monitor patients' activities remotely via camera-based activity recognition.

Business Areas:
Image Recognition, Data and Analytics, Software

As generative AI continues to evolve, Vision Language Models (VLMs) have emerged as promising tools for various healthcare applications. One area that remains relatively underexplored is their use in human activity recognition (HAR) for remote health monitoring. VLMs offer notable strengths, including greater flexibility and the ability to overcome some of the constraints of traditional deep learning models. However, a key challenge in applying VLMs to HAR lies in the difficulty of evaluating their dynamic and often non-deterministic outputs. To address this gap, we introduce a descriptive caption dataset and propose comprehensive methods for evaluating VLMs in HAR. Through comparative experiments with state-of-the-art deep learning models, our findings demonstrate that VLMs achieve comparable performance and, in some cases, even surpass conventional approaches in accuracy. This work contributes a strong benchmark and opens new possibilities for integrating VLMs into intelligent healthcare systems.
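The evaluation challenge the abstract describes — scoring a VLM's free-form, non-deterministic caption against a fixed activity label — can be illustrated with a minimal sketch. The paper does not specify its evaluation methods; the token-overlap (Jaccard) matching below, along with the labels and caption, is a hypothetical example of the general idea, not the authors' approach.

```python
# Hypothetical sketch: mapping a VLM's free-form caption to the closest
# activity label via token-overlap (Jaccard) similarity. Labels and the
# sample caption are illustrative, not taken from the paper's dataset.

def tokenize(text: str) -> set:
    """Lowercase, strip trailing periods, split on whitespace."""
    return set(text.lower().replace(".", "").split())

def jaccard(a: set, b: set) -> float:
    """Intersection-over-union of two token sets (0.0 when both empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def match_activity(caption: str, labels: list) -> str:
    """Pick the activity label whose tokens overlap most with the caption."""
    cap = tokenize(caption)
    return max(labels, key=lambda lbl: jaccard(cap, tokenize(lbl)))

labels = ["walking", "sitting down", "falling", "standing up"]
caption = "An elderly person is slowly sitting down on a chair."
print(match_activity(caption, labels))  # -> sitting down
```

In practice a benchmark like the one proposed would likely use richer semantic similarity (e.g., embedding-based scoring) rather than raw token overlap, since VLM phrasing varies run to run.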

Country of Origin
🇦🇪 🇨🇦 United Arab Emirates, Canada

Repos / Data Links

Page Count
12 pages

Category
Computer Science:
Computation and Language