Improving Zero-shot ADL Recognition with Large Language Models through Event-based Context and Confidence
By: Michele Fiori, Gabriele Civitarese, Marco Colussi, and more
Unobtrusive sensor-based recognition of Activities of Daily Living (ADLs) in smart homes by processing data collected from IoT sensing devices supports applications such as healthcare, safety, and energy management. Recent zero-shot methods based on Large Language Models (LLMs) have the advantage of removing the reliance on labeled ADL sensor data. However, existing approaches rely on time-based segmentation, which is poorly aligned with the contextual reasoning capabilities of LLMs. Moreover, existing approaches lack methods for estimating prediction confidence. This paper proposes to improve zero-shot ADL recognition with event-based segmentation and a novel method for estimating prediction confidence. Our experimental evaluation shows that event-based segmentation consistently outperforms time-based LLM approaches on complex, realistic datasets and surpasses supervised data-driven methods, even with relatively small LLMs (e.g., Gemma 3 27B). The proposed confidence measure effectively distinguishes correct from incorrect predictions.
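The abstract contrasts time-based segmentation (fixed-duration windows that can cut an activity in half) with event-based segmentation (segments that follow the structure of the sensor event stream). The paper does not spell out its exact algorithm here, so the sketch below is only an illustration of the general idea, not the authors' method: it assumes a hypothetical event stream of `(timestamp, sensor, state)` tuples and uses a simple inter-event gap threshold (`max_gap`, my assumption) to close event-based segments.

```python
from datetime import datetime, timedelta

# Hypothetical smart-home sensor event stream: (timestamp, sensor_id, state).
# The sensor names and values are illustrative, not from the paper.
events = [
    (datetime(2024, 1, 1, 8, 0, 0), "kitchen_motion", "ON"),
    (datetime(2024, 1, 1, 8, 0, 30), "fridge_door", "OPEN"),
    (datetime(2024, 1, 1, 8, 1, 0), "fridge_door", "CLOSED"),
    (datetime(2024, 1, 1, 9, 15, 0), "bathroom_motion", "ON"),
    (datetime(2024, 1, 1, 9, 16, 0), "bathroom_motion", "OFF"),
]

def time_based_segments(events, window=timedelta(minutes=1)):
    """Fixed-duration windows: every `window` starts a new segment,
    even if that splits a single activity across segments."""
    segments, current = [], []
    window_end = events[0][0] + window
    for ev in events:
        if ev[0] >= window_end:
            if current:
                segments.append(current)
            current = []
            window_end = ev[0] + window
        current.append(ev)
    if current:
        segments.append(current)
    return segments

def event_based_segments(events, max_gap=timedelta(minutes=10)):
    """Gap-based heuristic: a pause longer than `max_gap` between two
    consecutive events closes the current segment, so segment boundaries
    tend to align with activity boundaries (e.g. breakfast vs. bathroom)."""
    segments, current = [], [events[0]]
    for prev, ev in zip(events, events[1:]):
        if ev[0] - prev[0] > max_gap:
            segments.append(current)
            current = []
        current.append(ev)
    segments.append(current)
    return segments

print(len(time_based_segments(events)))   # 4 (one activity split across windows)
print(len(event_based_segments(events)))  # 2 (one segment per activity burst)
```

On this toy stream, the fixed one-minute windows fragment the morning kitchen activity into several segments, while the gap-based variant yields two coherent event groups that an LLM can reason about as whole activities.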
Similar Papers
Leveraging Large Language Models for Explainable Activity Recognition in Smart Homes: A Critical Evaluation
Computation and Language
Helps smart homes explain what you're doing.
Context-Aware Human Behavior Prediction Using Multimodal Large Language Models: Challenges and Insights
Robotics
Helps robots understand what people will do.
DailyLLM: Context-Aware Activity Log Generation Using Multi-Modal Sensors and LLMs
Artificial Intelligence
Makes phones understand your daily life better.