Knowledge Distillation for LLM-Based Human Activity Recognition in Homes
By: Julien Cumin, Oussama Er-Rahmany, Xi Chen
Potential Business Impact:
Smart homes understand what you're doing.
Human Activity Recognition (HAR) is a central problem for context-aware applications, especially for smart homes and assisted living. A few very recent studies have shown that Large Language Models (LLMs) can be used for HAR at home, reaching high performance and addressing key challenges. In this paper, we provide new experimental results on the use of LLMs for HAR, on two state-of-the-art datasets. More specifically, we show how recognition performance evolves depending on the size of the LLM used. Moreover, we experiment with knowledge distillation techniques to fine-tune smaller LLMs on HAR reasoning examples generated by larger LLMs. We show that such fine-tuned models can perform almost as well as the largest LLMs, while having 50 times fewer parameters.
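As a rough illustration of the pipeline the abstract describes, the sketch below generates teacher reasoning traces for sensor-event windows and then fine-tunes a much smaller student LLM on them. It assumes a Hugging Face transformers setup; the student model name, prompt wording, the query_teacher helper, and the windows data structure are hypothetical placeholders, not the authors' actual implementation.

# Minimal sketch of sequence-level knowledge distillation for HAR.
# Assumptions (not from the paper): teacher access via `query_teacher`,
# the student model name, the prompt template, and a `windows` list of
# pre-segmented sensor-event windows.

from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

def build_prompt(window):
    # Serialize a window of (timestamp, sensor, value) events as plain text.
    events = "\n".join(f"{t} {s}={v}" for t, s, v in window["events"])
    return ("Sensor events observed in the home:\n" + events +
            "\nReason step by step, then name the resident's current activity.")

def generate_distillation_examples(windows, query_teacher):
    # Step 1: the large teacher LLM produces a reasoning trace and an
    # activity label for each window; `query_teacher` stands in for
    # whatever API serves the teacher model.
    examples = []
    for w in windows:
        prompt = build_prompt(w)
        reasoning = query_teacher(prompt)  # teacher's reasoning + answer
        examples.append({"text": prompt + "\n" + reasoning})
    return examples

def finetune_student(examples, student_name="Qwen/Qwen2.5-1.5B-Instruct"):
    # Step 2: fine-tune a smaller causal LM on the teacher's outputs with a
    # standard next-token-prediction objective.
    tokenizer = AutoTokenizer.from_pretrained(student_name)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(student_name)

    dataset = Dataset.from_list(examples).map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
        batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="student-har", num_train_epochs=3,
                               per_device_train_batch_size=4, learning_rate=2e-5),
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    return model

Sequence-level distillation of this kind only needs the teacher's generated text, not its logits, so the student can be trained even when the teacher is only reachable through an API.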
Similar Papers
Thou Shalt Not Prompt: Zero-Shot Human Activity Recognition in Smart Homes via Language Modeling of Sensor Data & Activities
Artificial Intelligence
Helps smart homes learn new activities without training.
Vision Language Models for Dynamic Human Activity Recognition in Healthcare Settings
Computation and Language
Helps doctors watch patients from afar.
Enhancing Smart Environments with Context-Aware Chatbots using Large Language Models
Computation and Language
Smart homes understand you and help you better.