Knowledge Distillation for LLM-Based Human Activity Recognition in Homes

Published: January 12, 2026 | arXiv ID: 2601.07469v1

By: Julien Cumin, Oussama Er-Rahmany, Xi Chen

Potential Business Impact:

Smart homes understand what you're doing.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Human Activity Recognition (HAR) is a central problem for context-aware applications, especially in smart homes and assisted living. A few very recent studies have shown that Large Language Models (LLMs) can be used for HAR at home, reaching high performance and addressing key challenges. In this paper, we provide new experimental results on the use of LLMs for HAR, on two state-of-the-art datasets. More specifically, we show how recognition performance evolves with the size of the LLM used. Moreover, we experiment with knowledge distillation techniques to fine-tune smaller LLMs on HAR reasoning examples generated by larger LLMs. We show that such fine-tuned models can perform almost as well as the largest LLMs, while having 50 times fewer parameters.
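The distillation setup the abstract describes can be sketched as a small data-preparation step: teacher-generated reasoning traces for sensor-event windows are converted into prompt/completion pairs for supervised fine-tuning of a smaller student model. This is a minimal, hypothetical illustration — the event format, field names, and `to_sft_record` helper are assumptions, not details from the paper:

```python
import json

# Hypothetical teacher output: for each window of home sensor events,
# a large LLM has produced a reasoning trace plus an activity label.
teacher_outputs = [
    {
        "events": "07:02 kitchen motion; 07:03 fridge door open; 07:05 stove on",
        "reasoning": "Motion in the kitchen followed by fridge and stove use "
                     "suggests food preparation.",
        "label": "cooking",
    },
    {
        "events": "22:10 bedroom motion; 22:12 lights off",
        "reasoning": "Evening bedroom activity ending with the lights off "
                     "suggests going to bed.",
        "label": "sleeping",
    },
]

def to_sft_record(sample):
    """Turn one teacher sample into a prompt/completion pair for
    supervised fine-tuning of a smaller student LLM."""
    prompt = (
        "Sensor events:\n" + sample["events"] +
        "\nWhat activity is the resident performing? Explain, then answer."
    )
    completion = sample["reasoning"] + "\nActivity: " + sample["label"]
    return {"prompt": prompt, "completion": completion}

records = [to_sft_record(s) for s in teacher_outputs]

# Write a JSONL file in the shape many fine-tuning toolkits accept.
with open("har_distillation.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")

print(len(records), "training records written")
```

The student model fine-tuned on such records learns to reproduce the larger model's reasoning style and labels, which is what lets it approach the teacher's accuracy at a fraction of the parameter count.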

Page Count
9 pages

Category
Computer Science:
Artificial Intelligence