LLM-Guided Exemplar Selection for Few-Shot Wearable-Sensor Human Activity Recognition
By: Elsen Ronando, Sozo Inoue
Potential Business Impact:
Helps computers learn from fewer examples.
In this paper, we propose an LLM-Guided Exemplar Selection framework to address a key limitation in state-of-the-art Human Activity Recognition (HAR) methods: their reliance on large labeled datasets and purely geometric exemplar selection, which often fail to distinguish similar wearable sensor activities such as walking, walking upstairs, and walking downstairs. Our method incorporates semantic reasoning via an LLM-generated knowledge prior that captures feature importance, inter-class confusability, and exemplar budget multipliers, and uses it to guide exemplar scoring and selection. These priors are combined with margin-based validation cues, PageRank centrality, hubness penalization, and facility-location optimization to obtain a compact and informative set of exemplars. Evaluated on the UCI-HAR dataset under strict few-shot conditions, the framework achieves a macro F1-score of 88.78%, outperforming classical approaches such as random sampling, herding, and $k$-center. The results show that LLM-derived semantic priors, when integrated with structural and geometric cues, provide a stronger foundation for selecting representative sensor exemplars in few-shot wearable-sensor HAR.
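To make the pipeline concrete, here is a minimal Python sketch of how the pieces described in the abstract could fit together: an assumed LLM-derived feature-importance vector reweights the features, PageRank centrality and a hubness penalty score each candidate, and a greedy facility-location step picks the exemplar set. This is not the authors' released code; the margin-based validation cue is omitted for brevity, and all names and weights (llm_feature_weights, alpha, beta, knn) are illustrative assumptions.

```python
# Hypothetical sketch of LLM-guided exemplar selection (not the paper's code).
import numpy as np

def pagerank(S, d=0.85, iters=100):
    """Power-iteration PageRank on a row-normalized similarity matrix S."""
    P = S / S.sum(axis=1, keepdims=True)           # transition probabilities
    n = len(S)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (P.T @ r)
    return r

def select_exemplars(X, k_budget, llm_feature_weights, knn=10,
                     alpha=1.0, beta=0.5):
    """Pick k_budget exemplars from an (n, d) feature matrix X."""
    # 1) Semantic prior: reweight features by assumed LLM importance scores.
    Xw = X * llm_feature_weights
    # 2) Cosine similarity between reweighted sensor windows.
    Xn = Xw / (np.linalg.norm(Xw, axis=1, keepdims=True) + 1e-12)
    S = np.clip(Xn @ Xn.T, 0.0, None)
    # 3) Structural cues: PageRank centrality and hubness (k-occurrence,
    #    i.e. how often a point shows up in others' k-nearest-neighbor lists).
    centrality = pagerank(S)
    nn = np.argsort(-S, axis=1)[:, 1:knn + 1]      # skip self at column 0
    hubness = np.bincount(nn.ravel(), minlength=len(X)).astype(float)
    hubness /= hubness.max() + 1e-12
    prior = alpha * centrality - beta * hubness    # per-point quality bonus
    # 4) Greedy facility location: each step adds the candidate with the
    #    largest marginal coverage gain over all points, plus its prior bonus.
    selected, covered = [], np.zeros(len(X))
    for _ in range(k_budget):
        gains = np.maximum(S - covered[None, :], 0).sum(axis=1) + prior
        gains[selected] = -np.inf                  # never re-pick an exemplar
        j = int(np.argmax(gains))
        selected.append(j)
        covered = np.maximum(covered, S[j])
    return selected

# Toy usage: 100 windows with 561 features (UCI-HAR's feature count),
# a random stand-in for the LLM weight vector, and a budget of 5 exemplars.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 561))
w = rng.uniform(0.5, 1.5, size=561)
print(select_exemplars(X, k_budget=5, llm_feature_weights=w))
```

In the paper's framing, the per-class budget would additionally be scaled by the LLM's exemplar budget multipliers for confusable classes; here a single global budget stands in for that step.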
Similar Papers
Few-shot Vision-based Human Activity Recognition with MLLM-based Visual Reinforcement Learning
Robotics
Teaches computers to recognize actions from few pictures.
Reducing Label Dependency in Human Activity Recognition with Wearables: From Supervised Learning to Novel Weakly Self-Supervised Approaches
Machine Learning (CS)
Lets smartwatches learn your activities with less data.
Bridging Generalization and Personalization in Human Activity Recognition via On-Device Few-Shot Learning
Machine Learning (CS)
Helps smartwatches learn your moves quickly.