Prompting Large Language Models for Training-Free Non-Intrusive Load Monitoring

Published: May 9, 2025 | arXiv ID: 2505.06330v3

By: Junyu Xue, Xudong Wang, Xiaoling He, and more

Potential Business Impact:

Enables smart meters to show which appliances are using power.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Non-intrusive load monitoring (NILM) aims to disaggregate total electricity consumption into individual appliance usage, thus enabling more effective energy management. While deep learning has advanced NILM, it remains limited by its dependence on labeled data, restricted generalization, and lack of explainability. This paper introduces the first prompt-based NILM framework that leverages large language models (LLMs) with in-context learning. We design and evaluate prompt strategies that integrate appliance features, contextual information, and representative time-series examples through detailed case studies. Extensive experiments on the REDD and UK-DALE datasets show that LLMs guided solely by prompts deliver only basic NILM capabilities, with performance that lags behind traditional deep-learning models in complex scenarios. However, the experiments also demonstrate strong generalization across different houses and even regions, achieved simply by adapting the injected appliance features. The framework also provides clear, human-readable explanations for the inferred appliance states. Our findings define the capability boundaries of prompt-only LLMs for NILM tasks; their strengths in generalization and explainability present a promising new direction for the field.

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)