From Facts to Foils: Designing and Evaluating Counterfactual Explanations for Smart Environments

Published: October 3, 2025 | arXiv ID: 2510.03078v1

By: Anna Trapp, Mersedeh Sadeghi, Andreas Vogelsang

Potential Business Impact:

Helps smart homes explain why things happened and what users could change to reach a desired outcome.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

Explainability is increasingly seen as an essential feature of rule-based smart environments. While counterfactual explanations, which describe what could have been done differently to achieve a desired outcome, are a powerful tool in eXplainable AI (XAI), no established methods exist for generating them in these rule-based domains. In this paper, we present the first formalization and implementation of counterfactual explanations tailored to this domain, realized as a plugin that extends an existing explanation engine for smart environments. We conducted a user study (N=17) to evaluate our generated counterfactuals against traditional causal explanations. The results show that user preference is highly contextual: causal explanations are favored for their linguistic simplicity and in time-pressured situations, while counterfactuals are preferred for their actionable content, particularly when a user wants to resolve a problem. Our work contributes a practical framework for a new type of explanation in smart environments and provides empirical evidence to guide the choice of when each explanation type is most effective.
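To make the idea concrete, here is a minimal sketch of how a counterfactual might be derived for a rule-based smart home: search for the smallest change to the current facts under which a desired rule would fire. The rule names, facts, and brute-force search below are hypothetical illustrations, not the paper's actual formalization or plugin.

```python
from itertools import combinations

# Each rule: (name, condition over boolean facts, resulting effect).
# These example rules are invented for illustration.
RULES = [
    ("heating_on", lambda f: f["temp_below_20"] and f["window_closed"], "heating: on"),
    ("heating_off", lambda f: not f["window_closed"], "heating: off"),
]

def outcome(facts):
    """Return the effects of all rules whose conditions hold for these facts."""
    return {name: effect for name, cond, effect in RULES if cond(facts)}

def counterfactual(facts, desired_rule, max_changes=2):
    """Find a minimal set of boolean facts to flip so that desired_rule fires.

    Returns a tuple of flipped fact names, or None if no small change works.
    """
    keys = list(facts)
    for k in range(1, max_changes + 1):
        for combo in combinations(keys, k):
            changed = dict(facts)
            for key in combo:
                changed[key] = not changed[key]
            if desired_rule in outcome(changed):
                return combo
    return None

facts = {"temp_below_20": True, "window_closed": False}
# The heating stayed off because the window is open; what could the user change?
print(counterfactual(facts, "heating_on"))  # -> ('window_closed',)
```

The returned foil ("if the window had been closed, the heating would have turned on") is exactly the kind of actionable content the study found users prefer when they want to resolve a problem.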

Country of Origin
🇩🇪 Germany

Page Count
8 pages

Category
Computer Science:
Artificial Intelligence