Explainability-in-Action: Enabling Expressive Manipulation and Tacit Understanding by Bending Diffusion Models in ComfyUI
By: Ahmed M. Abuzuraiq, Philippe Pasquier
Potential Business Impact:
Artists can change AI-generated art by tweaking the model's internals.
Explainable AI (XAI) in creative contexts can go beyond transparency to support artistic engagement, modifiability, and sustained practice. While curating datasets and training human-scale models can offer artists greater agency and control, large-scale generative models like text-to-image diffusion systems often obscure these possibilities. We suggest that even large models can be treated as creative materials if their internal structure is exposed and manipulable. We propose a craft-based approach to explainability rooted in long-term, hands-on engagement akin to Schön's "reflection-in-action" and demonstrate its application through a model-bending and inspection plugin integrated into the node-based interface of ComfyUI. We show that by interactively manipulating different parts of a generative model, artists can develop an intuition about how each component influences the output.
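To give a rough sense of what "bending" a diffusion model's internals can look like, the sketch below perturbs the activations of one internal block of a text-to-image pipeline with a PyTorch forward hook. This is a minimal illustration, not the authors' ComfyUI plugin: the model ID, the choice of block, and the scaling factor are assumptions made for the example, and it presumes the Hugging Face diffusers library.

```python
# Minimal sketch (not the paper's plugin): "bend" a diffusion model by scaling
# the activations of one internal UNet block via a PyTorch forward hook.
# Model ID, target block, and scale factor are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def bend(module, inputs, output):
    # Scale the block's hidden states; larger factors distort the image more.
    return output * 1.5

# Hook one internal component of the UNet. Swapping the target block (or the
# transformation applied) lets an artist probe how each part shapes the output.
handle = pipe.unet.mid_block.register_forward_hook(bend)

image = pipe("a misty forest at dawn", num_inference_steps=30).images[0]
image.save("bent.png")

handle.remove()  # detach the hook to restore the unmodified model
```

Varying which block is hooked and how its activations are transformed, then comparing the resulting images, is one way the kind of hands-on intuition described in the abstract could be built up.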
Similar Papers
Transparent Adaptive Learning via Data-Centric Multimodal Explainable AI
Artificial Intelligence
Helps computers explain their answers like a teacher.
From Explainable to Explanatory Artificial Intelligence: Toward a New Paradigm for Human-Centered Explanations through Generative AI
Artificial Intelligence
AI explains decisions like a helpful friend.
MATCH: Engineering Transparent and Controllable Conversational XAI Systems through Composable Building Blocks
Human-Computer Interaction
Makes AI systems easier to understand and control.