Explainable AI: Learning from the Learners
By: Ricardo Vinuesa, Steven L. Brunton, Gianmarco Mengaldo
Potential Business Impact:
Explaining how AI learns helps us discover more.
Artificial intelligence now outperforms humans in several scientific and engineering tasks, yet its internal representations often remain opaque. In this Perspective, we argue that explainable artificial intelligence (XAI), combined with causal reasoning, enables "learning from the learners". Focusing on discovery, optimization and certification, we show how the combination of foundation models and explainability methods allows the extraction of causal mechanisms, guides robust design and control, and supports trust and accountability in high-stakes applications. We discuss challenges in faithfulness, generalization and usability of explanations, and propose XAI as a unifying framework for human-AI collaboration in science and engineering.
Similar Papers
Onto-Epistemological Analysis of AI Explanations
Artificial Intelligence
Makes AI decisions understandable and trustworthy.
Explainable artificial intelligence (XAI): from inherent explainability to large language models
Machine Learning (CS)
Lets people understand why computers make choices.
From Explainable to Explanatory Artificial Intelligence: Toward a New Paradigm for Human-Centered Explanations through Generative AI
Artificial Intelligence
AI explains decisions like a helpful friend.