Explainability for Embedding AI: Aspirations and Actuality
By: Thomas Weber
Potential Business Impact:
Helps coders build better AI programs.
With artificial intelligence (AI) embedded in many everyday software systems, effectively and reliably developing and maintaining AI systems is becoming an essential skill for software developers. However, the complexity inherent to AI poses new challenges. Explainable AI (XAI) may allow developers to better understand the systems they build, which, in turn, can help with tasks like debugging. In this paper, we report insights from a series of surveys with software developers that highlight an increased need for explanatory tools to support developers in creating AI systems. However, the feedback also indicates that existing XAI systems still fall short of this aspiration. Thus, we see an unmet need to provide developers with adequate support mechanisms to cope with this complexity so they can embed AI into high-quality software in the future.
Similar Papers
Onto-Epistemological Analysis of AI Explanations
Artificial Intelligence
Makes AI decisions understandable and trustworthy.
Explainable Artificial Intelligence Techniques for Software Development Lifecycle: A Phase-specific Survey
Software Engineering
Makes smart computer programs show how they think.
Beware of "Explanations" of AI
Machine Learning (CS)
Makes AI explanations safer and more trustworthy.