STRIDE: Scalable and Interpretable XAI via Subset-Free Functional Decomposition
By: Chaeyun Ko
Potential Business Impact:
Explains AI decisions faster and more clearly.
Most explainable AI (XAI) frameworks face two practical limitations: the exponential cost of reasoning over feature subsets and the reduced expressiveness of summarizing effects as single scalar values. We present STRIDE, a scalable framework that aims to mitigate both issues by framing explanation as a subset-enumeration-free, orthogonal functional decomposition in a Reproducing Kernel Hilbert Space (RKHS). Rather than producing only scalar attributions, STRIDE computes functional components f_S(x_S) via an analytical projection scheme based on a recursive kernel-centering procedure, avoiding explicit subset enumeration. In the tabular setups we study, the approach is model-agnostic, provides both local and global views, and is supported by theoretical results on orthogonality and L^2 convergence under stated assumptions. On public tabular benchmarks in our environment, we observed speedups ranging from 0.6 times (slower than TreeSHAP on a small dataset) to 9.7 times (California), with a median of approximately 3.0 times across 10 datasets, while maintaining high fidelity (R^2 between 0.81 and 0.999) and substantial rank agreement on most datasets. Overall, STRIDE complements scalar attribution methods by offering a structured functional perspective, enabling novel diagnostics such as 'component surgery' to quantitatively measure the impact of specific interactions within our experimental scope.
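The kernel-centering primitive the abstract refers to can be illustrated with a minimal, generic sketch. This is not STRIDE's implementation: the choice of an RBF kernel and the function names here are assumptions, and the example shows only the standard feature-space centering step (K' = H K H with H = I - 11^T/n), whose repeated application is one building block for constructing mutually orthogonal components.

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gram matrix of an RBF kernel (illustrative kernel choice)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def center_gram(K):
    """Center a Gram matrix in feature space: K' = H K H, H = I - 11^T/n.

    After centering, the implicit feature vectors have zero mean, so every
    row and column of K' sums to (numerically) zero -- the orthogonality
    property that projection-based decompositions rely on.
    """
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))       # toy data, 50 samples, 3 features
Kc = center_gram(rbf_gram(X))
print(np.abs(Kc.sum(axis=0)).max())  # near machine zero after centering
```

Centering removes the constant (mean) component from the RKHS representation; applying analogous projections per feature subset is one way such a decomposition can avoid enumerating subsets explicitly.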
Similar Papers
STRIDE: Subset-Free Functional Decomposition for XAI in Tabular Settings
Machine Learning (CS)
Shows how computer decisions work, not just why.
STRIDE: A Systematic Framework for Selecting AI Modalities -- Agentic AI, AI Assistants, or LLM Calls
Artificial Intelligence
Chooses the right AI for the job.
ASTRIDE: A Security Threat Modeling Platform for Agentic-AI Applications
Artificial Intelligence
Finds hidden dangers in smart computer programs.