Interpretive Efficiency: Information-Geometric Foundations of Data Usefulness
By: Ronald Katende
Potential Business Impact:
Measures how well AI understands what it sees.
Interpretability is central to trustworthy machine learning, yet existing metrics rarely quantify how effectively data support an interpretive representation. We propose Interpretive Efficiency, a normalized, task-aware functional that measures the fraction of task-relevant information transmitted through an interpretive channel. The definition is grounded in five axioms ensuring boundedness, Blackwell-style monotonicity, data-processing stability, admissible invariance, and asymptotic consistency. We relate the functional to mutual information and derive a local Fisher-geometric expansion, then establish asymptotic and finite-sample estimation guarantees using standard empirical-process tools. Experiments on controlled image and signal tasks demonstrate that the measure recovers theoretical orderings, exposes representational redundancy masked by accuracy, and correlates with robustness, making it a practical, theory-backed diagnostic for representation design.
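The abstract does not give the functional's exact form, but it describes it as the fraction of task-relevant information transmitted through an interpretive channel and relates it to mutual information. As a minimal sketch under that assumption, the Python snippet below estimates a plug-in proxy I(Z;Y)/I(X;Y) on discrete toy data; the helper names discrete_mutual_information and interpretive_efficiency_proxy are illustrative, not taken from the paper.

import numpy as np

def discrete_mutual_information(a, b):
    # Plug-in estimate of I(a; b) in nats for discrete-valued arrays.
    a_vals, a_idx = np.unique(np.asarray(a), return_inverse=True)
    b_vals, b_idx = np.unique(np.asarray(b), return_inverse=True)
    joint = np.zeros((len(a_vals), len(b_vals)))
    np.add.at(joint, (a_idx, b_idx), 1.0)   # contingency table of co-occurrence counts
    joint /= joint.sum()                     # empirical joint distribution
    pa = joint.sum(axis=1, keepdims=True)    # marginal of a
    pb = joint.sum(axis=0, keepdims=True)    # marginal of b
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / (pa @ pb)[mask])))

def interpretive_efficiency_proxy(x, z, y):
    # Assumed proxy: fraction of task-relevant information the representation z
    # keeps about the label y, relative to the raw input x, clipped to [0, 1].
    # This is a stand-in for the paper's functional, not its exact definition.
    ixy = discrete_mutual_information(x, y)
    if ixy == 0.0:
        return 0.0
    return float(np.clip(discrete_mutual_information(z, y) / ixy, 0.0, 1.0))

# Toy usage: the label depends only on the parity bit of x.
rng = np.random.default_rng(0)
x = rng.integers(0, 8, size=5000)   # raw observations over 8 symbols
y = x % 2                           # task label: parity of x
z_good = x % 2                      # representation keeping the relevant bit
z_bad = x // 4                      # representation discarding the relevant bit
print(interpretive_efficiency_proxy(x, z_good, y))  # close to 1
print(interpretive_efficiency_proxy(x, z_bad, y))   # close to 0

On this toy data, a representation that keeps the label-relevant bit scores near one and a representation that discards it scores near zero, which is the kind of ordering the abstract says the measure is designed to recover.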
Similar Papers
Toward Faithfulness-guided Ensemble Interpretation of Neural Network
Machine Learning (CS)
Shows how computer brains make decisions clearly.
Foundations of Interpretable Models
Machine Learning (CS)
Makes AI easier to understand and build.
Interpretability as Alignment: Making Internal Understanding a Design Principle
Machine Learning (CS)
Makes AI understandable and safe for people.