Interpretive Efficiency: Information-Geometric Foundations of Data Usefulness

Published: December 6, 2025 | arXiv ID: 2512.06341v1

By: Ronald Katende

Potential Business Impact:

Quantifies how efficiently data support an AI system's interpretation of what it observes.

Business Areas:
Data Mining, Data and Analytics, Information Technology

Interpretability is central to trustworthy machine learning, yet existing metrics rarely quantify how effectively data support an interpretive representation. We propose Interpretive Efficiency, a normalized, task-aware functional that measures the fraction of task-relevant information transmitted through an interpretive channel. The definition is grounded in five axioms ensuring boundedness, Blackwell-style monotonicity, data-processing stability, admissible invariance, and asymptotic consistency. We relate the functional to mutual information and derive a local Fisher-geometric expansion, then establish asymptotic and finite-sample estimation guarantees using standard empirical-process tools. Experiments on controlled image and signal tasks demonstrate that the measure recovers theoretical orderings, exposes representational redundancy masked by accuracy, and correlates with robustness, making it a practical, theory-backed diagnostic for representation design.
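The paper defines Interpretive Efficiency axiomatically, but the abstract's connection to mutual information suggests a simple intuition: a normalized ratio of the task-relevant information preserved by a representation to the information available in the raw data. As a minimal sketch (not the paper's actual functional), the toy example below uses a plug-in mutual-information estimate on discrete samples; the names `mutual_information`, `z_good`, and `z_noisy` are illustrative assumptions.

```python
import numpy as np

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in nats for discrete samples.
    Illustrative helper, not the paper's estimator."""
    xs, ys = np.asarray(xs), np.asarray(ys)
    mi = 0.0
    for x in np.unique(xs):
        for y in np.unique(ys):
            pxy = np.mean((xs == x) & (ys == y))
            if pxy > 0:
                px = np.mean(xs == x)
                py = np.mean(ys == y)
                mi += pxy * np.log(pxy / (px * py))
    return mi

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=5000)          # task labels
x = y                                      # raw data carries the label exactly
z_good = x                                 # lossless representation
z_noisy = np.where(rng.random(5000) < 0.3, # representation flipping 30% of labels
                   1 - x, x)

# Normalized ratio: fraction of task-relevant information the
# representation transmits relative to the raw data.
eff_good = mutual_information(z_good, y) / mutual_information(x, y)
eff_noisy = mutual_information(z_noisy, y) / mutual_information(x, y)
print(eff_good, eff_noisy)
```

Consistent with the Blackwell-style monotonicity and data-processing properties the abstract lists, the noisy representation scores strictly below the lossless one, and neither ratio exceeds 1.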

Page Count
23 pages

Category
Computer Science:
Machine Learning (CS)