A Theory of the Mechanics of Information: Generalization Through Measurement of Uncertainty (Learning is Measuring)

Published: October 26, 2025 | arXiv ID: 2510.22809v1

By: Christopher J. Hazard, Michael Resnick, Jacob Beel, and others

Potential Business Impact:

Enables machines to learn directly from raw, messy data without building explicit models.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Traditional machine learning relies on explicit models and domain assumptions, limiting flexibility and interpretability. We introduce a model-free framework that uses surprisal (information-theoretic uncertainty) to analyze and perform inferences directly from raw data, eliminating distribution modeling, reducing bias, and enabling efficient updates, including direct edits and deletion of training data. By quantifying relevance through uncertainty, the approach enables generalizable inference across tasks including generative inference, causal discovery, anomaly detection, and time series forecasting. It emphasizes traceability, interpretability, and data-driven decision making, offering a unified, human-understandable framework for machine learning, and achieves at or near state-of-the-art performance across most common machine learning tasks. The mathematical foundations create a "physics" of information, enabling these techniques to apply effectively to a wide variety of complex data types, including data with missing values. Empirical results indicate that this may be a viable alternative to neural networks for scalable machine learning and artificial intelligence that maintains human understandability of the underlying mechanics.
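The core idea of quantifying relevance through surprisal can be illustrated with a minimal sketch. This is not the paper's actual formulation: the exponential distance kernel, the function names, and the weighting scheme below are illustrative assumptions chosen only to show how exp(-surprisal) of a training case with respect to a query can serve as a relevance weight for model-free inference.

```python
import math

def surprisal(p: float) -> float:
    """Self-information of an event with probability p, in nats."""
    return -math.log(p)

def predict(query: float, cases: list[tuple[float, float]],
            bandwidth: float = 1.0) -> float:
    """Model-free prediction: weight each training case (x, y) by
    exp(-surprisal), i.e., by how unsurprising it is given the query.

    The exponential distance kernel p = exp(-|query - x| / bandwidth)
    is an assumption for illustration, not the paper's derivation.
    """
    weights = []
    for x, y in cases:
        p = math.exp(-abs(query - x) / bandwidth)
        s = surprisal(max(p, 1e-300))   # guard against log(0)
        weights.append((math.exp(-s), y))  # relevance weight == p
    total = sum(w for w, _ in weights)
    return sum(w * y for w, y in weights) / total
```

For example, `predict(0.0, [(0.0, 1.0), (10.0, 2.0)])` returns a value very close to 1.0, since the distant case contributes a large surprisal and hence a near-zero relevance weight. Editing or deleting a training case simply means editing or deleting its tuple; no model needs retraining, which mirrors the update property the abstract describes.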


Page Count
117 pages

Category
Computer Science:
Machine Learning (CS)