Counterfactual Basis Extension and Representational Geometry: An MDL-Constrained Model of Conceptual Growth
By: Chainarong Amornbunchornvej
Concept learning becomes possible only when existing representations fail to account for experience. Most models of learning and inference, however, presuppose a fixed representational basis within which belief updating occurs. In this paper, I address a prior question: under what structural conditions can the representational basis itself expand in a principled and selective way? I propose a geometric framework in which conceptual growth is modeled as admissible basis extension evaluated under a Minimum Description Length (MDL) criterion. Experience, whether externally observed or internally simulated, is represented as vectors relative to the agent's current conceptual subspace. Residual components capture systematic representational failure, and candidate conceptual extensions are restricted to low-rank, admissible transformations. I show that any MDL-accepted extension can be chosen so that its novel directions lie entirely within the residual span induced by experience, while extensions orthogonal to this span strictly increase description length and are therefore rejected. This yields a conservative account of imagination and conceptual innovation. Internally generated counterfactual representations contribute to learning only insofar as they expose or amplify structured residual error, and cannot introduce arbitrary novelty. I further distinguish representational counterfactuals (counterfactuals over an agent's conceptual basis) from causal or value-level counterfactuals, and show how MDL provides a normative selection principle governing representational change. Overall, the framework characterizes conceptual development as an error-driven, geometry-constrained process of basis extension, clarifying both the role and the limits of imagination in learning and theory change.
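To make the acceptance condition concrete, the following is a minimal numerical sketch, not the paper's implementation: the two-part description length (a fixed per-parameter model cost plus a crude Gaussian-style code for the residual), the dimensions, and names such as description_length, u_residual, and u_orthogonal are illustrative assumptions. The sketch builds a small orthonormal basis, generates experience with systematic structure outside that basis, and compares the code length of extending the basis along the residual span against extending it along a direction orthogonal to the dominant residual direction.

# Minimal numerical sketch (assumed names and code costs, not the paper's own formalism).
import numpy as np

rng = np.random.default_rng(0)

def description_length(X, B, bits_per_param=8.0, eps=1e-12):
    # Two-part MDL code: per-parameter cost for the basis plus a crude
    # Gaussian code length for the residual left unexplained by span(B).
    residual = X - X @ B @ B.T
    mse = np.mean(residual ** 2) + eps
    return bits_per_param * B.size + 0.5 * X.size * np.log2(mse / eps)

# Current conceptual subspace: k = 2 orthonormal directions in a 6-D feature space.
d, k, n = 6, 2, 200
B = np.linalg.qr(rng.normal(size=(d, k)))[0]

# Experience carries systematic structure along one direction outside span(B),
# plus small isotropic noise.
hidden = np.linalg.qr(np.c_[B, rng.normal(size=(d, 1))])[0][:, k:k + 1]
coeffs = rng.normal(size=(n, k + 1))
X = coeffs[:, :k] @ B.T + 3.0 * coeffs[:, k:] @ hidden.T + 0.05 * rng.normal(size=(n, d))

# Candidate extension inside the residual span: top right-singular vector of the residual.
residual = X - X @ B @ B.T
u_residual = np.linalg.svd(residual, full_matrices=False)[2][0][:, None]

# Candidate extension orthogonal to both span(B) and the dominant residual direction.
u_orthogonal = np.linalg.qr(np.c_[B, u_residual, rng.normal(size=(d, 1))])[0][:, -1:]

print("DL(current basis)          :", round(description_length(X, B), 1), "bits")
print("DL(+ residual-span vector) :", round(description_length(X, np.c_[B, u_residual]), 1), "bits")
print("DL(+ orthogonal vector)    :", round(description_length(X, np.c_[B, u_orthogonal]), 1), "bits")

On a typical run of this sketch, the residual-span extension shortens the total code and would be accepted, while the orthogonal extension pays the additional model cost without reducing the residual error and would be rejected, mirroring the selection principle stated in the abstract.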