Structured Representation
By: Arun Kumar, Paul Schrater
Potential Business Impact:
Teaches computers to learn from relationships.
Invariant representations are core to representation learning, yet a central challenge remains: uncovering invariants that are stable and transferable without suppressing task-relevant signals. This raises fundamental questions about the appropriate level of abstraction at which such invariants should be defined, and which aspects of a system they should characterize. Interpretation of the environment relies on abstract knowledge structures to make sense of the current state, and this interpretation drives interactions, which are essential drivers of learning and knowledge acquisition. We posit that interpretation operates at the level of higher-order relational knowledge; invariant structures must therefore be where knowledge resides, specifically as partitions defined by the closure of relational paths within an abstract knowledge space. These partitions serve as the core invariant representations, forming the structural substrate where knowledge is stored and learning occurs. Inter-partition connectors, in turn, enable the deployment of these knowledge partitions by encoding task-relevant transitions. Invariant partitions thus provide the foundational primitives of structured representation. We formalize the computational foundations of these invariant partitions in terms of the closed semiring, a relational algebraic structure.
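The abstract's two computational ingredients, the closure of relational paths under a closed semiring and the partitions that closure induces, can be illustrated concretely. Below is a minimal sketch, not the authors' implementation, that instantiates the closed semiring as the Boolean semiring: semiring "addition" (logical OR) combines alternative paths, semiring "multiplication" (logical AND) composes steps along a path, and the closure is computed Floyd-Warshall style. Partitions are then read off as sets of mutually reachable elements. The function names, the toy relation R, and the choice of Boolean semiring are illustrative assumptions, not details taken from the paper.

from itertools import product

# Boolean closed semiring: "+" = OR over alternative paths,
# "*" = AND along a composed path. The closure C = R* marks every
# pair (i, j) connected by some relational path.

def closure(R):
    """Reflexive-transitive closure of a Boolean relation matrix,
    computed Floyd-Warshall style over the Boolean semiring."""
    n = len(R)
    C = [[bool(R[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k, i, j in product(range(n), repeat=3):  # k varies slowest
        C[i][j] = C[i][j] or (C[i][k] and C[k][j])
    return C

def partitions(C):
    """Group elements that are mutually reachable under the closure;
    these closed cycles of relational paths play the role of the
    invariant partitions described in the abstract."""
    n = len(C)
    blocks, seen = [], set()
    for i in range(n):
        if i in seen:
            continue
        block = {j for j in range(n) if C[i][j] and C[j][i]}
        seen |= block
        blocks.append(sorted(block))
    return blocks

# Hypothetical toy relation over 5 entities: 0 -> 1 -> 2 -> 0 is a
# closed relational cycle (one partition); 2 -> 3 -> 4 are one-way
# inter-partition connectors encoding task-relevant transitions.
R = [[0, 1, 0, 0, 0],
     [0, 0, 1, 0, 0],
     [1, 0, 0, 1, 0],
     [0, 0, 0, 0, 1],
     [0, 0, 0, 0, 0]]

print(partitions(closure(R)))  # [[0, 1, 2], [3], [4]]

Note that swapping in a different closed semiring (for example, min-plus for shortest paths) changes what the closure computes while leaving the partition construction unchanged, which is the generality the algebraic formulation buys.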
Similar Papers
Cross-Model Semantics in Representation Learning
Machine Learning (CS)
Makes AI models share knowledge better.
Causal invariant geographic network representations with feature and structural distribution shifts
Machine Learning (CS)
Helps computers understand changing maps better.
Structure Transfer: an Inference-Based Calculus for the Transformation of Representations
Machine Learning (CS)
Changes how computers understand different information.