Cross-Model Semantics in Representation Learning
By: Saleh Nikooroo, Thomas Engel
Potential Business Impact:
Makes AI models share knowledge better.
The internal representations learned by deep networks are often sensitive to architecture-specific choices, raising questions about the stability, alignment, and transferability of learned structure across models. In this paper, we investigate how structural constraints, such as linear shaping operators and corrective paths, affect the compatibility of internal representations across different architectures. Building on insights from prior studies of structured transformations and convergence, we develop a framework for measuring and analyzing representational alignment across networks with distinct but related architectural priors. Through a combination of theoretical analysis, empirical probes, and controlled transfer experiments, we demonstrate that structural regularities induce representational geometry that is more stable under architectural variation. This suggests that certain forms of inductive bias not only support generalization within a model but also improve the interoperability of learned features across models. We conclude with a discussion of the implications of representational transferability for model distillation, modular learning, and the principled design of robust learning systems.
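The abstract does not name the alignment measure used in the framework. As an illustration only, one common way to quantify representational similarity between two networks evaluated on the same probe inputs is linear centered kernel alignment (CKA); the function below and the activation shapes are hypothetical and are not taken from the paper.

import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representation matrices X (n x d1) and Y (n x d2)."""
    # Center each feature dimension over the probe batch.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-style numerator and per-model normalizers for the linear kernel.
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, ord="fro")
                   * np.linalg.norm(Y.T @ Y, ord="fro"))
    return numerator / denominator

# Illustrative usage: compare hidden activations of two architecturally
# distinct models on the same 512-example probe batch.
rng = np.random.default_rng(0)
acts_model_a = rng.normal(size=(512, 256))   # e.g. baseline hidden layer
acts_model_b = rng.normal(size=(512, 384))   # e.g. structured variant
print(f"CKA similarity: {linear_cka(acts_model_a, acts_model_b):.3f}")

A score near 1 indicates closely aligned representational geometry; comparing such scores across architectures with and without structural constraints is one plausible way to probe the stability the paper describes.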
Similar Papers
Understanding Learning Dynamics Through Structured Representations
Machine Learning (CS)
Makes AI learn faster and smarter with fewer mistakes.
Structured Transformations for Stable and Interpretable Neural Computation
Machine Learning (CS)
Makes computer learning more stable and understandable.