Latent Space Topology Evolution in Multilayer Perceptrons
By: Eduardo Paluzo-Hidalgo
Potential Business Impact:
Shows how neural networks organise data by tracking how its shape changes across layers.
This paper introduces a topological framework for interpreting the internal representations of Multilayer Perceptrons (MLPs). We construct a simplicial tower, a sequence of simplicial complexes connected by simplicial maps, that captures how data topology evolves across network layers. Our approach enables bi-persistence analysis: layer persistence tracks topological features within each layer across scales, while MLP persistence reveals how these features transform through the network. We prove stability theorems for our topological descriptors and establish that linear separability in latent spaces corresponds to disconnected components in the nerve complexes. To make our framework practical, we develop a combinatorial algorithm for computing MLP persistence and introduce trajectory-based visualisations that track data flow through the network. Experiments on synthetic and real-world medical data demonstrate our method's ability to identify redundant layers, reveal critical topological transitions, and provide interpretable insights into how MLPs progressively organise data for classification.
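The abstract's notion of "layer persistence" can be illustrated with the simplest case: 0-dimensional persistence, which tracks when connected components of a layer's activation cloud merge as a distance scale grows. The sketch below is not the paper's algorithm; it is a minimal illustration of Vietoris-Rips H0 persistence via union-find, where a long-lived gap between merge scales signals well-separated clusters, in the spirit of the paper's link between separability and disconnected components.

```python
import itertools
import math

def h0_persistence(points):
    """0-dimensional persistence of a point cloud under the
    Vietoris-Rips filtration: every point is born at scale 0,
    and components die (merge) as the scale reaches the edge
    lengths of a minimum spanning tree. Returns the sorted
    list of death scales (n - 1 merge events for n points)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Union-find with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Process all pairwise distances in increasing order;
    # each edge that joins two components records a death scale.
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in itertools.combinations(range(n), 2)
    )
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)
    return deaths

# Two tight clusters far apart: small deaths within clusters,
# one large death when the clusters finally merge.
acts = [(0.0, 0.0), (0.0, 1.0), (10.0, 0.0), (10.0, 1.0)]
deaths = h0_persistence(acts)
print(deaths)  # [1.0, 1.0, 10.0]
```

Applied to the activations of successive layers, the gap between the largest and second-largest death scale gives a crude per-layer separation score; the paper's simplicial tower additionally connects these per-layer summaries through the network's maps, which this sketch does not attempt.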
Similar Papers
Persistent Topological Structures and Cohomological Flows as a Mathematical Framework for Brain-Inspired Representation Learning
Machine Learning (CS)
Helps computers understand brain patterns better.
Topological Dictionary Learning
Signal Processing
Finds hidden patterns in connected data.
Graph signal aware decomposition of dynamic networks via latent graphs
Signal Processing
Finds hidden patterns in changing networks.