From Universal Approximation Theorem to Tropical Geometry of Multi-Layer Perceptrons
By: Yi-Shan Chu, Yueh-Cheng Kuo
Potential Business Impact:
Lets neural networks start with the right decision shapes before training begins.
We revisit the Universal Approximation Theorem (UAT) through the lens of the tropical geometry of neural networks and introduce a constructive, geometry-aware initialization for sigmoidal multi-layer perceptrons (MLPs). Tropical geometry shows that Rectified Linear Unit (ReLU) networks admit decision functions with a combinatorial structure often described as a tropical rational function, namely a difference of tropical polynomials. Focusing on planar binary classification, we design purely sigmoidal MLPs that adhere to the finite-sum format of the UAT: a finite linear combination of shifted and scaled sigmoids of affine functions. The resulting models yield decision boundaries that already align with prescribed shapes at initialization and can be refined by standard training if desired. This provides a practical bridge between the tropical perspective and smooth MLPs, enabling interpretable, shape-driven initialization without resorting to ReLU architectures. We focus on the construction and empirical demonstrations in two dimensions; theoretical analysis and higher-dimensional extensions are left for future work.
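To make the finite-sum format concrete, below is a minimal NumPy sketch assuming a hypothetical triangular target region: each hidden unit is a sigmoid of an affine function encoding one supporting half-plane of the triangle, so the decision function f(x) = Σᵢ σ(k(nᵢ·x + dᵢ)) − 2.5 is exactly a finite linear combination of shifted and scaled sigmoids of affine functions. The triangle, the steepness k, and the 2.5 threshold are illustrative choices, not the paper's construction.

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid, the activation assumed throughout."""
    return 1.0 / (1.0 + np.exp(-z))

# Prescribed (hypothetical) shape: the triangle with vertices
# (0,0), (4,0), (2,3), written as three half-planes
# n_i . x + d_i >= 0 that hold inside the triangle.
normals = np.array([[ 0.0,  1.0],   # edge y = 0
                    [-3.0, -2.0],   # edge from (4,0) to (2,3)
                    [ 3.0, -2.0]])  # edge from (2,3) to (0,0)
offsets = np.array([0.0, 12.0, 0.0])

k = 20.0  # steepness: large k makes each sigmoid approximate a half-plane indicator

def decision(points):
    """UAT-style decision function: a finite linear combination of
    shifted/scaled sigmoids of affine functions of the input.
    Inside the triangle all three sigmoids are ~1, so the sum is ~3;
    subtracting 2.5 puts the zero level set along the triangle's boundary."""
    z = points @ normals.T + offsets          # affine pre-activations, shape (N, 3)
    return sigmoid(k * z).sum(axis=1) - 2.5   # f(x) > 0  <=>  x inside (approximately)

pts = np.array([[ 2.0, 1.0],   # interior point
                [ 2.0, 4.0],   # above the apex
                [-1.0, 0.5]])  # left of the triangle
print(decision(pts) > 0)       # expected: [ True False False ]
```

At this steepness the zero level set of f already traces the prescribed triangle, mirroring the abstract's claim that decision boundaries align with prescribed shapes at initialization; larger k sharpens the boundary, and standard gradient training can refine the weights from this geometry-aware start.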
Similar Papers
Beyond Universal Approximation Theorems: Algorithmic Uniform Approximation by Neural Networks Trained with Noisy Data
Machine Learning (Stat)
Teaches computers to learn from messy information.
Formal Analysis of the Sigmoid Function and Formal Proof of the Universal Approximation Theorem
Logic in Computer Science
Formally proves that neural networks can approximate any continuous function.
Floating-Point Neural Networks Are Provably Robust Universal Approximators
Machine Learning (CS)
Neural networks can approximate any function, even with floating-point rounding errors.