Inverting Non-Injective Functions with Twin Neural Network Regression
By: Sebastian J. Wetzel
Potential Business Impact:
Finds hidden settings for machines using math.
Non-injective functions are not invertible. However, a non-injective function can be restricted to sub-domains on which it is locally injective and surjective, and thus invertible, provided the input and output spaces have the same dimensionality. Further, even if the dimensionalities do not match, it is often possible to choose a preferred solution from among the many possible solutions. Twin neural network regression is naturally capable of incorporating these properties to invert non-injective functions. Twin neural network regression is trained to predict the adjustment to a well-known input variable $\mathbf{x}^{\text{anchor}}$ that yields an estimate of an unknown $\mathbf{x}^{\text{new}}$ under a change of the target variable from $\mathbf{y}^{\text{anchor}}$ to $\mathbf{y}^{\text{new}}$. In combination with k-nearest-neighbor search, I propose a deterministic framework that finds the input parameters corresponding to a given target value of a non-injective function. The method is demonstrated by inverting non-injective functions describing toy problems and robot arm control that are a) defined by data or b) known as mathematical formulas.
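A minimal sketch of the idea described in the abstract, not the author's code: a regressor is trained to predict the input adjustment $\mathbf{x}^{\text{new}} - \mathbf{x}^{\text{anchor}}$ from $(\mathbf{x}^{\text{anchor}}, \mathbf{y}^{\text{anchor}}, \mathbf{y}^{\text{new}})$, and inversion at a target $y^{*}$ is done by selecting nearby anchors via k-nearest-neighbor search and adding the predicted adjustment. The 1-D toy function $f(x) = x^2$, the use of scikit-learn's `MLPRegressor` as a stand-in for the twin network, and the restriction of training pairs to nearby inputs (reflecting local invertibility on sub-domains) are all illustrative assumptions.

```python
# Illustrative sketch only; hyperparameters and the toy problem are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Toy non-injective function: f(x) = x^2 on [-2, 2].
f = lambda x: x ** 2
x_train = rng.uniform(-2.0, 2.0, size=2000)
y_train = f(x_train)

# Build (anchor, new) training pairs restricted to nearby inputs, so the
# adjustment x_new - x_anchor is locally well defined on each sub-domain.
i = rng.integers(0, len(x_train), size=20000)
j = rng.integers(0, len(x_train), size=20000)
mask = np.abs(x_train[j] - x_train[i]) < 0.5
i, j = i[mask], j[mask]

features = np.column_stack([x_train[i], y_train[i], y_train[j]])
targets = x_train[j] - x_train[i]

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(features, targets)

# k-NN over training targets: anchors whose y is close to the requested y*.
knn = NearestNeighbors(n_neighbors=5).fit(y_train.reshape(-1, 1))

def invert(y_star, prefer_positive=True):
    """Estimate an x with f(x) close to y_star, using anchors on a preferred branch."""
    _, idx = knn.kneighbors([[y_star]])
    anchors = list(idx[0])
    if prefer_positive:  # choose the locally injective sub-domain x >= 0
        anchors = [k for k in anchors if x_train[k] >= 0] or anchors
    feats = np.column_stack([x_train[anchors], y_train[anchors],
                             np.full(len(anchors), y_star)])
    # Estimated solution: anchor plus predicted adjustment, averaged over anchors.
    return (x_train[anchors] + model.predict(feats)).mean()

print(invert(2.0))  # expected to land near +sqrt(2) on the positive branch
```

Filtering the anchors to one sub-domain is what selects a preferred solution among the several preimages; without it, anchors from both branches would pull the estimate toward different solutions.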
Similar Papers
Learning Generalizable Neural Operators for Inverse Problems
Machine Learning (CS)
Solves hard math problems by learning patterns.
Neural operators for solving nonlinear inverse problems
Numerical Analysis
Teaches computers to solve hard math problems.
Optimality-Informed Neural Networks for Solving Parametric Optimization Problems
Optimization and Control
Teaches computers to solve hard problems faster.