Neural Network Convergence for Variational Inequalities
By: Yun Zhao, Harry Zheng
Potential Business Impact:
Helps computers solve tricky financial math problems.
We propose an approach to applying neural networks to linear parabolic variational inequalities. We use loss functions that directly incorporate the variational inequality on the whole domain, bypassing the need to determine the stopping region in advance, and we prove the existence of neural networks whose losses converge to zero. We also prove functional convergence in the Sobolev space. We then apply our approach to an optimal investment and stopping problem in finance. By leveraging duality, we convert the nonlinear HJB-type variational inequality of the primal problem into a linear variational inequality of the dual problem and prove convergence of the primal value function recovered from the dual neural network solution, an outcome made possible by our Sobolev norm analysis. We illustrate the versatility and accuracy of our method with numerical examples for both power and non-HARA utilities, as well as high-dimensional American put option pricing. Our results underscore the potential of neural networks for solving variational inequalities in optimal stopping and control problems.
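As a concrete illustration of a loss that incorporates the variational inequality on the whole domain, the sketch below penalizes the standard complementarity form min(-u_t - Lu, u - g) = 0 of a linear parabolic obstacle problem at sampled collocation points. This is a common PINN-style construction given here only as an assumption: the names (net, mu, sigma, payoff) are illustrative placeholders, and the paper's exact loss and boundary/terminal terms may differ.

```python
import torch

def vi_residual_loss(net, t, x, mu, sigma, payoff):
    """Penalize the complementarity residual of a linear parabolic VI
    at sampled collocation points (t, x); 1-d state shown for brevity."""
    t = t.requires_grad_(True)
    x = x.requires_grad_(True)
    u = net(torch.cat([t, x], dim=1))

    # First-order derivatives via automatic differentiation
    u_t, u_x = torch.autograd.grad(u.sum(), (t, x), create_graph=True)
    # Second derivative in x
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]

    # Generator term  L u = mu(x) u_x + 0.5 sigma(x)^2 u_xx  (no source term)
    Lu = mu(x) * u_x + 0.5 * sigma(x) ** 2 * u_xx

    # Complementarity residual: -u_t - Lu >= 0 and u >= g everywhere,
    # with equality in at least one of the two at every point.
    residual = torch.minimum(-u_t - Lu, u - payoff(x))
    return (residual ** 2).mean()
```

In practice this residual would be combined with terminal-condition and boundary losses and minimized over the network parameters, which approximates the solution without prescribing the stopping region in advance.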
Similar Papers
Neural operators for solving nonlinear inverse problems
Numerical Analysis
Teaches computers to solve hard math problems.
Deep Learning for Continuous-time Stochastic Control with Jumps
Machine Learning (CS)
Teaches computers to make smart choices in risky situations.