A Robust Algorithm for Non-IID Machine Learning Problems with Convergence Analysis
By: Qing Xu, Xiaohua Xuan
Potential Business Impact:
Makes machine-learning training more reliable when data is imbalanced or comes from different sources, by solving the underlying worst-case (minimax) optimization problem.
In this paper, we propose an improved numerical algorithm for solving minimax problems, built on nonsmooth optimization, quadratic programming, and an iterative process. We also provide a rigorous proof of convergence for the algorithm under mild assumptions such as gradient continuity and boundedness. The algorithm is broadly applicable in areas such as robust optimization and imbalanced learning.
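The abstract does not spell out the iteration, so the following is only a minimal sketch of the general recipe it hints at: an iterative scheme for a finite minimax problem min_x max_i f_i(x) whose search direction comes from a small quadratic program over (sub)gradients of the nearly active components. The function names (`min_norm_weights`, `minimax_descent`), the eps/tolerance parameters, and the toy losses are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of a QP-based iterative scheme for min_x max_i f_i(x).
# Not the paper's algorithm; an illustration of the general technique.
import numpy as np
from scipy.optimize import minimize


def min_norm_weights(grads):
    """Solve the small QP: min_{w in simplex} || sum_i w_i * g_i ||^2.

    The minimizing combination of (sub)gradients yields a common descent
    direction for the pointwise-max objective (classic finite-minimax idea)."""
    m = grads.shape[0]
    G = grads @ grads.T                                  # Gram matrix of the gradients
    obj = lambda w: float(w @ G @ w)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * m
    w0 = np.full(m, 1.0 / m)
    return minimize(obj, w0, bounds=bounds, constraints=cons).x


def minimax_descent(fs, grad_fs, x0, eps=1e-3, iters=100, tol=1e-6):
    """Iteratively reduce F(x) = max_i f_i(x) using the QP-combined direction."""
    x = np.asarray(x0, dtype=float)
    F = lambda z: max(f(z) for f in fs)                  # pointwise-max objective
    for _ in range(iters):
        vals = np.array([f(x) for f in fs])
        active = np.where(vals >= vals.max() - eps)[0]   # eps-active components
        grads = np.stack([grad_fs[i](x) for i in active])
        w = min_norm_weights(grads)
        d = -(w @ grads)                                 # common descent direction
        if np.linalg.norm(d) < tol:
            break                                        # approximately minimax-stationary
        t = 1.0                                          # backtracking (Armijo-style) line search on F
        while F(x + t * d) > F(x) - 0.5 * t * np.linalg.norm(d) ** 2 and t > 1e-12:
            t *= 0.5
        x = x + t * d
    return x


# Toy usage: two quadratic losses whose maximum is minimized near x = 0.5.
fs = [lambda x: float((x - 1.0) ** 2), lambda x: float(x ** 2)]
grad_fs = [lambda x: 2.0 * (x - 1.0), lambda x: 2.0 * x]
x_star = minimax_descent(fs, grad_fs, x0=np.array([3.0]))
print(x_star)  # expected to land near 0.5, where the two losses balance
```

The design choice in this sketch is that the QP returns the minimum-norm element of the convex hull of the nearly active gradients, which acts as a steepest-descent direction for the nonsmooth max function; a vanishing direction certifies approximate minimax stationarity, which is presumably the kind of stationarity a convergence analysis under gradient continuity and boundedness would target.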
Similar Papers
A novel numerical method tailored for unconstrained optimization problems
Optimization and Control
Proposes a new numerical method for solving unconstrained optimization problems more efficiently, including difficult cases.
Adaptive Algorithms with Sharp Convergence Rates for Stochastic Hierarchical Optimization
Machine Learning (CS)
Presents adaptive algorithms with sharp convergence rates for stochastic hierarchical optimization, without requiring prior knowledge of problem difficulty.
Convergence of a class of gradient-free optimisation schemes when the objective function is noisy, irregular, or both
Computation
Establishes convergence guarantees for gradient-free optimization schemes applied to noisy or irregular objective functions.