An Elementary Proof of the Near Optimality of LogSumExp Smoothing
By: Thabo Samakhoana, Benjamin Grimmer
Potential Business Impact:
Makes computer math smoother and more accurate.
We consider the design of smoothings of the (coordinate-wise) max function in $\mathbb{R}^d$ in the infinity norm. The LogSumExp function $f(x)=\ln\left(\sum_{i=1}^{d}\exp(x_i)\right)$ provides a classical smoothing, differing from the max function in value by at most $\ln(d)$. We provide an elementary construction of a lower bound, establishing that every overestimating smoothing of the max function must differ by at least $\sim 0.8145\ln(d)$. Hence, LogSumExp is optimal up to constant factors. However, in small dimensions, we provide stronger, exactly optimal smoothings attaining our lower bound, showing that the entropy-based LogSumExp approach to smoothing is not exactly optimal.
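The $\ln(d)$ overestimation bound follows from $\max_i x_i \le \ln\sum_{i=1}^{d} e^{x_i} \le \max_i x_i + \ln(d)$, since the sum is at least its largest term and at most $d$ times it. Below is a minimal numerical sketch (not from the paper) checking this gap; the function name `logsumexp` and the test dimensions are illustrative only.

```python
import numpy as np

def logsumexp(x):
    """Numerically stable LogSumExp: ln(sum_i exp(x_i)).

    Shifting by max(x) before exponentiating avoids overflow and makes
    the bound visible: the result is max(x) + ln(sum_i exp(x_i - max(x))),
    and that inner sum always lies in [1, d].
    """
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

rng = np.random.default_rng(0)
for d in (2, 10, 1000):
    x = rng.normal(size=d)
    gap = logsumexp(x) - np.max(x)
    # 0 <= gap <= ln(d) for every x in R^d.
    assert 0.0 <= gap <= np.log(d)
    print(f"d={d:5d}  gap={gap:.4f}  ln(d)={np.log(d):.4f}")

# The gap attains ln(d) exactly when all coordinates tie the max:
x = np.zeros(5)
print(logsumexp(x) - np.max(x), "==", np.log(5))
```

The constant vector is the worst case: every term of the sum equals $e^{\max_i x_i}$, so the smoothing overshoots the max by exactly $\ln(d)$.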
Similar Papers
Rates of Convergence of Maximum Smoothed Log-Likelihood Estimators for Semi-Parametric Multivariate Mixtures
Statistics Theory
Makes smart guesses about mixed data more reliable.
Differentially Private Learning of Exponential Distributions: Adaptive Algorithms and Tight Bounds
Data Structures and Algorithms
Learns private data patterns without revealing secrets.
Approximating functions on ${\mathbb R}^+$ by exponential sums
Numerical Analysis
Makes math problems with curves easier to solve.