
An Elementary Proof of the Near Optimality of LogSumExp Smoothing

Published: December 11, 2025 | arXiv ID: 2512.10825v1

By: Thabo Samakhoana, Benjamin Grimmer

BigTech Affiliations: Johns Hopkins University

Potential Business Impact:

Provides near-optimal smooth approximations of the max function, a building block for differentiable optimization and machine-learning objectives.

Business Areas:
A/B Testing Data and Analytics

We consider the design of smoothings of the (coordinate-wise) max function in $\mathbb{R}^d$ in the infinity norm. The LogSumExp function $f(x)=\ln\left(\sum_{i=1}^d \exp(x_i)\right)$ provides a classical smoothing, differing from the max function in value by at most $\ln(d)$. We provide an elementary construction of a lower bound, establishing that every overestimating smoothing of the max function must differ by at least $\sim 0.8145\ln(d)$. Hence, LogSumExp is optimal up to constant factors. However, in small dimensions, we provide stronger, exactly optimal smoothings attaining our lower bound, showing that the entropy-based LogSumExp approach to smoothing is not exactly optimal.
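The $\ln(d)$ overestimation bound from the abstract is easy to check numerically. The sketch below, a minimal NumPy illustration (not code from the paper), evaluates a numerically stable LogSumExp and verifies that it exceeds the coordinate-wise max by at most $\ln(d)$:

```python
import numpy as np

def logsumexp(x):
    # Numerically stable LogSumExp: shift by the max before exponentiating,
    # so that f(x) = ln(sum_i exp(x_i)) never overflows.
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

x = np.array([1.0, 2.0, 3.0, 4.0])
gap = logsumexp(x) - np.max(x)

# LogSumExp always overestimates the max, by at most ln(d).
assert 0.0 < gap <= np.log(len(x))
```

The gap is largest when all coordinates are equal (exactly $\ln(d)$, since $\ln(d\cdot e^{x_1}) - x_1 = \ln d$), which is the tight case for the classical bound.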

Country of Origin
🇺🇸 United States

Page Count
10 pages

Category
Mathematics:
Statistics Theory