How many integrals should be evaluated at least in two-dimensional hyperinterpolation?

Published: October 15, 2025 | arXiv ID: 2510.13204v1

By: Maolin Che, Congpei An, Yimin Wei, and more

Potential Business Impact:

Cuts the computational cost of approximating high-dimensional functions, letting computers solve hard math problems faster.

Business Areas:
Big Data, Data and Analytics

This paper introduces a novel approach to approximating continuous functions over high-dimensional hypercubes by integrating matrix CUR decomposition with hyperinterpolation techniques. Traditional Fourier-based hyperinterpolation methods suffer from the curse of dimensionality, as the number of coefficients grows exponentially with the dimension. To address this challenge, we propose two efficient strategies for constructing low-rank matrix CUR decompositions of the coefficient matrix, significantly reducing computational complexity while preserving accuracy. The first method employs structured index selection to form a compressed representation of the tensor, while the second utilizes adaptive sampling to further optimize storage and computation. Theoretical error bounds are derived for both approaches, ensuring rigorous control over approximation quality. Additionally, practical algorithms -- including randomized and adaptive decomposition techniques -- are developed to efficiently compute the CUR decomposition. Numerical experiments demonstrate the effectiveness of our methods in drastically reducing the number of required coefficients without compromising precision. Our results bridge matrix/tensor decomposition and function approximation, offering a scalable solution for high-dimensional problems. This work advances the field of numerical analysis by providing a computationally efficient framework for hyperinterpolation, with potential applications in scientific computing, machine learning, and data-driven modeling.
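The core compression idea can be illustrated with a minimal CUR decomposition sketch: a matrix is approximated from a subset of its own columns (C) and rows (R), linked by the pseudoinverse of their intersection (U). The uniform random index selection below is purely illustrative and is not one of the paper's two proposed strategies (structured index selection, adaptive sampling); when the sampled intersection has the same rank as the matrix, the reconstruction is exact.

```python
import numpy as np

def cur_decomposition(A, k, seed=None):
    """Approximate A ~ C @ U @ R from k sampled columns and rows.

    Illustrative sketch: indices are drawn uniformly at random,
    not via the paper's structured or adaptive selection schemes.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    cols = rng.choice(n, size=k, replace=False)   # sampled column indices
    rows = rng.choice(m, size=k, replace=False)   # sampled row indices
    C = A[:, cols]                                # column skeleton
    R = A[rows, :]                                # row skeleton
    U = np.linalg.pinv(A[np.ix_(rows, cols)])     # core linking matrix
    return C, U, R

# Example: an exactly rank-3 "coefficient matrix" is recovered
# (up to rounding) from only 3 columns and 3 rows.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
C, U, R = cur_decomposition(A, k=3, seed=0)
err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
```

Storage drops from 50 x 40 entries to two thin skeletons plus a 3 x 3 core, which is the same mechanism the paper exploits to avoid forming the exponentially large hyperinterpolation coefficient matrix in full.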

Country of Origin
🇨🇳 China

Page Count
37 pages

Category
Mathematics:
Numerical Analysis