Randomized Dimensionality Reduction for Euclidean Maximization and Diversity Measures
By: Jie Gao, Rajesh Jayaram, Benedikt Kolbe, and more
Potential Business Impact:
Solves computer problems faster by shrinking the data.
Randomized dimensionality reduction is a widely used algorithmic technique for speeding up large-scale Euclidean optimization problems. In this paper, we study dimension reduction for a variety of maximization problems, including max-matching, max-spanning tree, max TSP, as well as various measures for dataset diversity. For these problems, we show that the effect of dimension reduction is intimately tied to the \emph{doubling dimension} $\lambda_X$ of the underlying dataset $X$ -- a quantity measuring the intrinsic dimensionality of point sets. Specifically, we prove that a target dimension of $O(\lambda_X)$ suffices to approximately preserve the value of any near-optimal solution, which we also show is necessary for some of these problems. This is in contrast to classical dimension reduction results, whose dependence increases with the dataset size $|X|$. We also provide empirical results validating the quality of solutions found in the projected space, as well as speedups due to dimensionality reduction.
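The kind of experiment the abstract describes can be sketched in a few lines: project the points with a random Gaussian map (the standard Johnson-Lindenstrauss-style construction, not necessarily the paper's exact setup), solve a maximization problem such as max-spanning tree in both spaces, and compare the objective values. The target dimension of 32 below is an illustrative choice, not a value taken from the paper.

```python
import numpy as np

def gaussian_projection(X, k, rng):
    """Map n x d points to k dimensions with a random Gaussian matrix,
    scaled so squared distances are preserved in expectation."""
    d = X.shape[1]
    G = rng.standard_normal((d, k)) / np.sqrt(k)
    return X @ G

def max_spanning_tree_value(X):
    """Total edge weight of a maximum spanning tree on the complete
    Euclidean graph over the rows of X (Prim's algorithm, maximizing)."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = dist[0].copy()  # heaviest edge from the tree to each vertex
    total = 0.0
    for _ in range(n - 1):
        cand = np.where(in_tree, -np.inf, best)
        j = int(np.argmax(cand))   # attach the vertex with the heaviest edge
        total += cand[j]
        in_tree[j] = True
        best = np.maximum(best, dist[j])
    return total

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 512))      # high ambient dimension
Y = gaussian_projection(X, 32, rng)      # modest target dimension

v_orig = max_spanning_tree_value(X)
v_proj = max_spanning_tree_value(Y)
print(f"max-ST value ratio after projection: {v_proj / v_orig:.3f}")
```

For i.i.d. Gaussian data the ratio lands close to 1, illustrating that the max-spanning-tree value survives a fairly aggressive projection; the paper's point is that the dimension needed for such guarantees scales with the doubling dimension of the data rather than with $|X|$.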
Similar Papers
A general framework for adaptive nonparametric dimensionality reduction
Machine Learning (Stat)
Finds best way to show complex data simply.
Fair Diversity Maximization with Few Representatives
Data Structures and Algorithms
Picks fair, varied items, representing all groups.
A Novel Approach for Intrinsic Dimension Estimation
Machine Learning (CS)
Makes big data easier for computers to understand.