An Innovative Algorithm For Robust, Interactive, Piecewise-Linear Data Exploration
By: Stephen Wright, Colin Paterson
Potential Business Impact:
Finds hidden patterns in messy real-world data.
Many mathematical modelling tasks (such as in Economics and Finance) are informed by data that is "found" rather than the result of carefully designed experiments. This often yields data series that are short, noisy, multidimensional, and contaminated with outliers, regime shifts, and confounding, uninformative, or collinear variables. We present a generalization of the Theil-Sen algorithm that reflects modes (rather than the median) of the parameter-space distribution of partial fits to the data. This provides a robust piecewise-linear fit while also allowing extensions that incorporate cluster analysis, regularization, and cross-validation in a unified, distribution-free approach that can:
1. Exploit piecewise linearity to reduce the need to pre-specify the form of the underlying data-generating process.
2. Detect non-homogeneity (e.g. regime shifts, multiple data-generating processes) in the data using an innovative non-parametric (Hamming-distance/affinity-matrix) cluster analysis technique.
3. Enable dimension reduction and resistance to the effects of multicollinearity by including LASSO regularization as an integral part of the algorithm.
4. Estimate measures of accuracy, such as standard errors, bias, and confidence intervals, without relying on traditional distributional assumptions.
Taken together, these extensions to the traditional Theil-Sen algorithm simplify the usual process of parameter fitting by providing a single-stage analysis controlled by a multidimensional search over Scale/Parsimony/Precision hyper-parameters. This research is at an early stage, and the main limitation of the approach is that it assumes compute power is effectively unlimited and compute time is small enough to allow interactive use.
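To make the core idea concrete, the sketch below shows a minimal one-dimensional illustration of a "mode-based" Theil-Sen fit: slopes from all pairwise partial fits are collected, and the mode of that distribution (estimated here with a kernel density estimate) is taken instead of the median, so a dominant regime can be recovered even when a second regime or heavy outlier contamination would shift the median. This is an assumed illustration of the general idea, not the authors' algorithm; the function name, the KDE-based mode finder, and all parameters are hypothetical choices.

```python
import numpy as np
from itertools import combinations
from scipy.stats import gaussian_kde


def theil_sen_mode(x, y, bandwidth=None):
    """Illustrative 1-D sketch: pairwise slopes, then the MODE of their
    distribution (classic Theil-Sen would take the median).

    Hypothetical example only; not the authors' implementation.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)

    # Slope of every pairwise "partial fit".
    slopes = np.array([
        (y[j] - y[i]) / (x[j] - x[i])
        for i, j in combinations(range(len(x)), 2)
        if x[j] != x[i]
    ])

    # Mode of the slope distribution via a kernel density estimate.
    kde = gaussian_kde(slopes, bw_method=bandwidth)
    grid = np.linspace(slopes.min(), slopes.max(), 512)
    slope = grid[np.argmax(kde(grid))]

    # Intercept from the mode of the per-point intercept distribution.
    intercepts = y - slope * x
    kde_b = gaussian_kde(intercepts, bw_method=bandwidth)
    grid_b = np.linspace(intercepts.min(), intercepts.max(), 512)
    intercept = grid_b[np.argmax(kde_b(grid_b))]

    return slope, intercept


if __name__ == "__main__":
    # Two data-generating regimes: the mode recovers the dominant slope,
    # whereas the median of the pairwise slopes would be pulled between them.
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 10.0, 60)
    y = 2.0 * x + 1.0 + rng.normal(0.0, 0.2, 60)
    y[:15] = -1.0 * x[:15] + rng.normal(0.0, 0.2, 15)  # contaminating regime
    print(theil_sen_mode(x, y))
```

Fitting each detected cluster separately, adding LASSO-style penalties, or resampling the partial fits for error estimates would extend this same pattern, but those steps are beyond this sketch.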
Similar Papers
Contributions to Robust and Efficient Methods for Analysis of High Dimensional Data
Statistics Theory
Finds important patterns in huge, messy data.
Simulation-Based Fitting of Intractable Models via Sequential Sampling and Local Smoothing
Methodology
Helps computers learn from complex, unknown models.
High-dimensional Longitudinal Inference via a De-sparsified Dantzig-Selector
Methodology
Helps scientists understand how genes affect traits.