Unveiling Statistical Significance of Online Regression over Multiple Datasets
By: Mohammad Abu-Shaira, Weishi Shi
Despite extensive focus on techniques for evaluating the performance of two learning algorithms on a single dataset, the critical challenge of developing statistical tests to compare multiple algorithms across multiple datasets has been largely overlooked in machine learning research. Additionally, in the realm of Online Learning, ensuring statistical significance is essential to validate continuous learning processes, particularly for achieving rapid convergence and handling concept drift in a timely manner. Robust statistical methods are needed to assess the significance of performance differences as data evolves over time. This article examines state-of-the-art online regression models and empirically evaluates several suitable tests. To compare multiple online regression models across various datasets, we employ the Friedman test along with corresponding post-hoc tests. For thorough evaluation, we use both real and synthetic datasets with 5-fold cross-validation and seed averaging, ensuring comprehensive assessment across varied data subsets. Our tests generally confirm that the performance of competitive baselines is consistent with their individually reported results. However, some statistical test results also indicate that there is still room for improvement in certain aspects of state-of-the-art methods.
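To make the evaluation protocol concrete, below is a minimal sketch of the Friedman test with a Nemenyi post-hoc analysis in the style of Demšar (2006). This is not the authors' code: the error matrix, the number of models, and the tabulated q value for k = 3 are illustrative assumptions; SciPy supplies the Friedman statistic.

```python
# Minimal sketch: Friedman test + Nemenyi critical difference (Demsar, 2006).
# The error matrix is hypothetical: rows = datasets, columns = online regressors.
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

errors = np.array([            # hypothetical mean errors (lower is better)
    [0.21, 0.25, 0.30],        # dataset 1: models A, B, C
    [0.18, 0.20, 0.28],
    [0.35, 0.33, 0.41],
    [0.12, 0.16, 0.15],
    [0.27, 0.30, 0.34],
])
n_datasets, k_models = errors.shape

# Friedman test: does at least one model's performance differ significantly?
stat, p = friedmanchisquare(*errors.T)
print(f"Friedman chi2 = {stat:.3f}, p = {p:.4f}")

# Average rank of each model across datasets (rank 1 = best on a dataset).
ranks = np.apply_along_axis(rankdata, 1, errors)
avg_ranks = ranks.mean(axis=0)

# Nemenyi post-hoc: q_0.05 = 2.343 is the tabulated value for k = 3 models.
q_alpha = 2.343
cd = q_alpha * np.sqrt(k_models * (k_models + 1) / (6 * n_datasets))
print(f"average ranks = {avg_ranks}, critical difference = {cd:.3f}")
```

Two models are declared significantly different when their average ranks differ by more than the critical difference, which is the comparison underlying the critical-difference diagrams commonly used in multi-dataset studies of this kind.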