Quantum-Inspired Optimization Process for Data Imputation

Published: May 7, 2025 | arXiv ID: 2505.04841v2

By: Nishikanta Mohanty, Bikash K. Behera, Badshah Mukherjee, and more

Potential Business Impact:

Fills in missing clinical data to improve health predictions.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Data imputation is a critical step in data pre-processing, particularly for datasets with missing or unreliable values. This study introduces a novel quantum-inspired imputation framework evaluated on the UCI Diabetes dataset, which contains biologically implausible missing values across several clinical features. The method integrates Principal Component Analysis (PCA) with quantum-assisted rotations, optimized through gradient-free classical optimizers (COBYLA, Simulated Annealing, and Differential Evolution) to reconstruct missing values while preserving statistical fidelity. Reconstructed values are constrained to within ±2 standard deviations of the original feature distributions, avoiding unrealistic clustering around central tendencies. This approach achieves a substantial and statistically significant improvement, including an average reduction of over 85% in Wasserstein distance and Kolmogorov-Smirnov test p-values between 0.18 and 0.22, compared to p-values > 0.99 for classical methods such as Mean, KNN, and MICE. The method also eliminates zero-value artifacts and enhances the realism and variability of imputed data. By combining quantum-inspired transformations with a scalable classical framework, this methodology provides a robust solution for imputation tasks in domains such as healthcare and AI pipelines, where data quality and integrity are crucial.
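The abstract describes the pipeline only at a high level. Below is a minimal, illustrative Python sketch of the general idea, not the authors' implementation: it imputes a single feature directly (rather than working in PCA space), tunes rotation-like angles with a gradient-free optimizer (SciPy's COBYLA) to minimize the Wasserstein distance to the observed distribution, and clips the reconstructed values to ±2 standard deviations. The cosine-based mapping from angles to candidate values, and all variable names, are assumptions made for illustration only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import wasserstein_distance, ks_2samp

rng = np.random.default_rng(seed=42)

# Toy stand-in for one clinical feature: zeros mark biologically implausible
# "missing" entries, as in the Diabetes dataset.
observed = rng.normal(loc=120.0, scale=15.0, size=500)
data = np.concatenate([observed, np.zeros(60)])
missing_mask = data == 0.0

obs = data[~missing_mask]
mu, sigma = obs.mean(), obs.std()

# Fixed random phases; the optimizer tunes global rotation-like angles on top.
phases = rng.uniform(0.0, 2.0 * np.pi, size=missing_mask.sum())

def candidates(angles):
    # Map angles to candidate imputations spread around the observed mean
    # (a heuristic assumed for this sketch, not the paper's circuit).
    offsets = np.mean([np.cos(a + phases) for a in angles], axis=0)
    return mu + 2.0 * sigma * offsets

def cost(angles):
    # Distributional mismatch between observed values and candidate imputations.
    return wasserstein_distance(obs, candidates(angles))

# Gradient-free optimization (COBYLA here; the paper also considers
# Simulated Annealing and Differential Evolution).
result = minimize(cost, x0=np.zeros(4), method="COBYLA")

# Constrain reconstructed values to within +/-2 standard deviations.
filled = np.clip(candidates(result.x), mu - 2.0 * sigma, mu + 2.0 * sigma)
imputed = data.copy()
imputed[missing_mask] = filled

# Evaluate with the same metrics reported in the abstract.
print("Wasserstein distance:", wasserstein_distance(obs, filled))
print("KS-test p-value:", ks_2samp(obs, filled).pvalue)
```

Because the objective matches distributions rather than predicting individual rows, the imputed values retain realistic spread instead of collapsing to the mean, which is the behavior the abstract credits for the reduced Wasserstein distance and the elimination of zero-value artifacts.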

Country of Origin
🇦🇺 Australia

Page Count
13 pages

Category
Physics:
Quantum Physics