Challenges of Heterogeneity in Big Data: A Comparative Study of Classification in Large-Scale Structured and Unstructured Domains
By: González Trigueros Jesús Eduardo, Alonso Sánchez Alejandro, Muñoz Rivera Emilio, et al.
Potential Business Impact:
Identifies which machine-learning methods work best for different kinds of data, guiding algorithm choice for both structured tables and large text collections.
This study analyzes the impact of heterogeneity ("Variety") in Big Data by comparing classification strategies across structured (Epsilon) and unstructured (Rest-Mex, IMDB) domains. A dual methodology was implemented: evolutionary and Bayesian hyperparameter optimization (Genetic Algorithms, Optuna) in Python for numerical data, and distributed processing in Apache Spark for massive textual corpora. The results reveal a "complexity paradox": in high-dimensional spaces, optimized linear models (SVM, Logistic Regression) outperformed deep architectures and Gradient Boosting. Conversely, in text-based domains, the constraints of distributed fine-tuning led to overfitting in complex models, whereas robust feature engineering, specifically Transformer-based embeddings (RoBERTa) and Bayesian Target Encoding, enabled simpler models to generalize effectively. This work provides a unified framework for algorithm selection based on the nature of the data and infrastructure constraints.
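As a rough illustration of the Bayesian hyperparameter-optimization step described above (not the authors' code), the following sketch uses Optuna to tune the regularization strength of a logistic-regression classifier on a synthetic high-dimensional dataset standing in for Epsilon. The dataset, parameter range, and cross-validation setup are illustrative assumptions.

# Minimal sketch, assuming scikit-learn and Optuna; all settings are hypothetical.
import optuna
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in for a structured, high-dimensional numerical dataset (Epsilon-like).
X, y = make_classification(n_samples=2000, n_features=500, n_informative=50,
                           random_state=42)

def objective(trial):
    # Search the regularization strength C on a log scale.
    C = trial.suggest_float("C", 1e-4, 1e2, log=True)
    clf = LogisticRegression(C=C, penalty="l2", solver="liblinear", max_iter=1000)
    # 3-fold cross-validated accuracy is the optimization target.
    return cross_val_score(clf, X, y, cv=3, scoring="accuracy").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print("Best params:", study.best_params, "CV accuracy:", study.best_value)

The same pattern extends to the other linear models compared in the study (e.g., an SVM with a tuned C), with the paper's point being that such carefully optimized linear baselines can outperform deeper architectures in very high-dimensional structured settings.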