Dynamic Features Adaptation in Networking: Toward Flexible Training and Explainable Inference
By: Yannis Belkhiter, Seshu Tirupathi, Giulio Zizzo, and more
Potential Business Impact:
AI learns new network tricks faster and explains them.
As AI becomes a native component of 6G network control, AI models must adapt to continuously changing conditions, including the introduction of new features and measurements driven by multi-vendor deployments, hardware upgrades, and evolving service requirements. To address this growing need for flexible learning in non-stationary environments, this vision paper highlights Adaptive Random Forests (ARFs) as a reliable solution for dynamic feature adaptation in communication network scenarios. We show that iterative training of ARFs can effectively lead to stable predictions, with accuracy improving over time as more features are added. In addition, we highlight the importance of explainability in AI-driven networks, proposing Drift-Aware Feature Importance (DAFI) as an efficient XAI feature importance (FI) method. DAFI uses a distributional drift detector to signal when to apply computationally intensive FI methods instead of lighter alternatives. Our tests on three different datasets indicate that our approach reduces runtime by up to a factor of two, while producing more consistent feature importance values. Together, ARFs and DAFI provide a promising framework for building flexible AI methods adapted to 6G network use cases.
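The abstract's core DAFI idea — run a cheap feature-importance method by default and pay for an expensive one only when a distributional drift detector fires — can be sketched as follows. The paper does not specify which detector or FI methods it pairs; in this illustrative sketch the detector is a per-feature two-sample KS test, the cheap method is the cached impurity-based importance of a random forest, and the costly method is permutation importance. All function names here are hypothetical, not from the paper.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def drift_detected(X_ref, X_new, alpha=0.001):
    """Flag drift if any feature's distribution shifts (two-sample KS test).

    Stand-in for the paper's distributional drift detector.
    """
    return any(
        ks_2samp(X_ref[:, j], X_new[:, j]).pvalue < alpha
        for j in range(X_ref.shape[1])
    )

def dafi_importance(model, X_ref, X_new, y_new, cached_fi=None):
    """Return (feature_importances, used_expensive_method)."""
    if cached_fi is not None and not drift_detected(X_ref, X_new):
        # No drift detected: reuse the cheap, cached FI values.
        return cached_fi, False
    # Drift (or first call): run the heavier permutation-based FI.
    result = permutation_importance(
        model, X_new, y_new, n_repeats=5, random_state=0
    )
    return result.importances_mean, True

# Usage sketch on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
cheap_fi = model.feature_importances_           # light impurity-based FI

X_same = rng.normal(size=(200, 4))              # same distribution
y_same = (X_same[:, 0] + 0.5 * X_same[:, 1] > 0).astype(int)
fi_same, paid_same = dafi_importance(model, X, X_same, y_same, cheap_fi)

X_drift = X_same + np.array([3.0, 0.0, 0.0, 0.0])   # shift feature 0
y_drift = (X_drift[:, 0] + 0.5 * X_drift[:, 1] > 0).astype(int)
fi_drift, paid_drift = dafi_importance(model, X, X_drift, y_drift, cheap_fi)
```

On the shifted batch the detector fires and permutation importance runs; on the in-distribution batch the cached values are returned, which is where the runtime savings the paper reports would come from.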
Similar Papers
An explainable Recursive Feature Elimination to detect Advanced Persistent Threats using Random Forest classifier
Cryptography and Security
Finds hidden computer attacks with clear reasons.
Adaptive Forests For Classification
Machine Learning (CS)
Makes computer predictions smarter by changing how it learns.
Interpretable Network-assisted Random Forest+
Machine Learning (Stat)
Shows how computers learn from connected data.