Score: 2

Dynamic Features Adaptation in Networking: Toward Flexible Training and Explainable Inference

Published: October 9, 2025 | arXiv ID: 2510.08303v1

By: Yannis Belkhiter, Seshu Tirupathi, Giulio Zizzo, and more

BigTech Affiliations: IBM

Potential Business Impact:

AI learns new network tricks faster and explains them.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

As AI becomes a native component of 6G network control, AI models must adapt to continuously changing conditions, including the introduction of new features and measurements driven by multi-vendor deployments, hardware upgrades, and evolving service requirements. To address this growing need for flexible learning in non-stationary environments, this vision paper highlights Adaptive Random Forests (ARFs) as a reliable solution for dynamic feature adaptation in communication network scenarios. We show that iterative training of ARFs can effectively lead to stable predictions, with accuracy improving over time as more features are added. In addition, we highlight the importance of explainability in AI-driven networks, proposing Drift-Aware Feature Importance (DAFI) as an efficient XAI feature importance (FI) method. DAFI uses a distributional drift detector to signal when to apply computationally intensive FI methods instead of lighter alternatives. Our tests on three different datasets indicate that our approach reduces runtime by up to a factor of 2, while producing more consistent feature importance values. Together, ARFs and DAFI provide a promising framework to build flexible AI methods adapted to 6G network use cases.
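The DAFI gating idea described above can be sketched in a few lines: a distributional drift detector watches incoming measurements, and only when it fires does the pipeline invoke the expensive feature-importance method, otherwise falling back to a cheap one. The detector, thresholds, and FI placeholders below are illustrative assumptions, not the paper's actual algorithm.

```python
import statistics

def drift_detected(reference, window, z_threshold=3.0):
    """Illustrative drift detector: flag drift when the new window's mean
    deviates from the reference distribution by more than z_threshold
    standard errors. (A stand-in for the paper's distributional detector.)"""
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference) or 1e-9
    z = abs(statistics.mean(window) - mu) / (sigma / len(window) ** 0.5)
    return z > z_threshold

def light_fi(features):
    """Placeholder for a cheap FI method (e.g., impurity-based scores)."""
    return {name: 1.0 / len(features) for name in features}

def heavy_fi(features):
    """Placeholder for a costly FI method (e.g., permutation importance)."""
    return {name: 1.0 / len(features) for name in features}

def dafi_step(reference, window, features):
    """Run the expensive FI only when the drift detector fires."""
    if drift_detected(reference, window):
        return heavy_fi(features), "heavy"
    return light_fi(features), "light"

# Deterministic toy data: a stable window matches the reference
# distribution; a shifted window simulates a measurement drift.
reference = [float(i % 10) for i in range(500)]
stable = [float(i % 10) for i in range(50)]          # mean identical to reference
shifted = [float(i % 10) + 5.0 for i in range(50)]   # mean shifted by +5

_, mode_stable = dafi_step(reference, stable, ["f1", "f2"])
_, mode_shifted = dafi_step(reference, shifted, ["f1", "f2"])
print(mode_stable, mode_shifted)  # → light heavy
```

The design point is that the heavy FI method's cost is paid only on distribution change, which is what lets DAFI cut runtime while keeping FI values consistent between drifts.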

Country of Origin
🇺🇸 🇮🇪 United States, Ireland

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)