A Multi-Component Reward Function with Policy Gradient for Automated Feature Selection with Dynamic Regularization and Bias Mitigation
By: Sudip Khadka, L. S. Paudel
Potential Business Impact:
Makes AI fair by choosing the right information.
Static feature-exclusion strategies often fail to prevent bias when hidden dependencies influence model predictions. To address this issue, we explore a reinforcement learning (RL) framework that integrates bias mitigation and automated feature selection within a single learning process. Unlike traditional heuristic-driven filter or wrapper approaches, our RL agent adaptively selects features using a reward signal that explicitly combines predictive performance with fairness considerations. This dynamic formulation allows the model to balance generalization, accuracy, and equity throughout training, rather than relying exclusively on pre-processing adjustments or post hoc correction mechanisms. In this paper, we describe the construction of a multi-component reward function, the specification of the agent's action space over feature subsets, and the integration of this system with ensemble learning. We aim to provide a flexible and generalizable way to select features in environments where predictors are correlated and biases can inadvertently re-emerge.
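The abstract does not give the exact reward formula or update rule, so the sketch below is one plausible instantiation under stated assumptions: a multi-component reward equal to accuracy minus a demographic-parity gap and a sparsity penalty, optimized by REINFORCE over independent Bernoulli inclusion probabilities per feature, with a random-forest ensemble as the downstream model. All variable names, penalty weights, and the toy dataset are illustrative assumptions, not the authors' specification.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# --- Toy data (illustrative assumption, not the paper's benchmark) ---------
n, d = 600, 10
X = rng.normal(size=(n, d))
sensitive = (rng.random(n) < 0.5).astype(int)       # protected attribute
X[:, 3] = sensitive + 0.1 * rng.normal(size=n)      # hidden proxy feature
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0)

def reward(selected):
    """Multi-component reward: accuracy, minus a fairness penalty
    (demographic-parity gap), minus a sparsity penalty on subset size."""
    if not selected.any():
        return -1.0                                  # discourage empty subsets
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X_tr[:, selected], y_tr)
    pred = clf.predict(X_te[:, selected])
    acc = (pred == y_te).mean()
    dp_gap = abs(pred[s_te == 0].mean() - pred[s_te == 1].mean())
    sparsity = selected.sum() / d
    return acc - 1.0 * dp_gap - 0.1 * sparsity       # weights are assumptions

# --- REINFORCE over Bernoulli feature-inclusion probabilities --------------
logits = np.zeros(d)
lr, baseline = 0.5, 0.0
for step in range(100):
    p = 1.0 / (1.0 + np.exp(-logits))                # inclusion probabilities
    mask = rng.random(d) < p                         # sample a feature subset
    r = reward(mask)
    baseline = 0.9 * baseline + 0.1 * r              # moving-average baseline
    grad = mask.astype(float) - p                    # grad of log Bernoulli prob
    logits += lr * (r - baseline) * grad             # policy-gradient update

print("final inclusion probabilities:", np.round(1 / (1 + np.exp(-logits)), 2))
```

In this sketch, the fairness term steers the policy away from the proxy feature (column 3) even though excluding it costs little accuracy; the sparsity term plays the role of the dynamic regularization named in the title. The relative weights on the three components would, in practice, be tuned or scheduled during training.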
Similar Papers
Automation and Feature Selection Enhancement with Reinforcement Learning (RL)
Machine Learning (CS)
Teaches computers to pick the best information faster.
Heterogeneous Multi-Agent Reinforcement Learning with Attention for Cooperative and Scalable Feature Transformation
Machine Learning (CS)
Makes computers better at finding patterns in data.
A Mathematical Framework for Custom Reward Functions in Job Application Evaluation using Reinforcement Learning
Machine Learning (CS)
Helps hiring software find better job candidates.