Quantitative Verification of Fairness in Tree Ensembles
By: Zhenjiang Zhao, Takahisa Toda, Takashi Kitamura
This work focuses on quantitative verification of fairness in tree ensembles. Unlike traditional verification approaches, which merely return a single counterexample when fairness is violated, quantitative verification estimates the ratio of counterexamples and characterizes the regions where they occur, information that is important for diagnosing and mitigating bias. To date, quantitative verification has been explored almost exclusively for deep neural networks (DNNs). Representative methods, such as DeepGemini and FairQuant, build on the core idea of Counterexample-Guided Abstraction Refinement (CEGAR), a generic framework that can be adapted to other model classes. We extended this framework into a model-agnostic form but discovered two limitations: (i) it provides only lower bounds, and (ii) it scales poorly. Exploiting the discrete structure of tree ensembles, we propose an efficient quantification technique that delivers anytime upper and lower bounds. Experiments on five widely used datasets demonstrate its effectiveness and efficiency. When applied to fairness testing, our quantification method significantly outperforms state-of-the-art testing techniques.
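The abstract does not spell out the algorithm, but the key observation it relies on is that a tree ensemble's split thresholds partition the input space into finitely many cells, and every input in a cell receives the same prediction. A minimal Python sketch of how that discrete structure can yield anytime upper and lower bounds on the fairness-violation ratio is given below. It is an illustration of the general idea only, not the paper's method: the toy trees, the function names (tree1, tree2, ensemble, violation_ratio_bounds), and the equal-weight-per-cell assumption are all hypothetical.

```python
# Illustrative sketch (not the paper's algorithm): anytime bounds on the
# fraction of inputs violating individual fairness for a toy tree ensemble.
from itertools import product

# Toy trees over (x0, x1, a), where `a` is a binary protected attribute.
def tree1(x0, x1, a):
    return 1 if x0 >= 2 else 0

def tree2(x0, x1, a):
    return 1 if (x1 >= 1 and a == 1) else 0  # depends on the protected attribute

def ensemble(x0, x1, a):
    # Majority-style vote: positive if at least one tree votes positive.
    return 1 if tree1(x0, x1, a) + tree2(x0, x1, a) >= 1 else 0

# The split thresholds (x0 >= 2, x1 >= 1) induce a finite grid of cells;
# one representative point per cell suffices to decide the whole cell.
X0_CELLS = [0, 2]  # representatives of x0 < 2 and x0 >= 2
X1_CELLS = [0, 1]  # representatives of x1 < 1 and x1 >= 1

def violation_ratio_bounds():
    """Yield (lower, upper) bounds after each cell is resolved.

    Assumes, for simplicity, that every cell carries equal probability mass.
    Each resolved cell either raises the lower bound (certainly violating)
    or lowers the upper bound (certainly fair), so the bounds are valid
    at any point of the enumeration -- the "anytime" property.
    """
    lower, upper = 0.0, 1.0
    total = len(X0_CELLS) * len(X1_CELLS)
    for x0, x1 in product(X0_CELLS, X1_CELLS):
        # A cell violates individual fairness if flipping `a` flips the output.
        if ensemble(x0, x1, 0) != ensemble(x0, x1, 1):
            lower += 1.0 / total
        else:
            upper -= 1.0 / total
        yield lower, upper

for lo, hi in violation_ratio_bounds():
    print(f"lower={lo:.2f}  upper={hi:.2f}")
```

On this toy ensemble the bounds converge to the exact ratio (0.25) once all cells are enumerated; stopping early still returns sound, if looser, bounds, which is the contrast the abstract draws with CEGAR-style methods that produce only lower bounds.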