Fairness Is Not Enough: Auditing Competence and Intersectional Bias in AI-powered Resume Screening
By: Kevin T Webster
Potential Business Impact:
AI hiring tools can be unfair, or simply bad at their job.
The increasing use of generative AI for resume screening is predicated on the assumption that it offers an unbiased alternative to biased human decision-making. That assumption, however, sidesteps a critical question: are these AI systems fundamentally competent at the evaluative tasks they are meant to perform? This study investigates that question through a two-part audit of eight major AI platforms. Experiment 1 confirmed complex, contextual racial and gender biases, with some models penalizing candidates merely for the presence of demographic signals. Experiment 2, which evaluated core competence, yielded a key insight: some models that appeared unbiased were in fact incapable of performing a substantive evaluation, relying instead on superficial keyword matching. This paper introduces the term "Illusion of Neutrality" to describe this phenomenon, in which an apparent lack of bias is merely a symptom of a model's inability to make meaningful judgments. The study recommends that organizations and regulators adopt a dual-validation framework, auditing AI hiring tools for demographic bias and for demonstrable competence, to ensure they are both equitable and effective.
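As a rough illustration of what such a dual-validation audit could look like in practice, the sketch below pairs a demographic-signal swap test (in the spirit of Experiment 1) with a keyword-versus-substance probe (in the spirit of Experiment 2). It assumes a generic `score_resume` call standing in for whichever screening platform is under audit; all names and templates here are hypothetical and do not come from the paper.

```python
import statistics

def score_resume(resume_text: str) -> float:
    """Placeholder for a call to the AI screening system under audit.
    Assumed to return a suitability score, e.g. in [0, 1]."""
    raise NotImplementedError("Wire this up to the platform being audited.")

def demographic_gap(base_resume: str, signals: list[str],
                    trials: int = 20) -> dict[str, float]:
    """Bias probe (Experiment 1 style, as a rough sketch): score
    otherwise-identical resumes that differ only in a demographic signal
    (a name, a pronoun, an affinity-group line) and report each variant's
    mean score shift against an unsignaled baseline."""
    baseline_text = base_resume.replace("{SIGNAL}", "")
    baseline = statistics.mean(score_resume(baseline_text) for _ in range(trials))
    gaps = {}
    for signal in signals:
        variant = base_resume.replace("{SIGNAL}", signal)
        gaps[signal] = statistics.mean(
            score_resume(variant) for _ in range(trials)) - baseline
    return gaps

def competence_probe(keyword_stuffed: str, substantive: str) -> bool:
    """Competence probe (Experiment 2 style, as a rough sketch): a
    substantively competent evaluator should prefer a genuinely strong
    resume written without the job ad's keywords over a keyword-stuffed
    but weak one; a superficial keyword matcher will not."""
    return score_resume(substantive) > score_resume(keyword_stuffed)
```

A real audit would of course need matched resume templates, many job descriptions, and significance testing over repeated trials; this fragment only fixes the shape of the two checks the dual-validation framework calls for.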
Similar Papers
FAIRE: Assessing Racial and Gender Bias in AI-Driven Resume Evaluations
Computation and Language
Tests AI hiring tools for race and gender bias.
The Illusion of Fairness: Auditing Fairness Interventions with Audit Studies
Artificial Intelligence
Checks whether fairness fixes for hiring AI hold up against real-world bias.
No Thoughts Just AI: Biased LLM Recommendations Limit Human Agency in Resume Screening
Computers and Society
Biased AI recommendations lead people to unfairly favor certain job candidates.