Addressing Bias in LLMs: Strategies and Application to Fair AI-based Recruitment

Published: June 13, 2025 | arXiv ID: 2506.11880v1

By: Alejandro Peña, Julian Fierrez, Aythami Morales, et al.

Potential Business Impact:

Mitigates gender bias in AI-based hiring tools.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The use of language technologies in high-stakes settings has increased in recent years, largely driven by the success of Large Language Models (LLMs). However, despite their strong performance, LLMs are susceptible to ethical concerns such as demographic biases, accountability, and privacy. This work analyzes the capacity of Transformer-based systems to learn demographic biases present in the data, using a case study on AI-based automated recruitment. We propose a privacy-enhancing framework that removes gender information from the learning pipeline as a way to mitigate biased behavior in the final tools. Our experiments analyze the influence of data biases on systems built on two different LLMs, and show how the proposed framework effectively prevents trained systems from reproducing the bias in the data.
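The abstract does not detail how gender information is removed from the pipeline, but one common privacy-enhancing technique in this family is to project a protected-attribute direction out of the learned embeddings. The sketch below is a minimal, hypothetical illustration of that idea, assuming a gender direction `v` has already been estimated (e.g., from labeled examples); it is not the paper's exact method.

```python
import numpy as np

def erase_direction(X, v):
    # Normalize the (assumed) protected-attribute direction, e.g. a gender
    # axis estimated from labeled embeddings, then subtract its component
    # from every embedding row so no linear gender signal remains along it.
    u = v / np.linalg.norm(v)
    return X - np.outer(X @ u, u)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))   # toy candidate embeddings
v = rng.normal(size=4)        # hypothetical gender direction
X_clean = erase_direction(X, v)

# After erasure, the embeddings carry no linear component along v.
print(np.allclose(X_clean @ v, 0.0))  # True
```

A downstream recruitment model trained on `X_clean` can no longer exploit the linear gender signal along `v`, though nonlinear or multi-directional encodings of gender would require stronger removal techniques.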

Page Count
11 pages

Category
Computer Science:
Artificial Intelligence