From Personalization to Prejudice: Bias and Discrimination in Memory-Enhanced AI Agents for Recruitment
By: Himanshu Gharat, Himanshi Agrawal, Gourab K. Patro
Large Language Models (LLMs) have empowered AI agents with advanced capabilities for understanding, reasoning, and interacting across diverse tasks. The addition of memory further enhances these agents by enabling continuity across interactions, learning from past experiences, and improving the relevance of actions and responses over time, a capability termed memory-enhanced personalization. Although such personalization through memory offers clear benefits, it also introduces risks of bias. While many prior studies have highlighted bias in machine learning models and LLMs, bias arising from memory-enhanced personalized agents remains largely unexplored. Using recruitment as an example use case, we simulate the behavior of a memory-enhanced personalized agent and study whether and how bias is introduced and amplified in and across the various stages of its operation. Our experiments on agents built with safety-trained LLMs reveal that bias is systematically introduced and reinforced through personalization, emphasizing the need for additional protective measures or agent guardrails in memory-enhanced LLM-based AI agents.
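To make the mechanism concrete, the sketch below shows one minimal way a memory-enhanced screening agent could be structured: the agent reads back summaries of its own past decisions, injects them into the prompt, and writes the new decision into memory, creating the feedback loop through which bias can be introduced and then reinforced. This is an illustrative assumption, not the paper's implementation; the class, prompt template, and `llm_complete` callable are all hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryEnhancedScreener:
    """Hypothetical sketch of a memory-enhanced personalized agent
    for resume screening (not the paper's actual code)."""

    memory: list = field(default_factory=list)

    def screen(self, candidate_profile: str, llm_complete) -> str:
        # Personalization step: retrieve recent memories and condition
        # the prompt on them, so past (possibly biased) decisions can
        # steer the current one.
        context = "\n".join(self.memory[-10:])
        prompt = (
            "You are a recruitment assistant.\n"
            f"Past decisions:\n{context}\n\n"
            f"Candidate:\n{candidate_profile}\n"
            "Decision (shortlist/reject) with a one-line reason:"
        )
        decision = llm_complete(prompt)
        # Memory write-back: the new decision becomes context for every
        # future screening, which is how bias can compound over time.
        self.memory.append(f"{candidate_profile} -> {decision}")
        return decision
```

Any chat-completion function can be passed as `llm_complete`; the essential feature is the read-then-write memory loop, not the specific prompt wording.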