Evaluating LLM Behavior in Hiring: Implicit Weights, Fairness Across Groups, and Alignment with Human Preferences
By: Morgane Hoffmann, Emma Jouffroy, Warren Jouanneau, and more
Potential Business Impact:
Helps hiring AI understand job skills better.
General-purpose Large Language Models (LLMs) show significant potential in recruitment applications, where decisions require reasoning over unstructured text, balancing multiple criteria, and inferring fit and competence from indirect productivity signals. Yet it is still uncertain how LLMs assign importance to each attribute, and whether such assignments are in line with economic principles, recruiter preferences, or broader societal norms. We propose a framework to evaluate an LLM's decision logic in recruitment, drawing on established economic methodologies for analyzing human hiring behavior. We build synthetic datasets from real freelancer profiles and project descriptions from a major European online freelance marketplace and apply a full factorial design to estimate how an LLM weighs different match-relevant criteria when evaluating freelancer-project fit. We identify which attributes the LLM prioritizes and analyze how these weights vary across project contexts and demographic subgroups. Finally, we explain how a comparable experimental setup could be implemented with human recruiters to assess alignment between model and human decisions. Our findings reveal that the LLM weighs core productivity signals, such as skills and experience, but interprets certain features beyond their explicit matching value. While showing minimal average discrimination against minority groups, intersectional effects reveal that productivity signals carry different weights across demographic groups.
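The core idea of the evaluation, a full factorial design over profile attributes whose implicit weights are then recovered by regression, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the attribute names, the stubbed scoring function, and the true weights are all invented assumptions, and in practice the scorer would be a real LLM call over a rendered profile.

```python
# Hedged sketch: full factorial design over (assumed) binary profile
# attributes, a stand-in scorer, and OLS to recover implicit weights.
import itertools
import numpy as np

# Illustrative attributes (assumed, not from the paper).
ATTRIBUTES = ["skill_match", "years_experience_high", "certified", "minority_group"]

def llm_score(profile):
    """Stand-in for an LLM fit rating; replace with a real model call."""
    true_weights = {"skill_match": 4.0, "years_experience_high": 2.5,
                    "certified": 1.0, "minority_group": -0.2}
    return 2.0 + sum(true_weights[a] * v for a, v in profile.items())

# Full factorial design: every combination of attribute levels.
design = [dict(zip(ATTRIBUTES, levels))
          for levels in itertools.product([0, 1], repeat=len(ATTRIBUTES))]

X = np.array([[p[a] for a in ATTRIBUTES] for p in design], dtype=float)
X = np.hstack([np.ones((len(X), 1)), X])  # intercept column
y = np.array([llm_score(p) for p in design])

# OLS recovers the weight the scorer implicitly places on each attribute.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, w in zip(["intercept"] + ATTRIBUTES, coef):
    print(f"{name}: {w:+.2f}")
```

Intersectional effects of the kind the abstract describes would be estimated by adding interaction columns (e.g. the product of a productivity signal and a demographic indicator) to the design matrix before the regression.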
Similar Papers
Distributive Fairness in Large Language Models: Evaluating Alignment with Human Values
CS and Game Theory
Computers learn to share fairly like people.
Small Changes, Large Consequences: Analyzing the Allocational Fairness of LLMs in Hiring Contexts
Computation and Language
AI hiring tools unfairly favor some people.
Evaluating Bias in LLMs for Job-Resume Matching: Gender, Race, and Education
Computation and Language
AI hiring tools still favor certain schools.