Gender Bias in LLMs: Preliminary Evidence from a Shared Parenting Scenario in Czech Family Law
By: Jakub Harasta, Matej Vasina, Martin Kornel, and more
Potential Business Impact:
AI may give legal advice that varies unfairly by gender.
Access to justice remains limited for many people, leading laypersons to increasingly rely on Large Language Models (LLMs) for legal self-help. Laypeople use these tools intuitively, which may lead them to form expectations based on incomplete, incorrect, or biased outputs. This study examines whether leading LLMs exhibit gender bias in their responses to a realistic family law scenario. We present an expert-designed divorce scenario grounded in Czech family law and evaluate four state-of-the-art LLMs (GPT-5 nano, Claude Haiku 4.5, Gemini 2.5 Flash, and Llama 3.3) in a fully zero-shot interaction. We deploy two versions of the scenario, one with gendered names and one with neutral labels, to establish a baseline for comparison. We further introduce nine legally relevant factors that vary the factual circumstances of the case and test whether these variations influence the models' proposed shared-parenting ratios. Our preliminary results highlight differences across models and suggest gender-dependent patterns in the outcomes generated by some systems. The findings underscore both the risks associated with laypeople's reliance on LLMs for legal guidance and the need for more robust evaluation of model behavior in sensitive legal contexts. We present exploratory and descriptive evidence intended to identify systematic asymmetries rather than to establish causal effects.
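The abstract describes a concrete evaluation protocol: four models, two scenario framings (gendered names vs. neutral labels), and nine fact-varying factors, all queried zero-shot and scored by the proposed shared-parenting ratio. As a rough illustration of what such an evaluation grid could look like, here is a minimal Python sketch; the model identifiers, names, factor labels, prompt wording, and the `query_model` callable are all hypothetical placeholders, not the authors' actual materials.

```python
# Hypothetical sketch of the gendered-vs-neutral zero-shot grid described in the
# abstract. All labels and wording below are illustrative assumptions, not the
# study's actual prompts, factors, or code.
import itertools
import re
from typing import Callable

MODELS = ["gpt-5-nano", "claude-haiku-4.5", "gemini-2.5-flash", "llama-3.3"]

# Two scenario framings: gendered names vs. neutral labels (assumed wording).
FRAMINGS = {
    "gendered": ("Petr", "Jana"),
    "neutral": ("Parent A", "Parent B"),
}

# Nine legally relevant factors varying the facts of the case (illustrative
# labels only; the paper defines its own expert-designed factors).
FACTORS = [
    "income_disparity", "primary_caregiver_history", "work_travel",
    "child_preference", "housing_stability", "distance_between_homes",
    "new_partner", "health_issues", "prior_informal_arrangement",
]

def build_prompt(parent_a: str, parent_b: str, factor: str) -> str:
    """Compose a zero-shot prompt for one scenario variant."""
    return (
        f"{parent_a} and {parent_b} are divorcing under Czech family law. "
        f"Relevant circumstance: {factor.replace('_', ' ')}. "
        "Propose a shared-parenting ratio as 'X:Y' (days with each parent)."
    )

def parse_ratio(text: str) -> tuple[int, int] | None:
    """Extract the first 'X:Y' ratio from a model response, if present."""
    m = re.search(r"(\d+)\s*:\s*(\d+)", text)
    return (int(m.group(1)), int(m.group(2))) if m else None

def run_grid(query_model: Callable[[str, str], str]) -> list[dict]:
    """Query every model on every framing x factor cell and collect ratios.

    `query_model(model, prompt)` is a stand-in for each provider's API call.
    """
    rows = []
    for model, (framing, names), factor in itertools.product(
        MODELS, FRAMINGS.items(), FACTORS
    ):
        prompt = build_prompt(*names, factor)
        ratio = parse_ratio(query_model(model, prompt))
        rows.append({"model": model, "framing": framing,
                     "factor": factor, "ratio": ratio})
    return rows
```

In practice, `query_model` would wrap each provider's API, and each cell would typically be sampled multiple times so that gendered and neutral framings can be compared distributionally rather than from single responses, in line with the paper's descriptive, non-causal framing.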
Similar Papers
Benchmarking Educational LLMs with Analytics: A Case Study on Gender Bias in Feedback
Computation and Language
Finds unfairness in AI teacher feedback.
Who Gets Cited? Gender- and Majority-Bias in LLM-Driven Reference Selection
Digital Libraries
AI unfairly favors male authors when selecting references.
An Empirical Investigation of Gender Stereotype Representation in Large Language Models: The Italian Case
Computation and Language
AI chatbots show gender bias in jobs.