Clustering Discourses: Racial Biases in Short Stories about Women Generated by Large Language Models
By: Gustavo Bonil, João Gondim, Marina dos Santos, and more
Potential Business Impact:
Shows how AI-generated stories can spread unfair racial ideas about women.
This study investigates how large language models, in particular LLaMA 3.2-3B, construct narratives about Black and white women in short stories generated in Portuguese. From a corpus of 2,100 generated texts, we applied computational methods to group semantically similar stories, enabling the selection of representative texts for qualitative analysis. Three main discursive representations emerge: social overcoming, ancestral mythification, and subjective self-realization. The analysis uncovers how grammatically coherent, seemingly neutral texts materialize a crystallized, colonially structured framing of the female body, reinforcing historical inequalities. The study proposes an integrated approach that combines machine learning techniques with qualitative, manual discourse analysis.
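The abstract describes grouping the 2,100 Portuguese stories by semantic similarity before selecting texts for qualitative discourse analysis. Below is a minimal sketch of such a pipeline, assuming a multilingual sentence encoder and k-means clustering; the model name, the number of clusters, and the centroid-based selection of representatives are illustrative assumptions, not the paper's stated method.

# Minimal sketch: cluster generated Portuguese stories and pick
# representatives for manual discourse analysis. The embedding model,
# k, and selection rule are assumptions for illustration.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
import numpy as np

# Stand-in corpus; in the study this would be the 2,100 generated texts.
stories = [f"história gerada {i}" for i in range(12)]

# Multilingual encoder (an assumption; any Portuguese-capable model works).
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = model.encode(stories, normalize_embeddings=True)

# k = 3 mirrors the three discursive representations the paper reports,
# but the actual cluster count used is not specified in the abstract.
k = 3
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)

# For each cluster, surface the stories nearest the centroid as
# candidates for qualitative analysis.
for c in range(k):
    idx = np.where(kmeans.labels_ == c)[0]
    dists = np.linalg.norm(embeddings[idx] - kmeans.cluster_centers_[c], axis=1)
    closest = idx[np.argsort(dists)[:5]]
    print(f"Cluster {c}: candidate stories for manual analysis -> {closest.tolist()}")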
Similar Papers
Yet another algorithmic bias: A Discursive Analysis of Large Language Models Reinforcing Dominant Discourses on Gender and Race
Computation and Language
Finds AI stories reinforce unfair ideas about gender and race.
More Women, Same Stereotypes: Unpacking the Gender Bias Paradox in Large Language Models
Computation and Language
Finds AI adds more women but keeps the same unfair stereotypes.
Biased Tales: Cultural and Topic Bias in Generating Children's Stories
Computation and Language
Finds AI children's stories show unfair cultural and topic bias.