Score: 1

Are Stereotypes Leading LLMs' Zero-Shot Stance Detection?

Published: October 23, 2025 | arXiv ID: 2510.20154v1

By: Anthony Dubreuil, Antoine Gourru, Christine Largeron, and more

Potential Business Impact:

Helps ensure that automated opinion (stance) classification treats social groups fairly.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models inherit stereotypes from their pretraining data, leading to biased behavior toward certain social groups in many Natural Language Processing tasks, such as hate speech detection or sentiment analysis. Surprisingly, the evaluation of this kind of bias in stance detection methods has been largely overlooked by the community. Stance detection involves labeling a statement as being against, in favor of, or neutral towards a specific target, and it is among the most sensitive NLP tasks, as it often relates to political leanings. In this paper, we focus on the bias of Large Language Models when performing stance detection in a zero-shot setting. We automatically annotate posts in pre-existing stance detection datasets with two attributes: the dialect or vernacular of a specific group, and text complexity/readability, to investigate whether these attributes influence the model's stance detection decisions. Our results show that LLMs exhibit significant stereotypes in stance detection tasks, such as incorrectly associating pro-marijuana views with low text complexity and African American dialect with opposition to Donald Trump.
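The abstract describes annotating posts with attributes such as text complexity and then checking whether those attributes sway zero-shot stance predictions. Below is a minimal sketch of that kind of probe, not the paper's implementation: the zero_shot_stance callable, the Flesch readability threshold, and the bucket names are illustrative assumptions, and textstat is just one possible readability annotator.

    # Hedged sketch: probe whether a readability attribute correlates with
    # a zero-shot stance classifier's decisions.
    # Assumptions: zero_shot_stance(text, target) wraps whichever LLM is used and
    # returns one of {"favor", "against", "neutral"}; posts is a list of
    # (text, target, gold_label) tuples from a stance detection dataset.
    from collections import defaultdict
    import textstat  # pip install textstat

    def readability_bucket(text: str) -> str:
        """Bucket a post by Flesch reading ease (higher score = simpler text)."""
        score = textstat.flesch_reading_ease(text)
        return "low_complexity" if score >= 60 else "high_complexity"

    def stance_rates_by_bucket(posts, zero_shot_stance):
        """Count predicted stances per (readability bucket, gold label) pair.
        A large gap between buckets on posts sharing the same gold label hints
        at a spurious association between text complexity and predicted stance."""
        counts = defaultdict(lambda: defaultdict(int))
        for text, target, gold in posts:
            bucket = readability_bucket(text)
            pred = zero_shot_stance(text, target)
            counts[(bucket, gold)][pred] += 1
        return {key: dict(preds) for key, preds in counts.items()}

The same counting scheme applies to the dialect attribute: replace readability_bucket with a dialect annotator and compare how often each group is labeled "against" a given target despite identical gold labels.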

Country of Origin
🇫🇷 France

Repos / Data Links

Page Count
14 pages

Category
Computer Science:
Computation and Language