Score: 2

REFER: Mitigating Bias in Opinion Summarisation via Frequency Framed Prompting

Published: September 19, 2025 | arXiv ID: 2509.15723v1

By: Nannan Huang, Haytham M. Fayek, Xiuzhen Zhang

Potential Business Impact:

Summaries show all opinions fairly.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Individuals express diverse opinions, and a fair summary should represent these viewpoints comprehensively. Previous research on fairness in opinion summarisation using large language models (LLMs) relied on hyperparameter tuning or on providing ground-truth distributional information in prompts. However, these methods face practical limitations: end users rarely modify default model parameters, and accurate distributional information is often unavailable. Building on cognitive science research demonstrating that frequency-based representations reduce systematic biases in human statistical reasoning by making reference classes explicit and reducing cognitive load, this study investigates whether frequency-framed prompting (REFER) can similarly enhance fairness in LLM opinion summarisation. Through systematic experimentation with different prompting frameworks, we adapted techniques known to improve human reasoning to elicit more effective information processing in language models compared to abstract probabilistic representations. Our results demonstrate that REFER enhances fairness in language models when summarising opinions. This effect is particularly pronounced in larger language models and with stronger reasoning instructions.
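To make the idea concrete, here is a minimal sketch of what a frequency-framed summarisation prompt might look like. The exact REFER prompt wording is not reproduced in this summary, so the template and function name below are illustrative assumptions: the point is only that the task is framed in explicit counts over a concrete reference class ("out of N reviews") rather than abstract probabilities.

```python
# Illustrative sketch only: the paper's exact REFER prompt is not shown
# here, so this frequency-framed template is an assumption.

def frequency_framed_prompt(reviews):
    """Build an opinion-summarisation prompt framed in explicit
    frequencies (counts over a concrete reference class) rather than
    abstract probabilities, in the spirit of REFER."""
    n = len(reviews)
    numbered = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(reviews))
    return (
        f"Below are {n} reviews.\n{numbered}\n\n"
        f"Think of the reviews as {n} individual people. For each opinion, "
        f"count how many of the {n} people express it, and write a summary "
        f"that represents each viewpoint in proportion to that count."
    )

prompt = frequency_framed_prompt([
    "Battery life is great.",
    "Battery life is great.",
    "The screen is too dim.",
])
print(prompt)
```

A probabilistic framing would instead say something like "70% of reviewers are positive"; the frequency framing keeps the reference class (the N concrete reviews) explicit, which is the property the cognitive science literature links to reduced bias.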

Country of Origin
🇦🇺 Australia


Page Count
20 pages

Category
Computer Science:
Computation and Language