More Bias, Less Bias: BiasPrompting for Enhanced Multiple-Choice Question Answering
By: Duc Anh Vu, Thong Nguyen, Cong-Duy Nguyen, and more
Potential Business Impact:
Helps AI answer tricky multiple-choice questions by reasoning through every option before choosing.
With the advancement of large language models (LLMs), their performance on multiple-choice question (MCQ) tasks has improved significantly. However, existing approaches face a key limitation: answer choices are typically presented to LLMs without contextual grounding or explanation. This absence of context can lead to incomplete exploration of the possible answers, ultimately degrading the models' reasoning capabilities. To address these challenges, we introduce BiasPrompting, a novel inference framework that guides LLMs to generate and critically evaluate reasoning across all plausible answer options before reaching a final prediction. It consists of two stages: a reasoning generation stage, in which the model is prompted to produce supporting reasoning for each answer option, followed by a reasoning-guided agreement stage, in which the generated reasonings are synthesized to select the most plausible answer. In comprehensive evaluations, BiasPrompting demonstrates significant improvements on five widely used multiple-choice question answering benchmarks. Our experiments show that BiasPrompting enhances the reasoning capabilities of LLMs and provides a strong foundation for tackling complex and challenging questions, particularly in settings where existing methods underperform.
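To make the two stages concrete, here is a minimal sketch of how such a pipeline could be wired up. The `llm` callable, the `bias_prompting` helper name, and the prompt wording are illustrative assumptions inferred from the abstract, not the authors' actual implementation or prompt templates.

```python
from typing import Callable, Dict


def bias_prompting(
    llm: Callable[[str], str],
    question: str,
    options: Dict[str, str],  # e.g. {"A": "Paris", "B": "Rome", ...}
) -> str:
    """Two-stage inference: per-option reasoning, then reasoning-guided agreement."""

    # Stage 1: reasoning generation -- ask the model to argue *for* each option,
    # so every answer choice gets contextual grounding before the final decision.
    reasonings: Dict[str, str] = {}
    for label, text in options.items():
        stage1_prompt = (
            f"Question: {question}\n"
            f"Candidate answer ({label}): {text}\n"
            "Write a short argument explaining why this answer could be correct."
        )
        reasonings[label] = llm(stage1_prompt).strip()

    # Stage 2: reasoning-guided agreement -- synthesize the per-option arguments
    # and ask the model to select the most plausible answer.
    argument_block = "\n".join(
        f"({label}) {options[label]}\nArgument: {reasonings[label]}"
        for label in options
    )
    stage2_prompt = (
        f"Question: {question}\n\n"
        "Below is one supporting argument for each answer option:\n"
        f"{argument_block}\n\n"
        "Critically compare these arguments and reply with only the letter "
        "of the most plausible answer."
    )
    return llm(stage2_prompt).strip()


if __name__ == "__main__":
    # Toy stand-in for a real model, so the sketch runs end to end.
    def fake_llm(prompt: str) -> str:
        return "A" if "only the letter" in prompt else "Plausible because ..."

    answer = bias_prompting(
        fake_llm,
        question="What is the capital of France?",
        options={"A": "Paris", "B": "Rome", "C": "Madrid", "D": "Berlin"},
    )
    print(answer)  # -> "A"
```

In a real setting, `fake_llm` would be replaced by a wrapper around an actual model API, and the two prompts could be tuned per benchmark; the key design choice shown here is that every option receives its own supporting argument before any comparison happens.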
Similar Papers
Adaptive Generation of Bias-Eliciting Questions for LLMs
Computers and Society
Finds unfairness in AI answers to real questions.
Promptception: How Sensitive Are Large Multimodal Models to Prompts?
CV and Pattern Recognition
Tests how sensitive AI answers are to changes in prompt wording.
Rethinking Prompt-based Debiasing in Large Language Models
Computation and Language
AI might not really understand fairness.