Everything is Plausible: Investigating the Impact of LLM Rationales on Human Notions of Plausibility
By: Shramay Palta, Peter Rankel, Sarah Wiegreffe, and others
Potential Business Impact:
LLM-generated arguments can measurably shift people's beliefs, even on commonsense questions.
We investigate the degree to which human plausibility judgments of multiple-choice commonsense benchmark answers are subject to influence by (im)plausibility arguments for or against an answer, in particular, using rationales generated by LLMs. We collect 3,000 plausibility judgments from humans and another 13,600 judgments from LLMs. Overall, we observe increases and decreases in mean human plausibility ratings in the presence of LLM-generated PRO and CON rationales, respectively, suggesting that, on the whole, human judges find these rationales convincing. Experiments with LLMs reveal similar patterns of influence. Our findings demonstrate a novel use of LLMs for studying aspects of human cognition, while also raising practical concerns that, even in domains where humans are "experts" (i.e., common sense), LLMs have the potential to exert considerable influence on people's beliefs.
Similar Papers
Rethinking Human Preference Evaluation of LLM Rationales
Artificial Intelligence
Examines how people judge the quality of explanations that AI gives for its answers.
Position: On the Methodological Pitfalls of Evaluating Base LLMs for Reasoning
Computation and Language
Argues that common reasoning tests may reward guessing rather than real thinking.
Do LLMs Give Psychometrically Plausible Responses in Educational Assessments?
Computation and Language
Asks whether AI answers test questions the way human test-takers do.