Mind Reading or Misreading? LLMs on the Big Five Personality Test
By: Francesco Di Cursi, Chiara Boldrini, Marco Conti, and more
Potential Business Impact:
Helps computers guess your personality from writing.
We evaluate large language models (LLMs) for automatic personality prediction from text (APPT) under a binary framing of the Five Factor Model (BIG5). Five models, including GPT-4 and lightweight open-source alternatives, are tested across three heterogeneous datasets (Essays, MyPersonality, Pandora) and two prompting strategies (minimal vs. enriched with linguistic and psychological cues). Enriched prompts reduce invalid outputs and improve class balance, but they also introduce a systematic bias toward predicting trait presence. Performance varies substantially: Openness and Agreeableness are relatively easier to detect, while Extraversion and Neuroticism remain challenging. Although open-source models sometimes approach GPT-4 and prior benchmarks, no configuration yields consistently reliable predictions in zero-shot binary settings. Moreover, aggregate metrics such as accuracy and macro-F1 mask significant per-class asymmetries, so per-class recall offers clearer diagnostic value. These findings show that current out-of-the-box LLMs are not yet suitable for APPT, and that careful coordination of prompt design, trait framing, and evaluation metrics is essential for interpretable results.
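To make the two prompting strategies concrete, here is a minimal Python sketch of zero-shot binary trait prediction against an OpenAI-style chat API. The prompt wording, cue placeholder, model name, and yes/no parsing rule are illustrative assumptions for this sketch, not the paper's exact prompts or pipeline.

```python
# Sketch of minimal vs. enriched zero-shot prompting for binary BIG5
# prediction. Assumes OPENAI_API_KEY is set in the environment; the
# prompt templates below are hypothetical, not the paper's verbatim text.
from openai import OpenAI

client = OpenAI()

MINIMAL = (
    "Read the text below and answer with exactly 'yes' or 'no': "
    "does the author score high on {trait}?\n\nText: {text}"
)

ENRICHED = (
    "You are a personality psychologist. {trait} is one of the Big Five "
    "traits; typical markers include {cues}. Considering word choice, "
    "tone, and topics, answer with exactly 'yes' or 'no': does the "
    "author score high on {trait}?\n\nText: {text}"
)

def predict_trait(text: str, trait: str, cues: str, enriched: bool = False) -> str:
    """Return 'yes', 'no', or 'invalid' for one trait on one text."""
    prompt = (ENRICHED if enriched else MINIMAL).format(
        trait=trait, cues=cues, text=text
    )
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic zero-shot classification
    )
    answer = reply.choices[0].message.content.strip().lower()
    # Anything other than a clean yes/no is an invalid output, one of the
    # failure modes the enriched prompts were reported to reduce.
    return answer if answer in {"yes", "no"} else "invalid"
```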
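The abstract's point about aggregate metrics can also be shown with a toy example: a model biased toward predicting trait presence (as the enriched prompts were observed to be) can post a respectable accuracy while its recall on the "trait absent" class collapses. The label counts below are invented purely for demonstration and do not come from the paper's datasets.

```python
# Toy illustration of how aggregate metrics mask per-class asymmetries.
from sklearn.metrics import accuracy_score, f1_score, recall_score

# Hypothetical gold labels for one trait: 1 = trait present, 0 = absent.
y_true = [1] * 70 + [0] * 30
# A classifier biased toward predicting presence: 1 almost everywhere.
y_pred = [1] * 95 + [0] * 5

print("accuracy:", accuracy_score(y_true, y_pred))                # 0.75
print("macro-F1:", round(f1_score(y_true, y_pred, average="macro"), 3))
# Per-class recall exposes the asymmetry the aggregates hide:
# recall on class 0 ("absent") is only 5/30, vs. 1.0 on class 1.
print("recall [absent, present]:", recall_score(y_true, y_pred, average=None))
```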
Similar Papers
MindShift: Analyzing Language Models' Reactions to Psychological Prompts
Computation and Language
AI can now act like different people.
Evaluating LLM Alignment on Personality Inference from Real-World Interview Data
Computation and Language
Computers can't guess your personality from talking.
From Five Dimensions to Many: Large Language Models as Precise and Interpretable Psychological Profilers
Artificial Intelligence
Computers guess your personality from a few answers.