Self-Transparency Failures in Expert-Persona LLMs: A Large-Scale Behavioral Audit
By: Alex Diep
Potential Business Impact:
AI models often fail to admit when they are simulating expertise.
When language models claim professional expertise without acknowledging their simulated nature, they create the preconditions for misplaced user trust. This study examines whether models exhibit self-transparency when assigned professional personas in high-stakes domains. In a common-garden experimental design, sixteen open-weight models (4B–671B parameters) were audited across 19,200 trials. Models exhibited sharp domain-specific inconsistency: a Financial Advisor persona elicited 30.8% disclosure at the first prompt, while a Neurosurgeon persona elicited only 3.5%. This inconsistency creates the preconditions for a hypothesized Reverse Gell-Mann Amnesia effect, in which appropriate disclosure in some domains leads users to overgeneralize trust to high-stakes contexts where disclosure failures are most consequential. Self-transparency did not generalize with scale: disclosure ranged from 2.8% to 73.6% across model families, with a 14B model reaching 61.4% while a 70B model produced just 4.1%. Model identity yielded a substantially larger improvement in model fit than parameter count ($\Delta R^{2}_{\text{adj}} = 0.359$ vs. $0.018$). Additionally, reasoning optimization actively suppressed self-transparency in some models, with reasoning variants showing up to 48.4% lower disclosure than their instruction-tuned counterparts. Bayesian validation with Rogan-Gladen correction confirmed robustness to judge measurement error ($\kappa = 0.908$). These findings indicate that transparency reflects model-specific training factors rather than a generalizable property that emerges with scale. Organizations cannot assume that safety properties verified in some domains will transfer to deployment contexts; deliberate behavior design and empirical verification across domains are required.
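For readers who want a concrete sense of the Rogan-Gladen correction mentioned above, the sketch below applies the standard formula to an observed disclosure rate. This is not the paper's code; the judge sensitivity and specificity values are placeholder assumptions, since the abstract reports only the agreement statistic $\kappa = 0.908$.

```python
def rogan_gladen(p_obs: float, sensitivity: float, specificity: float) -> float:
    """Correct an observed proportion for imperfect judge accuracy.

    p_obs:       apparent disclosure rate measured by the LLM judge
    sensitivity: P(judge flags disclosure | disclosure actually present)
    specificity: P(judge flags no disclosure | disclosure actually absent)
    """
    denom = sensitivity + specificity - 1.0
    if denom <= 0:
        raise ValueError("judge must be better than chance (Se + Sp > 1)")
    corrected = (p_obs + specificity - 1.0) / denom
    # Clamp the corrected estimate to the valid probability range [0, 1].
    return min(1.0, max(0.0, corrected))

# Placeholder judge accuracy applied to the Financial Advisor figure (30.8%).
print(rogan_gladen(0.308, sensitivity=0.95, specificity=0.97))
```

With these assumed values, the corrected disclosure rate is about 30.2%, illustrating how a near-perfect judge leaves the headline rates largely unchanged.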