Exploring the generalization of LLM truth directions on conversational formats
By: Timour Ichmoukhamedov, David Martens
Potential Business Impact:
Helps computers spot lies, even in long talks.
Several recent works argue that LLMs have a universal truth direction, i.e. true and false statements are linearly separable in the activation space of the model. It has been demonstrated that linear probes trained on a single hidden state of the model already generalize across a range of topics and might even be used for lie detection in LLM conversations. In this work, we explore how this truth direction generalizes across various conversational formats. We find good generalization between short conversations that end with a lie, but poor generalization to longer formats where the lie appears earlier in the input prompt. We propose a solution that significantly improves this type of generalization by appending a fixed key phrase at the end of each conversation. Our results highlight the challenges in building reliable LLM lie detectors that generalize to new settings.
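The abstract refers to linear probes trained on a single hidden state and to a fixed key phrase appended at the end of each conversation. Below is a minimal sketch of such a probing pipeline, assuming a Hugging Face decoder-only model; the model name, layer index, and key phrase are illustrative placeholders, not the authors' exact configuration.

```python
# Sketch of a truth-direction probe: take the hidden state at the final token
# of each statement and fit a linear classifier on true vs. false labels.
# Model name, layer index, and the appended key phrase are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "meta-llama/Llama-2-7b-hf"   # assumption: any decoder-only LLM
LAYER = 16                                # assumption: a middle layer
KEY_PHRASE = " The previous statement is" # assumption: fixed suffix anchoring the probe

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def last_token_activation(text: str) -> torch.Tensor:
    """Hidden state of the final input token at the chosen layer."""
    inputs = tokenizer(text + KEY_PHRASE, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[LAYER][0, -1, :]  # shape: (hidden_dim,)

# Toy labeled statements (1 = true, 0 = false); real experiments span many topics.
statements = [
    ("The capital of France is Paris.", 1),
    ("The capital of France is Berlin.", 0),
    ("Water freezes at 0 degrees Celsius.", 1),
    ("Water freezes at 50 degrees Celsius.", 0),
]

X = torch.stack([last_token_activation(s) for s, _ in statements]).float().numpy()
y = [label for _, label in statements]

# A logistic-regression probe: its weight vector plays the role of a truth direction.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", probe.score(X, y))
```

In this sketch the key phrase serves the role the abstract describes: it gives the probe a fixed final-token position to read from, regardless of where in a longer conversation the candidate lie occurred.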
Similar Papers
Probing the Geometry of Truth: Consistency and Generalization of Truth Directions in LLMs Across Logical Transformations and Question Answering Tasks
Computation and Language
Makes computers tell the truth more often.
From Directions to Cones: Exploring Multidimensional Representations of Propositional Facts in LLMs
Machine Learning (CS)
Makes AI tell the truth more often.
The Geometries of Truth Are Orthogonal Across Tasks
Machine Learning (CS)
Makes AI answers more trustworthy by checking its thinking.