Conversations: Love Them, Hate Them, Steer Them
By: Niranjan Chebrolu, Gerard Christopher Yeo, Kokil Jaidka
Potential Business Impact:
Makes AI sound happier and more caring.
Large Language Models (LLMs) demonstrate increasing conversational fluency, yet instilling them with nuanced, human-like emotional expression remains a significant challenge. Current alignment techniques often address only surface-level output or require extensive fine-tuning. This paper demonstrates that targeted activation engineering can steer LLaMA 3.1-8B to exhibit more human-like emotional nuance. We first employ attribution patching to identify causally influential components, locating a key intervention locus by observing activation patterns during diagnostic conversational tasks. We then derive emotional expression vectors from the difference in activations generated by contrastive text pairs (positive vs. negative examples of target emotions). Applying these vectors to new conversational prompts significantly enhances emotional characteristics: steered responses show increased positive sentiment (e.g., joy, trust) and more frequent first-person pronoun usage, indicative of greater personal engagement. Our findings offer a precise and interpretable method for controlling specific emotional attributes in LLMs, contributing to the development of more aligned and empathetic conversational AI.
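The steering step described in the abstract can be sketched in a few lines: compute the difference between mean activations for positive and negative examples of a target emotion at one decoder layer, then add that vector back into the residual stream while generating a reply. Below is a minimal, illustrative sketch using Hugging Face transformers and PyTorch forward hooks; the checkpoint name, layer index, steering coefficient, and example prompts are assumptions for demonstration, not the paper's actual configuration or code.

```python
# Minimal sketch of contrastive activation steering (assumed setup, not the
# authors' implementation): derive an "emotion vector" from positive vs.
# negative examples and add it to the residual stream during generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.1-8B"  # assumed checkpoint name
LAYER_IDX = 14                          # hypothetical intervention locus
COEFF = 4.0                             # hypothetical steering strength

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model.eval()

def layer_output(prompt: str) -> torch.Tensor:
    """Return the residual-stream activation at LAYER_IDX for the last token."""
    captured = {}
    def hook(_module, _inputs, output):
        # Llama decoder layers return a tuple; hidden states are the first element.
        captured["h"] = output[0]
    handle = model.model.layers[LAYER_IDX].register_forward_hook(hook)
    with torch.no_grad():
        model(**tok(prompt, return_tensors="pt"))
    handle.remove()
    return captured["h"][0, -1]  # shape: (hidden_size,)

# Contrastive pairs: positive vs. negative expressions of the target emotion (joy).
positive = ["I'm thrilled we get to talk today, this genuinely makes me happy!"]
negative = ["Fine. Let's just get this conversation over with."]

# Steering vector = mean activation difference between the two sets.
steer = torch.stack([layer_output(p) for p in positive]).mean(0) \
      - torch.stack([layer_output(n) for n in negative]).mean(0)

def steering_hook(_module, _inputs, output):
    # Add the scaled emotion vector to every position of the residual stream.
    hidden = output[0] + COEFF * steer.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.model.layers[LAYER_IDX].register_forward_hook(steering_hook)
prompt = "How was your week?"
with torch.no_grad():
    out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=60)
handle.remove()
print(tok.decode(out[0], skip_special_tokens=True))
```

Under this kind of setup, the steered continuation would be expected to lean toward warmer, more personally engaged phrasing relative to an unsteered baseline, in line with the sentiment and first-person-pronoun effects the abstract reports.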
Similar Papers
From Passive to Persuasive: Steering Emotional Nuance in Human-AI Negotiation
Computation and Language
Makes AI sound happier and more personal.
Large Language Models are Highly Aligned with Human Ratings of Emotional Stimuli
Artificial Intelligence
AI understands feelings like people do.
Enhancing Human-Like Responses in Large Language Models
Computation and Language
Makes AI understand and talk like people.