From Passive to Persuasive: Steering Emotional Nuance in Human-AI Negotiation
By: Niranjan Chebrolu, Gerard Christopher Yeo, Kokil Jaidka
Potential Business Impact:
Makes AI sound happier and more personal.
Large Language Models (LLMs) demonstrate increasing conversational fluency, yet instilling nuanced, human-like emotional expression in them remains a significant challenge. Current alignment techniques often address only surface-level output or require extensive fine-tuning. This paper demonstrates that targeted activation engineering can steer LLaMA 3.1-8B to exhibit more human-like emotional nuance. We first employ attribution patching to identify causally influential components, locating a key intervention site by observing activation patterns during diagnostic conversational tasks. We then derive emotional expression vectors from the difference in activations generated by contrastive text pairs (positive vs. negative examples of target emotions). Applying these vectors to new conversational prompts significantly enhances emotional characteristics: steered responses show increased positive sentiment (e.g., joy, trust) and more frequent first-person pronoun usage, indicative of greater personal engagement. Our findings offer a precise, interpretable framework and new directions for the study of conversational AI.
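The contrastive steering recipe described in the abstract lends itself to a compact implementation. Below is a minimal sketch, assuming the Hugging Face Transformers API for LLaMA; the checkpoint name, layer index, steering coefficient, and prompt pair are illustrative assumptions, not values or code from the paper (which also averages over many contrastive pairs and selects the layer via attribution patching).

```python
# Minimal sketch of contrastive activation steering on a LLaMA-style model.
# LAYER, ALPHA, and the prompts are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B"  # assumed checkpoint name
LAYER = 14                          # hypothetical intervention layer
ALPHA = 4.0                         # hypothetical steering strength

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
model.eval()

def residual_at(prompt: str, layer: int) -> torch.Tensor:
    """Mean residual-stream activation at `layer` over the prompt tokens."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, output_hidden_states=True)
    return out.hidden_states[layer][0].mean(dim=0)

# One contrastive pair for the target emotion (the paper uses many such
# positive/negative examples and averages the resulting differences).
positive = "I'm absolutely delighted we could work this out together!"
negative = "Fine. Whatever terms you want."
steer_vec = residual_at(positive, LAYER) - residual_at(negative, LAYER)

def add_vector(module, inputs, output):
    """Forward hook: add the scaled emotion vector to the residual stream."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + ALPHA * steer_vec.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.model.layers[LAYER].register_forward_hook(add_vector)
prompt = "Let's negotiate the delivery schedule for this contract."
ids = tok(prompt, return_tensors="pt").input_ids
steered = model.generate(ids, max_new_tokens=80, do_sample=True, top_p=0.9)
handle.remove()
print(tok.decode(steered[0], skip_special_tokens=True))
```

The design choice here is to intervene once, at a single decoder layer, so the steered and unsteered models differ by exactly one added vector; this keeps the intervention interpretable and easy to ablate.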
Similar Papers
AI shares emotion with humans across languages and cultures
Computation and Language
AI understands and shows feelings like people.
Steerable Chatbots: Personalizing LLMs with Preference-Based Activation Steering
Human-Computer Interaction
Personalizes AI chatbot responses to match your preferences.
Ensembling Large Language Models to Characterize Affective Dynamics in Student-AI Tutor Dialogues
Computation and Language
Helps AI tutors understand student feelings to teach better.