Can LLMs Generate Behaviors for Embodied Virtual Agents Based on Personality Traits?
By: Bin Han, Deuksin Kwon, Spencer Lin, and more
Potential Business Impact:
Makes computer characters act like real people.
This study proposes a framework that employs personality prompting with Large Language Models (LLMs) to generate verbal and nonverbal behaviors for virtual agents based on personality traits. Focusing on extraversion, we evaluated the system in two scenarios, negotiation and ice-breaking, using both introverted and extroverted agents. In Experiment 1, we conducted agent-to-agent simulations and performed linguistic analysis and personality classification to assess whether the LLM-generated language reflected the intended traits and whether the corresponding nonverbal behaviors varied by personality. In Experiment 2, we carried out a user study to evaluate whether these personality-aligned behaviors were consistent with their intended traits and perceptible to human observers. Our results show that LLMs can generate verbal and nonverbal behaviors that align with personality traits, and that users are able to recognize these traits through the agents' behaviors. This work underscores the potential of LLMs in shaping personality-aligned virtual agents.
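The abstract describes personality prompting only at a high level. Below is a minimal sketch of the general idea under stated assumptions: a trait-conditioned system prompt that asks an LLM for both an utterance and nonverbal cues for one dialogue turn. The persona wording, the JSON output schema, and the `call_llm` stub are illustrative assumptions, not the authors' actual prompts or pipeline.

```python
# Minimal sketch (not the paper's implementation) of personality prompting:
# condition one dialogue turn on an extraversion level and request both
# verbal and nonverbal behavior from a chat-style LLM.

# Hypothetical persona descriptions; the paper's prompts may differ.
PERSONA = {
    "introverted": "You are reserved, speak briefly, and avoid small talk.",
    "extroverted": "You are outgoing, talkative, and enthusiastic.",
}

def build_messages(trait: str, scenario: str, partner_utterance: str) -> list[dict]:
    """Assemble a personality-conditioned prompt for one dialogue turn."""
    system = (
        f"{PERSONA[trait]} You are a virtual agent in a {scenario} scenario. "
        'Reply with JSON: {"utterance": ..., "gesture": ..., "gaze": ...}.'
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": partner_utterance},
    ]

def call_llm(messages: list[dict]) -> str:
    """Placeholder for any chat-completion endpoint; not a real API call."""
    raise NotImplementedError

if __name__ == "__main__":
    msgs = build_messages("extroverted", "negotiation", "Shall we split it 60/40?")
    print(msgs[0]["content"])  # inspect the trait-conditioned system prompt
```

In such a setup, the returned JSON could drive both the agent's speech and its animation layer (gesture, gaze), which is the kind of verbal/nonverbal pairing the study evaluates.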
Similar Papers
Large Language Model Agent Personality and Response Appropriateness: Evaluation by Human Linguistic Experts, LLM-as-Judge, and Natural Language Processing Model
Human-Computer Interaction
Tests how well AI acts like a person.
The Personality Illusion: Revealing Dissociation Between Self-Reports & Behavior in LLMs
Artificial Intelligence
Computers can act like people, but don't always behave that way.