On the Role of Contextual Information and Ego States in LLM Agent Behavior for Transactional Analysis Dialogues
By: Monika Zamojska, Jarosław A. Chudziak
LLM-powered agents are now used in many areas, from customer support to education, and there is growing interest in their ability to behave more like humans. This includes fields such as social, political, and psychological research, where the goal is to model group dynamics and social behavior. However, current LLM agents often lack the psychological depth and consistency needed to capture real patterns of human thinking. They usually produce direct or statistically likely answers but miss the deeper goals, emotional conflicts, and motivations that drive real human interactions. This paper proposes a Multi-Agent System (MAS) inspired by Transactional Analysis (TA) theory. In the proposed system, each agent is divided into three ego states - Parent, Adult, and Child - treated as separate knowledge structures with their own perspectives and reasoning styles. To enrich its response process, each ego state has access to an information retrieval mechanism that supplies relevant contextual information from its own vector store. This architecture is evaluated through ablation tests in a simulated dialogue scenario, comparing agents with and without information retrieval. The results are promising and open new directions for exploring how psychologically grounded structures can enrich agent behavior. The contribution is an agent architecture that integrates Transactional Analysis theory with contextual information retrieval to enhance the realism of LLM-based multi-agent simulations.
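The architecture described above — one agent split into Parent, Adult, and Child ego states, each with its own retrievable context store — can be sketched in a few lines. This is a minimal illustrative mock-up, not the paper's implementation: the class names, the toy bag-of-words "embedding," and the Jaccard similarity ranking are all assumptions standing in for a real embedding model and vector database.

```python
from dataclasses import dataclass, field

def embed(text: str) -> set[str]:
    # Toy stand-in for a real embedding model: a bag of lowercase words.
    return set(text.lower().split())

def similarity(a: set[str], b: set[str]) -> float:
    # Jaccard overlap as a toy similarity measure (a real system would
    # use cosine similarity over dense embeddings).
    return len(a & b) / len(a | b) if a | b else 0.0

@dataclass
class EgoState:
    name: str                                        # "Parent", "Adult", or "Child"
    style: str                                       # reasoning-style prompt fragment
    store: list[str] = field(default_factory=list)   # per-state contextual memory

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        # Rank this ego state's stored snippets against the query.
        q = embed(query)
        ranked = sorted(self.store,
                        key=lambda s: similarity(embed(s), q),
                        reverse=True)
        return ranked[:k]

@dataclass
class TAAgent:
    ego_states: dict[str, EgoState]

    def context_for(self, state: str, query: str) -> str:
        # Retrieved snippets would be prepended to the LLM prompt
        # for the chosen ego state before generating a reply.
        hits = self.ego_states[state].retrieve(query)
        return f"[{state}] context: {hits}"

agent = TAAgent({
    "Adult": EgoState("Adult", "objective, fact-based",
                      store=["the meeting is at noon", "budget was approved"]),
})
print(agent.context_for("Adult", "when is the meeting"))
```

The point of the sketch is the separation: each ego state owns an independent store, so the same query can surface different context (and thus a different reasoning style) depending on which state responds.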