Socially Interactive Agents for Preserving and Transferring Tacit Knowledge in Organizations

Published: August 27, 2025 | arXiv ID: 2508.19942v1

By: Martin Benderoth, Patrick Gebhard, Christian Keller, and more

Potential Business Impact:

AI agents help capture and share hard-to-articulate, experience-based work knowledge.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

This paper introduces a novel approach to the challenge of preserving and transferring tacit knowledge: deep, experience-based insights that are hard to articulate yet vital for decision-making, innovation, and problem-solving. Traditional methods rely heavily on human facilitators, which is effective but resource-intensive and hard to scale. A promising alternative is the use of Socially Interactive Agents (SIAs) as AI-driven knowledge-transfer facilitators. These agents interact with users autonomously and in a socially intelligent way through multimodal behaviors (verbal, paraverbal, nonverbal), simulating expert roles in various organizational contexts. SIAs engage employees in empathic, natural-language dialogues, helping them externalize insights that might otherwise remain unspoken. Their success hinges on building trust, as employees are often hesitant to share tacit knowledge without assurance of confidentiality and appreciation.

Key technologies include Large Language Models (LLMs) for generating context-relevant dialogue, Retrieval-Augmented Generation (RAG) to ground the agent in organizational knowledge, and Chain-of-Thought (CoT) prompting to guide structured reflection. Together, these enable SIAs to actively elicit knowledge, uncover implicit assumptions, and connect individual insights to broader organizational contexts.

Potential applications span onboarding, where SIAs provide personalized guidance and introductions, and knowledge retention, where they conduct structured interviews with retiring experts to capture the heuristics behind their decisions. Success depends on addressing ethical and operational challenges such as data privacy, algorithmic bias, and resistance to AI; transparency, robust validation, and a culture of trust are essential to mitigate these risks.
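
To make the interplay of retrieval and structured prompting more concrete, here is a minimal sketch of one dialogue turn for such an agent. This is a hypothetical illustration, not the authors' system: the names KnowledgeSnippet, retrieve, elicitation_prompt, and sia_turn are assumptions, the keyword-overlap retriever stands in for a real RAG vector store, and the llm callable is a placeholder for any text-generation backend.

```python
# Hypothetical sketch of one SIA dialogue turn combining retrieval (RAG)
# with a Chain-of-Thought elicitation prompt. Not the paper's implementation.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class KnowledgeSnippet:
    source: str  # e.g. a handbook section or a past interview note
    text: str


def retrieve(query: str, store: List[KnowledgeSnippet], k: int = 2) -> List[KnowledgeSnippet]:
    """Toy keyword-overlap retrieval standing in for a vector store."""
    q_terms = set(query.lower().split())
    scored = sorted(
        store,
        key=lambda s: len(q_terms & set(s.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def elicitation_prompt(employee_utterance: str, context: List[KnowledgeSnippet]) -> str:
    """Builds a CoT-style prompt that asks the agent to reason step by step
    before posing its next question to the employee."""
    context_block = "\n".join(f"- [{s.source}] {s.text}" for s in context)
    return (
        "You are an empathic knowledge-transfer facilitator.\n"
        f"Organizational context:\n{context_block}\n\n"
        f'The employee said: "{employee_utterance}"\n\n'
        "Think step by step:\n"
        "1. What implicit assumption or heuristic might underlie this statement?\n"
        "2. How does it connect to the organizational context above?\n"
        "3. Formulate ONE open, appreciative follow-up question that helps the\n"
        "   employee articulate the tacit reasoning behind their decision.\n"
    )


def sia_turn(employee_utterance: str,
             store: List[KnowledgeSnippet],
             llm: Callable[[str], str]) -> str:
    """One dialogue turn: retrieve context, build the CoT prompt, generate."""
    context = retrieve(employee_utterance, store)
    return llm(elicitation_prompt(employee_utterance, context))


if __name__ == "__main__":
    store = [
        KnowledgeSnippet("maintenance handbook", "pump inspections follow checklist P-7"),
        KnowledgeSnippet("exit interview 2023", "veterans skip checklist steps when vibration is low"),
    ]
    # Echo backend so the sketch runs without any external service.
    echo_llm = lambda prompt: f"[follow-up question generated from a {len(prompt)}-character prompt]"
    print(sia_turn("I usually just know when the pump needs servicing.", store, echo_llm))
```

In a deployed SIA, the echo backend would presumably be replaced by an actual LLM call, and the generated follow-up question would be delivered through the agent's multimodal behavior layer rather than printed as text.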

Page Count
4 pages

Category
Computer Science:
Human-Computer Interaction