Misaligned from Within: Large Language Models Reproduce Our Double-Loop Learning Blindness
By: Tim Rogers, Ben Teehankee
Potential Business Impact:
AI learns our bad habits, hindering progress.
This paper examines a critical yet unexplored dimension of the AI alignment problem: the potential for Large Language Models (LLMs) to inherit and amplify existing misalignments between humans' espoused theories and their theories-in-use. Drawing on action science research, we argue that LLMs trained on human-generated text likely absorb and reproduce Model 1 theories-in-use - defensive reasoning patterns that both inhibit learning and create ongoing anti-learning dynamics at the dyad, group, and organisational levels. Through a detailed case study of an LLM acting as an HR consultant, we show how its advice, while superficially professional, systematically reinforces unproductive problem-solving approaches and blocks pathways to deeper organisational learning. This represents a specific instance of the alignment problem in which the AI system successfully mirrors human behaviour but inherits our cognitive blind spots. Such mirroring poses particular risks if LLMs are integrated into organisational decision-making processes, potentially entrenching anti-learning practices while lending them authority. The paper concludes by exploring the possibility of developing LLMs capable of facilitating Model 2 learning - a more productive theory-in-use - and suggests that this effort could advance both AI alignment research and action science practice. The analysis reveals an unexpected symmetry in the alignment challenge: the process of developing AI systems properly aligned with human values could yield tools that help humans themselves better embody those same values.