Loop as a Bridge: Can Looped Transformers Truly Link Representation Space and Natural Language Outputs?

Published: January 15, 2026 | arXiv ID: 2601.10242v1

By: Guanxu Chen, Dongrui Liu, Jing Shao

Large Language Models (LLMs) often exhibit a gap between their internal knowledge and their explicit linguistic outputs. In this report, we empirically investigate whether Looped Transformers (LTs)--architectures that increase computational depth by iterating shared layers--can bridge this gap by using their iterative nature as a form of introspection. Our experiments reveal that while increasing the number of loop iterations narrows the gap, this narrowing is partly driven by a degradation of the internal knowledge carried by the representations. Moreover, a further empirical analysis suggests that current LTs' ability to perceive their own representations does not improve across loops; it is only present in the final loop. These results suggest that while LTs offer a promising direction for scaling computational depth, they have yet to achieve the introspection required to truly link representation space and natural language.
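The core architectural idea in the abstract, iterating a shared set of layers to increase computational depth without adding parameters, can be sketched in a few lines. This is an illustrative toy only: the function and parameter names are hypothetical, and a simple affine map with a ReLU stands in for a full transformer block.

```python
# Minimal sketch of the looped-transformer idea: one weight-tied "layer"
# (here an affine map + ReLU, a stand-in for a transformer block) is
# applied repeatedly, so effective depth grows with the loop count while
# the parameter count stays fixed. All names are illustrative.

def shared_layer(x, w, b):
    """One weight-tied block: y_i = relu(sum_j w[i][j] * x[j] + b[i])."""
    return [max(0.0, sum(wij * xj for wij, xj in zip(row, x)) + bi)
            for row, bi in zip(w, b)]

def looped_forward(x, w, b, n_loops):
    """Iterate the same block n_loops times (weight sharing across depth)."""
    for _ in range(n_loops):
        x = shared_layer(x, w, b)
    return x

# Usage: the same parameters serve any loop count; only depth changes.
w = [[0.5, 0.1], [0.0, 0.5]]
b = [0.0, 0.0]
h1 = looped_forward([1.0, 1.0], w, b, n_loops=1)
h4 = looped_forward([1.0, 1.0], w, b, n_loops=4)
```

The report's question is whether the intermediate states of such a loop (the successive values of `x`) are themselves perceived and verbalized by the model, or whether only the final iteration's state reaches the language output.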

Category: Computer Science > Computation and Language