Scaling and context steer LLMs along the same computational path as the human brain

Published: December 1, 2025 | arXiv ID: 2512.01591v1

By: Joséphine Raugel, Stéphane d'Ascoli, Jérémy Rapin, and more

Potential Business Impact:

LLMs and the human brain process language in a similar computational order.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent studies suggest that the representations learned by large language models (LLMs) are partially aligned with those of the human brain. However, whether and why this alignment arises from a similar sequence of computations remains elusive. In this study, we explore this question by examining temporally-resolved brain signals of participants listening to 10 hours of an audiobook. We study these neural dynamics jointly with a benchmark encompassing 22 LLMs varying in size and architecture type. Our analyses confirm that LLMs and the brain generate representations in a similar order: activations in the initial layers of LLMs tend to best align with early brain responses, while deeper layers tend to best align with later brain responses. This brain-LLM alignment is consistent across transformer and recurrent architectures; however, its emergence depends on both model size and context length. Overall, this study sheds light on the sequential nature of computations and on the factors underlying the partial convergence between biological and artificial neural networks.
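
The analysis described is a layer-wise encoding comparison: for each brain time lag, activations from each LLM layer are regressed onto the brain response, and the best-predicting layer is recorded per lag. Below is a minimal sketch of that kind of analysis in Python, assuming a ridge-regression encoding model and using synthetic arrays in place of real LLM activations and temporally-resolved brain recordings; all variable names, shapes, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins (the study uses real LLM activations and real
# brain recordings; the shapes here are purely illustrative).
n_words, n_layers, d_model = 2000, 12, 64   # words, LLM layers, feature dim
n_lags, n_channels = 10, 32                 # brain time lags, sensors

# X[l]: layer-l activations per word; Y[t]: brain response at time lag t.
X = rng.standard_normal((n_layers, n_words, d_model))
Y = rng.standard_normal((n_lags, n_words, n_channels))

def encoding_score(features, brain):
    """Held-out correlation between ridge predictions and brain data."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, brain, test_size=0.2, random_state=0)
    model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    # Average Pearson correlation across sensor channels.
    r = [np.corrcoef(pred[:, c], y_te[:, c])[0, 1]
         for c in range(brain.shape[1])]
    return float(np.mean(r))

# For each brain time lag, record the LLM layer that aligns best.
best_layer = []
for t in range(n_lags):
    scores = [encoding_score(X[l], Y[t]) for l in range(n_layers)]
    best_layer.append(int(np.argmax(scores)))

print("Best-aligned layer per time lag:", best_layer)
# On real data, the paper reports that early brain responses align with
# shallow LLM layers and later responses with deeper layers.
```

On synthetic noise the best layer per lag is arbitrary; the paper's finding is that on real data this mapping is monotonic, with its emergence depending on model size and context length.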