Scaling and context steer LLMs along the same computational path as the human brain
By: Joséphine Raugel, Stéphane d'Ascoli, Jérémy Rapin, and more
Potential Business Impact:
Brain and AI process information in a similar order.
Recent studies suggest that the representations learned by large language models (LLMs) are partially aligned with those of the human brain. However, whether and why this alignment arises from a similar sequence of computations remains elusive. In this study, we explore this question by examining temporally resolved brain signals of participants listening to 10 hours of an audiobook. We study these neural dynamics jointly with a benchmark encompassing 22 LLMs varying in size and architecture type. Our analyses confirm that LLMs and the brain generate representations in a similar order: specifically, activations in the initial layers of LLMs tend to best align with early brain responses, while the deeper layers of LLMs tend to best align with later brain responses. This brain-LLM alignment is consistent across transformer and recurrent architectures. However, its emergence depends on both model size and context length. Overall, this study sheds light on the sequential nature of these computations and on the factors underlying the partial convergence between biological and artificial neural networks.
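The core analysis can be pictured as a layer-by-layer encoding comparison: for each time lag after word onset, fit a regression model predicting brain responses from the activations of each LLM layer, and ask which layer aligns best at each lag. The sketch below illustrates this idea only; it uses synthetic random data and hypothetical array names and shapes, not the paper's actual recordings, models, or scoring pipeline.

```python
# Minimal sketch of layer-wise brain-LLM alignment on synthetic data.
# All shapes, names, and the scoring choice (cross-validated R^2 with ridge
# regression) are illustrative assumptions, not the paper's exact method.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_words, hidden_dim, n_layers = 2000, 64, 8
n_lags = 5  # hypothetical time lags after word onset

# Hypothetical inputs: per-word LLM activations for each layer,
# and a time-resolved brain signal (collapsed to one value per lag for brevity).
llm_acts = rng.standard_normal((n_layers, n_words, hidden_dim))
brain = rng.standard_normal((n_lags, n_words))

# For each lag and each layer, fit a ridge encoding model predicting the brain
# response from LLM activations, scored by cross-validated R^2.
scores = np.zeros((n_lags, n_layers))
for t in range(n_lags):
    for layer in range(n_layers):
        model = RidgeCV(alphas=np.logspace(-2, 4, 7))
        scores[t, layer] = cross_val_score(
            model, llm_acts[layer], brain[t], cv=5, scoring="r2"
        ).mean()

# The paper's claim corresponds to the best-aligned layer index increasing with
# the lag: shallow layers match early brain responses, deep layers match later ones.
best_layer = scores.argmax(axis=1)
print("Best-aligned layer per time lag:", best_layer)
```

In practice one would fit a separate encoding model per sensor or voxel and average scores across them, but the layer-versus-lag logic is the same.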
Similar Papers
Large Language Models Show Signs of Alignment with Human Neurocognition During Abstract Reasoning
Neurons and Cognition
Computers learn to think like humans.
Exploring Similarity between Neural and LLM Trajectories in Language Processing
Human-Computer Interaction
Shows how computers "think" like brains.
Large Language Models as Model Organisms for Human Associative Learning
Machine Learning (CS)
Helps computers learn like brains, remembering new things.