Beyond Attention: Toward Machines with Intrinsic Higher Mental States
By: Ahsan Adeel
Potential Business Impact:
Helps computers learn faster by focusing on important parts.
Attending to what is relevant is fundamental to both the mammalian brain and modern machine learning models such as Transformers. Yet, determining relevance remains a core challenge, traditionally offloaded to learning algorithms like backpropagation. Inspired by recent cellular neurobiological evidence linking neocortical pyramidal cells to distinct mental states, this work shows how models (e.g., Transformers) can emulate high-level perceptual processing and awake thought (imagination) states to pre-select relevant information before applying attention. Triadic neuronal-level modulation loops among questions ($Q$), clues (keys, $K$), and hypotheses (values, $V$) enable diverse, deep, parallel reasoning chains at the representation level and allow a rapid shift from initial biases to refined understanding. This leads to orders-of-magnitude faster learning with significantly reduced computational demand (e.g., fewer heads, layers, and tokens), at an approximate cost of $\mathcal{O}(N)$, where $N$ is the number of input tokens. Results span reinforcement learning (e.g., CarRacing in a high-dimensional visual setup), computer vision, and natural language question answering.
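The abstract's core mechanism — an O(N) pre-selection of relevant tokens before attention is applied — can be illustrated with a minimal sketch. The paper's triadic Q/K/V modulation loops are not specified here; this example assumes a deliberately simple stand-in gate (the norm of each key) purely to show the shape of the idea: score every token cheaply, keep only the top few, and run standard scaled dot-product attention over that reduced set.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def preselect_then_attend(Q, K, V, keep=4):
    # Hypothetical O(N) pre-selection: one cheap relevance score per
    # token (here simply the key norm; the paper's actual modulation
    # loop would replace this), then retain the top-`keep` tokens.
    scores = np.linalg.norm(K, axis=-1)      # O(N) per-token scores
    idx = np.argsort(scores)[-keep:]         # indices of retained tokens
    Ks, Vs = K[idx], V[idx]
    # Standard scaled dot-product attention over the reduced set only,
    # so the quadratic attention cost shrinks from N^2 to keep^2.
    att = softmax(Q @ Ks.T / np.sqrt(Q.shape[-1]))
    return att @ Vs

rng = np.random.default_rng(0)
N, d = 16, 8
Q = rng.normal(size=(1, d))   # one query
K = rng.normal(size=(N, d))   # N candidate keys ("clues")
V = rng.normal(size=(N, d))   # N values ("hypotheses")
out = preselect_then_attend(Q, K, V, keep=4)
print(out.shape)
```

The design point is that the gate runs once per token in linear time, so attention only ever sees the pre-selected subset — the same reason the paper can report fewer heads, layers, and tokens at comparable quality.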
Similar Papers
Attention via Synaptic Plasticity is All You Need: A Biologically Inspired Spiking Neuromorphic Transformer
Neural and Evolutionary Computing
Makes AI use much less power, like a brain.
Understanding Transformers through the Lens of Pavlovian Conditioning
Machine Learning (CS)
AI learns like dogs by remembering what works.
Multihead self-attention in cortico-thalamic circuits
Neurons and Cognition
Brain circuits compute like AI's smart attention.