Numerical Investigation of Sequence Modeling Theory using Controllable Memory Functions
By: Haotian Jiang, Zeyu Bao, Shida Wang, and more
Potential Business Impact:
Benchmarks how well sequence models remember information from earlier in a sequence.
The evolution of sequence modeling architectures, from recurrent neural networks and convolutional models to Transformers and structured state-space models, reflects ongoing efforts to address the diverse temporal dependencies inherent in sequential data. Despite this progress, systematically characterizing the strengths and limitations of these architectures remains a fundamental challenge. In this work, we propose a synthetic benchmarking framework to evaluate how effectively different sequence models capture distinct temporal structures. The core of this approach is to generate synthetic targets, each characterized by a memory function and a parameter that determines the strength of temporal dependence. This setup allows us to produce a continuum of tasks that vary in temporal complexity, enabling fine-grained analysis of model behavior with respect to specific memory properties. We focus on four representative memory functions, each corresponding to a distinct class of temporal structures. Experiments on several sequence modeling architectures confirm existing theoretical insights and reveal new findings. These results demonstrate the effectiveness of the proposed method in advancing theoretical understanding and highlight the importance of using controllable targets with clearly defined structures for evaluating sequence modeling architectures.
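To make the setup concrete, here is a minimal sketch of how a synthetic target driven by a memory function might be generated. This is an illustrative assumption, not the authors' exact construction: the target is taken to be a linear functional of past inputs, y_t = sum_s rho(s) * x_{t-s}, where rho is the memory function and the parameter lam controls the strength of temporal dependence. The specific exponential and polynomial decay functions below are examples of the kind of distinct temporal structures the paper varies, not the four functions it actually studies.

```python
import numpy as np

# Hypothetical memory functions; the decay parameter `lam` plays the role of
# the paper's "strength of temporal dependence" knob.
def exponential_memory(s, lam):
    """Fast-decaying memory: an input s steps back contributes ~ exp(-lam * s)."""
    return np.exp(-lam * s)

def polynomial_memory(s, lam):
    """Slowly decaying (long-range) memory: contribution ~ (s + 1)^(-lam)."""
    return (s + 1.0) ** (-lam)

def make_synthetic_target(x, memory_fn, lam):
    """Assumed target construction: y_t = sum_{s=0}^{t} rho(s) * x_{t-s}."""
    T = len(x)
    rho = memory_fn(np.arange(T), lam)   # rho[s] = weight on the input s steps in the past
    return np.array([np.dot(rho[: t + 1], x[t::-1]) for t in range(T)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(256)                                        # random input sequence
    y_short = make_synthetic_target(x, exponential_memory, lam=1.0)     # short-range task
    y_long = make_synthetic_target(x, polynomial_memory, lam=0.5)       # long-range task
    print(y_short[:5], y_long[:5])
```

Sweeping `lam` for a fixed memory function yields the continuum of tasks of varying temporal complexity described in the abstract, and a sequence model's fit to (x, y) pairs can then be compared across memory functions.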
Similar Papers
A Neuro-Symbolic Framework for Sequence Classification with Relational and Temporal Knowledge
Artificial Intelligence
Teaches computers to learn with changing rules.
Uncovering the Functional Roles of Nonlinearity in Memory
Machine Learning (CS)
Makes computer memory work better and simpler.
MesaNet: Sequence Modeling by Locally Optimal Test-Time Training
Machine Learning (CS)
Makes computers understand long stories better.