Illuminating the Black Box of Reservoir Computing
By: Claus Metzner, Achim Schilling, Thomas Kinfe, and more
Potential Business Impact:
Makes computers learn with fewer parts.
Reservoir computers, based on large recurrent neural networks with fixed random connections, are known to perform a wide range of information processing tasks. However, the nature of data transformations within the reservoir, the interplay of input matrix, reservoir, and readout layer, as well as the effect of varying design parameters, remain poorly understood. In this study, we shift the focus from performance maximization to systematic simplification, aiming to identify the minimal computational ingredients required for different model tasks. We examine how many neurons, how much nonlinearity, and which connective structure are necessary and sufficient to perform certain tasks, also considering neurons with non-sigmoidal activation functions and networks with non-random connectivity. Surprisingly, we find non-trivial cases where the readout layer performs the bulk of the computation, with the reservoir merely providing weak nonlinearity and memory. Furthermore, design aspects often considered secondary, such as the structure of the input matrix, the steepness of activation functions, or the precise input/output timing, emerge as critical determinants of system performance in certain tasks.
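To make the architecture described in the abstract concrete, the sketch below shows the conventional echo-state-network pipeline: a fixed random input matrix, a fixed random recurrent reservoir with a sigmoidal (tanh) activation, and a linear readout layer that is the only trained component. The matrix names (W_in, W, W_out), parameter values, ridge-regression readout, and the toy sine-prediction task are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Minimal echo state network sketch (conventional setup; names and
# parameter values are illustrative, not taken from the paper).
rng = np.random.default_rng(0)

n_in, n_res = 1, 200                              # input and reservoir sizes
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))      # fixed random input matrix
W = rng.normal(0.0, 1.0, (n_res, n_res))          # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale spectral radius below 1

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)  # sigmoidal state update
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave (illustrative only).
t = np.arange(0.0, 60.0, 0.1)
u_seq, y_seq = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u_seq)

# Only the linear readout is trained, here via ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y_seq)
y_pred = X @ W_out
print("train MSE:", np.mean((y_pred - y_seq) ** 2))
```

In this standard setup only W_out is fitted, which is exactly the division of labor the study probes: how much of the computation the trained readout can absorb when the reservoir contributes only weak nonlinearity and memory.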
Similar Papers
Neuronal correlations shape the scaling behavior of memory capacity and nonlinear computational capability of reservoir recurrent neural networks
Disordered Systems and Neural Networks
Makes computers learn faster with more brain cells.
Towards a Comprehensive Theory of Reservoir Computing
Neural and Evolutionary Computing
Predicts how well computer memory systems work.
Reservoir Computing: A New Paradigm for Neural Networks
Machine Learning (CS)
Makes computers learn from messy, changing information.