Minion Gated Recurrent Unit for Continual Learning
By: Abdullah M. Zyarah, Dhireesha Kudithipudi
Potential Business Impact:
Makes smart computer programs run faster and use less memory.
The increasing demand for continual learning in sequential data processing has led to progressively more complex training methodologies and larger recurrent network architectures. Consequently, this has widened the gap between continual learning with recurrent neural networks (RNNs) and their ability to operate on devices with limited memory and compute. To address this challenge, we investigate the effectiveness of simplifying RNN architectures, particularly the gated recurrent unit (GRU), and the impact of such simplification on both single-task and multitask sequential learning. We propose a new GRU variant, the minion recurrent unit (MiRU). MiRU replaces the conventional gating mechanisms with scaling coefficients that regulate the dynamic updates of hidden states and historical context, reducing computational cost and memory requirements. Despite its simplified architecture, MiRU maintains performance comparable to the standard GRU while training 2.90x faster and using 2.88x fewer parameters, as demonstrated on sequential image classification and natural language processing benchmarks. We also investigate the impact of model simplification on learning capacity by performing continual learning tasks with a rehearsal-based strategy and global inhibition. We find that MiRU maintains stable performance in multitask learning even when using only rehearsal, unlike the standard GRU and its variants. These features position MiRU as a promising candidate for edge-device applications.
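The abstract does not give MiRU's update equations, but the core idea (replacing the GRU's input-dependent update and reset gates with learned scaling coefficients, keeping only the candidate-state weights) can be sketched. Below is a minimal, hypothetical PyTorch reconstruction; the names MiRUCell, alpha, and beta and the exact placement of the coefficients are illustrative assumptions, not the authors' formulation.

import torch
import torch.nn as nn

class MiRUCell(nn.Module):
    # Hypothetical sketch of a minion-recurrent-unit-style cell: the GRU's
    # input-dependent update/reset gates are replaced by learned scalar
    # scaling coefficients, so only the candidate-state weights remain.
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.W = nn.Linear(input_size, hidden_size)                # input -> candidate
        self.U = nn.Linear(hidden_size, hidden_size, bias=False)   # recurrent path
        self.alpha = nn.Parameter(torch.tensor(0.0))  # assumed: hidden-state update scale
        self.beta = nn.Parameter(torch.tensor(1.0))   # assumed: historical-context scale

    def forward(self, x, h):
        # Candidate state from the input and the scaled historical context.
        h_tilde = torch.tanh(self.W(x) + self.U(self.beta * h))
        a = torch.sigmoid(self.alpha)          # squash the update scale into (0, 1)
        return (1.0 - a) * h + a * h_tilde     # convex blend of old and new state

# One step over a toy batch.
cell = MiRUCell(input_size=8, hidden_size=16)
x, h = torch.randn(4, 8), torch.zeros(4, 16)
print(cell(x, h).shape)  # torch.Size([4, 16])

A standard GRU cell carries three input and three recurrent weight blocks; keeping only the candidate pair, as in this sketch, is consistent with the reported 2.88x parameter reduction.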
Similar Papers
M2RU: Memristive Minion Recurrent Unit for Continual Learning at the Edge
Machine Learning (CS)
Lets small computers learn new things without stopping.
MINIMALIST: switched-capacitor circuits for efficient in-memory computation of gated recurrent units
Hardware Architecture
Makes tiny computers remember more with less power.
MinGRU-Based Encoder for Turbo Autoencoder Frameworks
Information Theory
Makes wireless signals stronger and faster.