On Advancements of the Forward-Forward Algorithm
By: Mauricio Ortiz Torres, Markus Lange, Arne P. Raulf
Potential Business Impact:
Makes computers learn better with less memory.
The Forward-Forward algorithm has evolved in machine learning research, tackling more complex tasks that mimic real-life applications. In recent years, several techniques have improved its performance over the original version, allowing it to handle a challenging dataset such as CIFAR-10 without losing its flexibility and low memory usage. Our results show that a combination of convolutional channel grouping, learning rate schedules, and independent block structures during training leads to a 20% decrease in test error percentage. Additionally, with a view to further implementations on low-capacity hardware, we present a series of lighter models that achieve test error percentages within (21±3)% with between 164,706 and 754,386 trainable parameters. This serves as a basis for our future study on complete verification and validation of these kinds of neural networks.
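For readers unfamiliar with the underlying method: the Forward-Forward algorithm replaces backpropagation with purely local, per-layer updates, which is what enables the low memory usage the abstract highlights. Below is a minimal, hedged sketch of one Forward-Forward layer in NumPy; it is an illustration of the general technique (Hinton's goodness-based local objective), not the authors' implementation, and all names (`FFLayer`, `train_step`, the threshold value) are assumptions for this example.

```python
import numpy as np

# Illustrative sketch of Forward-Forward local training (assumed details, not
# the paper's exact models). Each layer is trained independently: "goodness"
# is the sum of squared activations; positive samples should score above a
# threshold, negative samples below it. No gradients cross layer boundaries.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class FFLayer:
    def __init__(self, n_in, n_out, lr=0.03, threshold=2.0):
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
        self.b = np.zeros(n_out)
        self.lr = lr
        self.threshold = threshold

    def forward(self, x):
        # Normalize inputs so only the direction, not the goodness, of the
        # previous layer is passed on (as in the original FF formulation).
        x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        return relu(x @ self.W + self.b)

    def train_step(self, x_pos, x_neg):
        # One local update: logistic loss on (goodness - threshold), pushing
        # positive goodness up and negative goodness down.
        grad_W = np.zeros_like(self.W)
        grad_b = np.zeros_like(self.b)
        for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
            xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
            h = relu(xn @ self.W + self.b)
            g = (h ** 2).sum(axis=1)                 # goodness per sample
            p = 1.0 / (1.0 + np.exp(-sign * (g - self.threshold)))
            dg = -(1.0 - p) * sign                   # dLoss/dgoodness
            dh = 2.0 * h * dg[:, None]               # through sum of squares
            dz = dh * (h > 0)                        # ReLU gate
            grad_W += xn.T @ dz / len(x)
            grad_b += dz.mean(axis=0)
        self.W -= self.lr * grad_W
        self.b -= self.lr * grad_b

# Toy usage: positives clustered away from the origin, negatives near it.
layer = FFLayer(8, 16)
x_pos = rng.normal(2.0, 0.5, size=(64, 8))
x_neg = rng.normal(0.0, 0.5, size=(64, 8))
for _ in range(200):
    layer.train_step(x_pos, x_neg)

g_pos = (layer.forward(x_pos) ** 2).sum(axis=1).mean()
g_neg = (layer.forward(x_neg) ** 2).sum(axis=1).mean()
print(g_pos > g_neg)  # positives should attain higher mean goodness
```

Because each layer only needs its own activations and a scalar objective, memory scales with one layer at a time rather than the whole network, which is the property the lightweight models in the abstract build on.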
Similar Papers
Scalable Forward-Forward Algorithm
Machine Learning (CS)
Trains computer brains without needing to go backward.
The Forward-Forward Algorithm: Characterizing Training Behavior
Machine Learning (CS)
Teaches computers to learn faster, layer by layer.
Mono-Forward: Backpropagation-Free Algorithm for Efficient Neural Network Training Harnessing Local Errors
Machine Learning (CS)
Teaches computers faster with less memory.