NetworkFF: Unified Layer Optimization in Forward-Only Neural Networks
By: Salar Beigzad
The Forward-Forward algorithm sidesteps backpropagation's memory constraints and biological implausibility by replacing the backward pass with two forward passes, one on positive data and one on negative data. However, conventional implementations suffer from inter-layer isolation: each layer optimizes its own goodness function independently, without leveraging collective learning dynamics. This isolation constrains representational coordination and limits convergence efficiency in deeper architectures. This paper introduces Collaborative Forward-Forward (CFF) learning, which extends the original algorithm with inter-layer cooperation mechanisms that preserve forward-only computation while enabling global context integration. Our framework implements two collaborative paradigms: Fixed CFF (F-CFF), with constant inter-layer coupling, and Adaptive CFF (A-CFF), with learnable collaboration parameters that evolve during training. The collaborative goodness function incorporates weighted contributions from all layers, enabling coordinated feature learning while maintaining memory efficiency and biological plausibility. Comprehensive evaluation on MNIST and Fashion-MNIST demonstrates significant performance improvements over baseline Forward-Forward implementations. These findings establish inter-layer collaboration as a fundamental enhancement to Forward-Forward learning, with immediate applicability to neuromorphic computing architectures and energy-constrained AI systems.
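The abstract does not give the objective in code, but a minimal sketch helps make the idea concrete. The sketch below assumes Hinton's squared-activation goodness and a softplus-style FF loss, and combines per-layer goodness values into a single weighted "collaborative" objective; the `FFLayer` class, the coupling weights `alpha`, and the threshold `theta` are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' code). Assumes Hinton's
# squared-activation goodness and a softplus FF loss; FFLayer, alpha,
# and theta are hypothetical names/values chosen for this example.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    """Forward-only layer; input is length-normalized as in Hinton's FF."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)

    def forward(self, x):
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

def goodness(h):
    # Per-sample goodness: mean squared activation of the layer.
    return h.pow(2).mean(dim=1)

def collaborative_loss(layers, x, sign, alpha, theta=2.0):
    """Weighted sum of per-layer goodness (sign=+1 positive, -1 negative).

    alpha holds the inter-layer coupling weights: a fixed tensor for
    F-CFF, or an nn.Parameter for A-CFF so the weights are learned.
    Detaching each layer's input keeps every gradient local, so the
    shared objective never sends an error signal backward across layers.
    """
    g_total, h = 0.0, x
    for a, layer in zip(alpha, layers):
        h = layer(h.detach())           # block cross-layer gradient flow
        g_total = g_total + a * goodness(h)
    # Push collaborative goodness above theta on positive data, below on negative.
    return F.softplus(-sign * (g_total - theta)).mean()

layers = nn.ModuleList([FFLayer(784, 500), FFLayer(500, 500)])
alpha = torch.tensor([0.5, 0.5])              # F-CFF: constant coupling
# alpha = nn.Parameter(torch.ones(2) / 2)     # A-CFF: learnable coupling

x_pos = torch.rand(32, 784)  # stand-ins for real positive/negative batches
x_neg = torch.rand(32, 784)
loss = (collaborative_loss(layers, x_pos, +1, alpha)
        + collaborative_loss(layers, x_neg, -1, alpha))
loss.backward()  # per-layer local gradients only, thanks to the detach
```

Under this reading, F-CFF and A-CFF differ only in whether `alpha` is fixed or optimized alongside the layer weights, and the detach is what preserves the forward-only property even though all layers share one objective.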