A Study of Hybrid and Evolutionary Metaheuristics for Single Hidden Layer Feedforward Neural Network Architecture
By: Gautam Siddharth Kashyap, Md Tabrez Nafis, Samar Wazir
Potential Business Impact:
Makes computer learning faster and more accurate.
Training Artificial Neural Networks (ANNs) with Stochastic Gradient Descent (SGD) frequently encounters difficulties, including substantial computational expense and the risk of converging to local optima, owing to its dependence on partial derivatives of the loss with respect to the weights. This work therefore investigates Particle Swarm Optimization (PSO) and Genetic Algorithms (GAs) - two population-based Metaheuristic Optimizers (MHOs) - as alternatives to SGD to mitigate these constraints. A hybrid PSO-SGD strategy is developed to improve local search efficiency. The findings indicate that the hybrid PSO-SGD technique decreases the median training MSE by 90 to 95 percent relative to conventional GA and PSO across various network sizes (e.g., from around 0.02 to approximately 0.001 on the Sphere function). Random Mutation Hill Climbing (RMHC) also attains substantial improvements, reducing MSE by roughly 85 to 90 percent compared to GA, while Random Search (RS) consistently exhibits errors exceeding 0.3, signifying subpar performance. These findings underscore that hybrid and evolutionary approaches significantly improve training efficiency and accuracy compared to conventional optimization methods, and imply that the Building Block Hypothesis (BBH) may still be valid, indicating that advantageous weight structures are retained during evolutionary search.
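The hybrid idea - PSO for global exploration, SGD-style gradient steps for local refinement - can be sketched on the Sphere benchmark mentioned in the abstract. This is a minimal illustration, not the paper's implementation: the function name `hybrid_pso_sgd` and all hyperparameters (swarm size, inertia `w`, coefficients `c1`/`c2`, learning rate `lr`) are illustrative choices, and the paper applies the method to neural network weights rather than directly to the Sphere function.

```python
import numpy as np

def sphere(x):
    # Sphere benchmark: f(x) = sum(x_i^2), minimum 0 at the origin.
    return np.sum(x ** 2)

def sphere_grad(x):
    # Analytic gradient of the Sphere function.
    return 2.0 * x

def hybrid_pso_sgd(dim=5, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
                   lr=0.05, grad_steps=3, seed=0):
    """Illustrative hybrid: standard PSO velocity/position updates for
    exploration, plus a few gradient-descent steps polishing the global
    best each iteration. Hyperparameters are assumptions, not the paper's."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([sphere(p) for p in pos])
    g = pbest[np.argmin(pbest_val)].copy()
    g_val = pbest_val.min()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Canonical PSO update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        vals = np.array([sphere(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        if pbest_val.min() < g_val:
            g = pbest[np.argmin(pbest_val)].copy()
            g_val = pbest_val.min()
        # Hybrid step: SGD-style local refinement of the global best.
        for _ in range(grad_steps):
            g = g - lr * sphere_grad(g)
        g_val = sphere(g)
    return g, g_val
```

Running `hybrid_pso_sgd()` drives the Sphere error far below what PSO alone typically reaches in the same budget, which is the qualitative effect the abstract reports for the PSO-SGD hybrid.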
Similar Papers
Sequential, Parallel and Consecutive Hybrid Evolutionary-Swarm Optimization Metaheuristics
Neural and Evolutionary Computing
Boosts problem-solving for tough math puzzles
Constrained Hybrid Metaheuristic Algorithm for Probabilistic Neural Networks Learning
Neural and Evolutionary Computing
Makes computers learn and guess better.
Robustness and Invariance of Hybrid Metaheuristics under Objective Function Transformations
Neural and Evolutionary Computing
Makes computer problem-solvers work better everywhere.