MoWE: A Mixture of Weather Experts
By: Dibyajyoti Chakraborty, Romit Maulik, Peter Harrington, and more
Potential Business Impact:
Combines existing weather model forecasts into a more accurate prediction.
Data-driven weather models have achieved state-of-the-art performance in recent years, yet further progress has begun to plateau. This paper introduces a Mixture of Weather Experts (MoWE) approach as a novel paradigm to overcome these limitations, not by creating a new forecaster, but by optimally combining the outputs of existing models. The MoWE model is trained with significantly lower computational resources than the individual experts. Our model employs a Vision Transformer-based gating network that dynamically learns to weight the contributions of multiple "expert" models at each grid point, conditioned on forecast lead time. This approach creates a synthesized deterministic forecast that is more accurate than any individual component in terms of Root Mean Squared Error (RMSE). Our results demonstrate the effectiveness of this method, achieving up to 10% lower RMSE than the best-performing AI weather model on a 2-day forecast horizon and significantly outperforming both the individual experts and a simple average across experts. This work presents a computationally efficient and scalable strategy to push the state of the art in data-driven weather prediction by making the most of leading high-quality forecast models.
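The core idea of per-grid-point expert weighting conditioned on lead time can be illustrated with a minimal sketch. The code below is an assumption-laden illustration in PyTorch, not the authors' implementation: the module names, patch size, lead-time embedding table, and network depth are all hypothetical choices made only to show how a ViT-style gating network can produce softmax weights over experts at every grid cell and blend their forecasts.

```python
# Minimal sketch of per-grid-point expert mixing with a learned gating network.
# Assumes PyTorch; all hyperparameters and module names are illustrative only.
import torch
import torch.nn as nn


class GatedMixture(nn.Module):
    """Blend E expert forecasts using softmax weights predicted per grid point."""

    def __init__(self, n_experts: int, n_vars: int, d_model: int = 64,
                 patch: int = 8, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.n_experts, self.n_vars, self.patch = n_experts, n_vars, patch
        # ViT-style tokenizer: patch embedding over the stacked expert forecasts.
        self.embed = nn.Conv2d(n_experts * n_vars, d_model, kernel_size=patch, stride=patch)
        # Lead-time conditioning: a learned embedding added to every token (assumed max 64 steps).
        self.lead_embed = nn.Embedding(64, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Project each token back to per-pixel, per-variable expert logits.
        self.head = nn.Linear(d_model, n_experts * n_vars * patch * patch)

    def forward(self, experts: torch.Tensor, lead_time: torch.Tensor) -> torch.Tensor:
        # experts: (B, E, V, H, W) expert forecasts; lead_time: (B,) integer step index.
        B, E, V, H, W = experts.shape
        x = self.embed(experts.reshape(B, E * V, H, W))            # (B, D, H/p, W/p)
        hp, wp = x.shape[-2:]
        tokens = x.flatten(2).transpose(1, 2)                      # (B, N, D)
        tokens = tokens + self.lead_embed(lead_time)[:, None, :]   # condition on lead time
        tokens = self.encoder(tokens)
        logits = self.head(tokens)                                 # (B, N, E*V*p*p)
        logits = logits.view(B, hp, wp, E, V, self.patch, self.patch)
        logits = logits.permute(0, 3, 4, 1, 5, 2, 6).reshape(B, E, V, H, W)
        weights = torch.softmax(logits, dim=1)                     # expert weights per grid point
        return (weights * experts).sum(dim=1)                      # (B, V, H, W) blended forecast


# Usage: blend three expert forecasts of two variables on a 64x64 grid at lead-time step 4.
model = GatedMixture(n_experts=3, n_vars=2)
forecast = model(torch.randn(1, 3, 2, 64, 64), torch.tensor([4]))
```

In a sketch like this, the gating network would be trained against a reanalysis target with an RMSE-style loss, while the expert forecasts themselves stay frozen, which is consistent with the paper's claim that the mixture is far cheaper to train than any individual expert.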
Similar Papers
Knowledge-Guided Adaptive Mixture of Experts for Precipitation Prediction
Artificial Intelligence
Predicts rain better by combining different weather data.
Wavelet Mixture of Experts for Time Series Forecasting
Machine Learning (CS)
Predicts future events more accurately with less data.
A Comprehensive Survey of Mixture-of-Experts: Algorithms, Theory, and Applications
Machine Learning (CS)
Makes smart computer programs use less power.