SuS: Strategy-aware Surprise for Intrinsic Exploration
By: Mark Kashirskiy, Ilya Makarov
We propose Strategy-aware Surprise (SuS), a novel intrinsic motivation framework that uses pre-post prediction mismatch as a novelty signal for exploration in reinforcement learning. Unlike traditional curiosity-driven methods that rely solely on state prediction error, SuS introduces two complementary components: Strategy Stability (SS) and Strategy Surprise (SuS). SS measures consistency in behavioral strategy across temporal steps, while SuS captures unexpected outcomes relative to the agent's current strategy representation. Our combined reward formulation leverages both signals through learned weighting coefficients. We evaluate SuS on mathematical reasoning tasks using large language models, demonstrating significant improvements in both accuracy and solution diversity. Ablation studies confirm that removing either component results in at least 10% performance degradation, validating the synergistic nature of our approach. SuS achieves 17.4% improvement in Pass@1 and 26.4% improvement in Pass@5 compared to baseline methods, while maintaining higher strategy diversity throughout training.
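The combined reward described above can be sketched in a minimal form. This is an illustrative reading, not the paper's implementation: the embedding-based `strategy_stability`, the norm-based `strategy_surprise`, and the fixed scalars `alpha` and `beta` (standing in for the learned weighting coefficients) are all assumptions.

```python
import numpy as np

def strategy_stability(z_prev, z_curr):
    # SS: cosine similarity between consecutive strategy embeddings,
    # one plausible measure of "consistency across temporal steps".
    num = float(np.dot(z_prev, z_curr))
    den = float(np.linalg.norm(z_prev) * np.linalg.norm(z_curr)) + 1e-8
    return num / den

def strategy_surprise(predicted_outcome, actual_outcome):
    # SuS: mismatch between the outcome predicted under the current
    # strategy representation and the observed outcome.
    return float(np.linalg.norm(predicted_outcome - actual_outcome))

def intrinsic_reward(ss, sus, alpha=0.5, beta=0.5):
    # Combined signal; alpha/beta stand in for the learned
    # weighting coefficients (here fixed scalars for illustration).
    return alpha * ss + beta * sus
```

In this reading, a stable strategy that nonetheless produces an unexpected outcome yields a high intrinsic reward, which is the behavior the two complementary signals are meant to capture.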
Similar Papers
Mutual Information Surprise: Rethinking Unexpectedness in Autonomous Systems
Machine Learning (CS)
Helps robots learn from surprises to improve.
SuRe: Surprise-Driven Prioritised Replay for Continual LLM Learning
Machine Learning (CS)
Teaches AI to learn new things without forgetting old ones.