Interpretable Learning Dynamics in Unsupervised Reinforcement Learning

Published: May 6, 2025 | arXiv ID: 2505.06279v1

By: Shashwat Pandey

Potential Business Impact:

Helps robots learn faster by watching what's interesting.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We present an interpretability framework for unsupervised reinforcement learning (URL) agents, aimed at understanding how intrinsic motivation shapes attention, behavior, and representation learning. We analyze five agents (DQN, RND, ICM, PPO, and a Transformer-RND variant) trained on procedurally generated environments, using Grad-CAM, Layer-wise Relevance Propagation (LRP), exploration metrics, and latent space clustering. To capture how agents perceive and adapt over time, we introduce two metrics: attention diversity, which measures the spatial breadth of focus, and attention change rate, which quantifies temporal shifts in attention. Our findings show that curiosity-driven agents display broader, more dynamic attention and exploratory behavior than their extrinsically motivated counterparts. Among them, Transformer-RND combines wide attention, high exploration coverage, and compact, structured latent representations. Our results highlight the influence of architectural inductive biases and training signals on internal agent dynamics. Beyond reward-centric evaluation, the proposed framework offers diagnostic tools to probe perception and abstraction in RL agents, enabling more interpretable and generalizable behavior.
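The abstract describes two attention metrics without giving their exact formulas. As a rough illustration of how such metrics could be computed from saliency maps (e.g., Grad-CAM outputs), here is a minimal sketch: it assumes attention diversity is the spatial entropy of a normalized saliency map and attention change rate is the mean absolute difference between consecutive normalized maps. Both definitions are our own assumptions for illustration, not the paper's.

```python
import numpy as np

def attention_diversity(saliency: np.ndarray) -> float:
    """Spatial entropy of a saliency map (assumed definition).

    Broad, evenly spread attention yields high entropy; a single
    sharp focus yields low entropy.
    """
    p = saliency.flatten().astype(float)
    p = p / p.sum()          # normalize into a spatial distribution
    p = p[p > 0]             # drop zero cells to keep log well-defined
    return float(-(p * np.log(p)).sum())

def attention_change_rate(prev: np.ndarray, curr: np.ndarray) -> float:
    """Mean absolute difference between consecutive normalized maps
    (assumed definition): large values indicate rapidly shifting focus.
    """
    a = prev.astype(float) / prev.sum()
    b = curr.astype(float) / curr.sum()
    return float(np.abs(a - b).mean())
```

Under these assumed definitions, a uniform map maximizes diversity, and an agent whose attention never moves has a change rate of zero, matching the intuition of "spatial breadth" and "temporal shifts" in the abstract.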

Country of Origin
🇺🇸 United States

Page Count
13 pages

Category
Computer Science:
Machine Learning (CS)