Multi-Agent Reinforcement Learning for Sample-Efficient Deep Neural Network Mapping

Published: July 22, 2025 | arXiv ID: 2507.16249v1

By: Srivatsan Krishnan, Jason Jabbour, Dan Zhang, and more

Potential Business Impact:

Makes computer chips run faster and use less power.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Mapping deep neural networks (DNNs) to hardware is critical for optimizing latency, energy consumption, and resource utilization, making it a cornerstone of high-performance accelerator design. Due to the vast and complex mapping space, reinforcement learning (RL) has emerged as a promising approach, but its effectiveness is often limited by sample inefficiency. We present a decentralized multi-agent reinforcement learning (MARL) framework designed to overcome this challenge. By distributing the search across multiple agents, our framework accelerates exploration. To avoid inefficiencies from training multiple agents in parallel, we introduce an agent clustering algorithm that assigns similar mapping parameters to the same agents based on correlation analysis. This enables a decentralized, parallelized learning process that significantly improves sample efficiency. Experimental results show our MARL approach improves sample efficiency by 30-300x over standard single-agent RL, achieving up to 32.61x latency reduction and 16.45x energy-delay product (EDP) reduction under iso-sample conditions.
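To make the clustering idea concrete, here is a minimal sketch of correlation-based parameter grouping. This is not the paper's algorithm; it assumes a hypothetical setup where sampled mapping-parameter values are available as a matrix, a pairwise correlation matrix is computed over them, and parameters whose absolute correlation exceeds a threshold are greedily merged into the same cluster, with one cluster per agent.

```python
import numpy as np

def cluster_parameters(samples, threshold=0.5):
    """Greedily group mapping parameters by pairwise correlation.

    samples: (n_samples, n_params) array of parameter values observed
             while probing the mapping space (hypothetical stand-in
             for real profiling data).
    Returns a list of clusters, each a list of parameter indices;
    each cluster would be handed to one agent.
    """
    corr = np.corrcoef(samples, rowvar=False)  # (n_params, n_params)
    n_params = corr.shape[0]
    assignment = [-1] * n_params  # -1 means not yet clustered
    clusters = []
    for i in range(n_params):
        if assignment[i] != -1:
            continue
        # Start a new cluster seeded by parameter i.
        cluster = [i]
        assignment[i] = len(clusters)
        for j in range(i + 1, n_params):
            # Merge any unassigned parameter strongly correlated with i.
            if assignment[j] == -1 and abs(corr[i, j]) >= threshold:
                cluster.append(j)
                assignment[j] = len(clusters)
        clusters.append(cluster)
    return clusters

# Usage with synthetic data: parameters 0 and 1 are strongly
# correlated, parameter 2 is independent.
rng = np.random.default_rng(0)
base = rng.normal(size=200)
samples = np.column_stack([
    base,
    base + 0.1 * rng.normal(size=200),
    rng.normal(size=200),
])
print(cluster_parameters(samples))  # parameters 0 and 1 share a cluster
```

The greedy merge and the 0.5 threshold are illustrative choices; any standard clustering method (e.g., hierarchical clustering over the correlation matrix) could serve the same role of partitioning the mapping space so agents learn in parallel over decorrelated subspaces.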

Page Count
10 pages

Category
Computer Science:
Machine Learning (CS)