NeuroScalar: A Deep Learning Framework for Fast, Accurate, and In-the-Wild Cycle-Level Performance Prediction
By: Shayne Wadle, Yanxin Zhang, Vikas Singh, and more
Potential Business Impact:
Enables fast, low-overhead evaluation of new processor designs on existing hardware.
The evaluation of new microprocessor designs is constrained by slow, cycle-accurate simulators that rely on unrepresentative benchmark traces. This paper introduces a deep learning framework for high-fidelity, "in-the-wild" simulation on production hardware. Our core contribution is a deep learning model, trained on microarchitecture-independent features, that predicts cycle-level performance for hypothetical processor designs. Because these features do not depend on the host microarchitecture, the model can be deployed on existing silicon to evaluate future hardware. We propose a complete system featuring a lightweight hardware trace collector and a principled sampling strategy to minimize user impact. This system achieves a simulation speed of 5 MIPS on a commodity GPU while imposing only a 0.1% performance overhead. Furthermore, our co-designed Neutrino on-chip accelerator improves simulation performance by 85x over the GPU. We demonstrate that this framework enables accurate performance analysis and large-scale hardware A/B testing on real-world applications.
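To make the core idea concrete, here is a minimal sketch of what a cycle-level predictor of this kind could look like. It is an assumption-laden illustration, not the paper's actual architecture: the feature set (per-instruction, microarchitecture-independent trace features), the GRU encoder, the design-parameter vector (e.g., an encoding of ROB size or issue width), and all dimensions are hypothetical choices made for this example.

```python
# Sketch only: a model that maps microarchitecture-independent trace features
# plus a vector describing a hypothetical processor design to a predicted
# cycle count. All names and sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class CyclePredictor(nn.Module):
    def __init__(self, n_trace_features: int, n_design_params: int, hidden: int = 256):
        super().__init__()
        # Encode a window of per-instruction features (e.g., opcode class,
        # dependence distances, memory reuse distances) with a small GRU.
        self.trace_encoder = nn.GRU(n_trace_features, hidden, batch_first=True)
        # Condition the prediction on the hypothetical design's parameters.
        self.head = nn.Sequential(
            nn.Linear(hidden + n_design_params, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted cycles for the trace window
        )

    def forward(self, trace: torch.Tensor, design: torch.Tensor) -> torch.Tensor:
        # trace: (batch, window_len, n_trace_features)
        # design: (batch, n_design_params)
        _, h = self.trace_encoder(trace)
        return self.head(torch.cat([h[-1], design], dim=-1)).squeeze(-1)

# Example: predict cycles for a 64-instruction window under one candidate design.
model = CyclePredictor(n_trace_features=16, n_design_params=8)
trace = torch.randn(1, 64, 16)   # features collected on existing silicon
design = torch.randn(1, 8)       # encoding of the future design under evaluation
predicted_cycles = model(trace, design)
```

The key property this sketch tries to capture is the one the abstract emphasizes: because the inputs describe the program's behavior rather than the host machine, the same traces collected on today's silicon can be scored against many candidate designs simply by swapping the `design` vector, which is what makes large-scale hardware A/B testing feasible.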
Similar Papers
Bare-Metal RISC-V + NVDLA SoC for Efficient Deep Learning Inference
Hardware Architecture
Enables efficient deep learning inference on embedded devices.
From Principles to Practice: A Systematic Study of LLM Serving on Multi-core NPUs
Hardware Architecture
Improves LLM serving performance on multi-core NPUs.
CHIPSIM: A Co-Simulation Framework for Deep Learning on Chiplet-Based Systems
Hardware Architecture
Enables faster, more accurate simulation of chiplet-based systems.