AFarePart: Accuracy-aware Fault-resilient Partitioner for DNN Edge Accelerators
By: Mukta Debnath, Krishnendu Guha, Debasri Saha, and more
Potential Business Impact:
Makes AI work even when parts break.
Deep Neural Networks (DNNs) are increasingly deployed across distributed and resource-constrained platforms, such as System-on-Chip (SoC) accelerators and edge-cloud systems. DNNs are often partitioned and executed across heterogeneous processing units to optimize latency and energy. However, the reliability of these partitioned models under hardware faults and communication errors remains a critical yet underexplored concern, especially in safety-critical applications. In this paper, we propose an accuracy-aware, fault-resilient DNN partitioning framework that performs multi-objective optimization with NSGA-II, introducing accuracy degradation under fault conditions as a core metric alongside energy and latency. Our framework performs runtime fault injection during optimization and uses a feedback loop to prioritize fault-tolerant partitionings. We evaluate our approach on benchmark CNNs, including AlexNet, SqueezeNet, and ResNet18, on hardware accelerators, and demonstrate up to 27.7% improvement in fault tolerance with minimal performance overhead. Our results highlight the importance of incorporating resilience into DNN partitioning, paving the way for robust AI inference in error-prone environments.
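To illustrate the kind of multi-objective search the abstract describes, the sketch below evaluates candidate split points of a small layer list on three minimized objectives (latency, edge energy, and accuracy drop under simulated fault injection) and extracts the Pareto front via non-dominated filtering. All numbers, the fault model, and the helper names are illustrative assumptions, not the paper's actual cost model or NSGA-II implementation.

```python
import random

# Hypothetical per-layer costs for a small CNN (illustrative numbers only).
LAYER_LATENCY = [2.0, 3.5, 1.5, 4.0, 2.5]        # ms per layer on the edge device
LAYER_ENERGY  = [1.0, 2.0, 0.8, 2.5, 1.2]        # mJ per layer on the edge device
ACT_TRANSFER  = [8.0, 6.0, 4.0, 2.0, 1.0, 0.0]   # ms to ship activations at each split
CLOUD_SPEEDUP = 3.0                              # assumed remote-accelerator speedup

def evaluate(split, rng):
    """Objectives for partitioning at `split`: layers [0, split) run on the
    edge, the rest remotely. Returns (latency, energy, acc_drop); all three
    are minimized. Fault injection is a toy Monte-Carlo model in which more
    edge-resident layers expose more state to (edge-side) faults."""
    edge_lat  = sum(LAYER_LATENCY[:split])
    cloud_lat = sum(LAYER_LATENCY[split:]) / CLOUD_SPEEDUP
    latency   = edge_lat + cloud_lat + ACT_TRANSFER[split]
    energy    = sum(LAYER_ENERGY[:split])          # edge-side energy only
    drops     = [rng.random() * 0.1 * split for _ in range(20)]
    acc_drop  = sum(drops) / len(drops)
    return (latency, energy, acc_drop)

def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (all <=, one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

rng = random.Random(0)
objectives = [evaluate(s, rng) for s in range(len(LAYER_LATENCY) + 1)]
front = pareto_front(objectives)
```

An NSGA-II run replaces the exhaustive enumeration above with selection, crossover, and mutation over many generations, but the dominance test and front extraction are the same building blocks.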
Similar Papers
Adaptive AI Model Partitioning over 5G Networks
Networking and Internet Architecture
Lets phones run smart apps without draining battery.
Analysis of Single Event Induced Bit Faults in a Deep Neural Network Accelerator Pipeline
Hardware Architecture
Protects AI chips from radiation damage.
Joint Partitioning and Placement of Foundation Models for Real-Time Edge AI
Distributed, Parallel, and Cluster Computing
Lets AI work better on phones and other devices.