Score: 1

AFarePart: Accuracy-aware Fault-resilient Partitioner for DNN Edge Accelerators

Published: December 8, 2025 | arXiv ID: 2512.07449v1

By: Mukta Debnath, Krishnendu Guha, Debasri Saha and more

Potential Business Impact:

Makes AI work even when parts break.

Business Areas:
Intelligent Systems, Artificial Intelligence, Data and Analytics, Science and Engineering

Deep Neural Networks (DNNs) are increasingly deployed across distributed and resource-constrained platforms, such as System-on-Chip (SoC) accelerators and edge-cloud systems. DNNs are often partitioned and executed across heterogeneous processing units to optimize latency and energy. However, the reliability of these partitioned models under hardware faults and communication errors remains a critical yet underexplored topic, especially in safety-critical applications. In this paper, we propose an accuracy-aware, fault-resilient DNN partitioning framework that targets multi-objective optimization using NSGA-II, where accuracy degradation under fault conditions is introduced as a core metric alongside energy and latency. Our framework performs runtime fault injection during optimization and uses a feedback loop to prioritize fault-tolerant partitions. We evaluate our approach on benchmark CNNs, including AlexNet, SqueezeNet, and ResNet18, on hardware accelerators, and demonstrate up to 27.7% improvement in fault tolerance with minimal additional performance overhead. Our results highlight the importance of incorporating resilience into DNN partitioning, paving the way for robust AI inference in error-prone environments.
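To make the optimization setup concrete, the sketch below frames layer-to-unit partitioning as a three-objective NSGA-II search over latency, energy, and accuracy degradation under faults, as the abstract describes. It is a minimal illustration assuming the pymoo library; the cost tables, per-unit fault rates, layer vulnerability scores, and the `PartitionProblem` class are hypothetical placeholders standing in for the paper's actual estimators and runtime fault-injection loop.

```python
# Minimal sketch: fault-aware DNN partitioning as a 3-objective NSGA-II problem.
# All cost/fault models below are placeholders, not the paper's estimators.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

N_LAYERS = 8      # e.g. AlexNet-scale layer count (assumed)
N_UNITS = 3       # heterogeneous processing units, e.g. NPU/GPU/CPU (assumed)

rng = np.random.default_rng(0)
LATENCY = rng.uniform(1.0, 10.0, (N_LAYERS, N_UNITS))   # ms per layer per unit (placeholder)
ENERGY = rng.uniform(0.5, 5.0, (N_LAYERS, N_UNITS))     # mJ per layer per unit (placeholder)
FAULT_RATE = np.array([0.01, 0.05, 0.002])              # per-unit fault probability (placeholder)
VULNERABILITY = rng.uniform(0.1, 1.0, N_LAYERS)         # accuracy sensitivity per layer (placeholder)
COMM_COST = 0.8   # latency penalty when consecutive layers cross units (placeholder)


class PartitionProblem(ElementwiseProblem):
    """Decision vector: one real-coded unit index per layer, floored on evaluation."""

    def __init__(self):
        super().__init__(n_var=N_LAYERS, n_obj=3, xl=0.0, xu=N_UNITS - 1e-6)

    def _evaluate(self, x, out, *args, **kwargs):
        mapping = np.floor(x).astype(int)        # layer -> processing unit
        idx = np.arange(N_LAYERS)

        latency = LATENCY[idx, mapping].sum()
        # Communication penalty whenever adjacent layers are placed on different units.
        latency += COMM_COST * np.count_nonzero(np.diff(mapping) != 0)
        energy = ENERGY[idx, mapping].sum()

        # Surrogate for accuracy degradation under injected faults:
        # expected drop = sum over layers of (unit fault rate * layer vulnerability).
        acc_drop = float(np.sum(FAULT_RATE[mapping] * VULNERABILITY))

        out["F"] = [latency, energy, acc_drop]


if __name__ == "__main__":
    res = minimize(PartitionProblem(), NSGA2(pop_size=40), ("n_gen", 60),
                   seed=1, verbose=False)
    # res.F is the Pareto front over (latency, energy, accuracy drop);
    # a fault-tolerance-first policy picks the point with the smallest third objective.
    i = int(np.argmin(res.F[:, 2]))
    print("Most fault-tolerant mapping:", np.floor(res.X[i]).astype(int))
    print("Objectives (latency, energy, acc_drop):", res.F[i])
```

In the paper's framework, the third objective is obtained by injecting faults at runtime and measuring the resulting accuracy loss, which is fed back into the search; the analytic surrogate above merely stands in for that feedback loop.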

Country of Origin
🇮🇪 🇮🇳 Ireland, India

Page Count
6 pages

Category
Computer Science:
Performance