An Empirical Study of the Realism of Mutants in Deep Learning
By: Zaheed Ahmed, Philip Makedonski, Jens Grabowski
Mutation analysis is a well-established technique for assessing test quality in traditional software development: artificial faults are injected into programs, and a test suite's ability to detect them is measured. Its application to deep learning (DL) has expanded beyond classical testing to support tasks such as fault localization, repair, data generation, and model robustness evaluation. The core assumption is that mutants behave similarly to real faults, an assumption well established for traditional software systems but largely unverified for DL. This study presents the first empirical comparison of pre-training and post-training mutation approaches in DL with respect to realism. We introduce a statistical framework to quantify their coupling strength and behavioral similarity to real faults using publicly available bug datasets: CleanML, DeepFD, DeepLocalize, and defect4ML. Mutants are generated using state-of-the-art tools representing both approaches. Results show that pre-training mutants exhibit consistently stronger coupling and higher behavioral similarity to real faults than post-training mutants, indicating greater realism. However, the substantial computational cost of pre-training mutation underscores the need for more effective post-training operators that match or exceed the realism demonstrated by pre-training mutants.
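The coupling and behavioral-similarity notions in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual statistical framework: the function names, the Jaccard-based similarity metric, and the subset-based coupling check are assumptions made for the sake of the example. The underlying idea is to compare a mutant and a real fault by the sets of tests each one causes to fail.

```python
# Illustrative sketch (hypothetical, not the paper's exact framework):
# compare a mutant's test-level behavior with a real fault's by looking
# at which tests each variant fails.

def failing_set(outcomes):
    """Indices of tests that fail (False) for a given model variant."""
    return {i for i, passed in enumerate(outcomes) if not passed}

def behavioral_similarity(mutant_outcomes, fault_outcomes):
    """Jaccard similarity between the failing-test sets of a mutant and
    a real fault (1.0 = identical failure behavior, 0.0 = disjoint)."""
    m, f = failing_set(mutant_outcomes), failing_set(fault_outcomes)
    if not m and not f:
        return 1.0  # neither variant fails any test
    return len(m & f) / len(m | f)

def is_coupled(mutant_outcomes, fault_outcomes):
    """One simple coupling notion: every test that detects (kills) the
    mutant also detects the real fault, and the mutant is detectable."""
    m, f = failing_set(mutant_outcomes), failing_set(fault_outcomes)
    return bool(m) and m <= f

# Example outcome vectors: True = test passed, False = test failed
fault  = [True, False, False, True]
mutant = [True, False, True,  True]
print(behavioral_similarity(mutant, fault))  # 0.5
print(is_coupled(mutant, fault))             # True
```

Under this sketch, a "realistic" mutation operator would tend to produce mutants with high similarity and coupling scores against the real faults in benchmarks such as DeepFD or defect4ML.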