Synthetic Data Generation for Emotional Depth Faces: Optimizing Conditional DCGANs via Genetic Algorithms in the Latent Space and Stabilizing Training with Knowledge Distillation
By: Seyed Muhammad Hossein Mousavi, S. Younes Mirinezhad
Potential Business Impact:
Creates fake faces to help computers read emotions better.
Affective computing faces a major challenge: the lack of high-quality, diverse depth facial datasets for recognizing subtle emotional expressions. We propose a framework for synthetic depth face generation using an optimized conditional DCGAN with Knowledge Distillation (EMA teacher models) to stabilize training, improve image quality, and prevent mode collapse. We also apply Genetic Algorithms to evolve the GAN's latent vectors based on image statistics, boosting diversity and visual quality for target emotions. The approach outperforms GAN, VAE, GMM, and KDE baselines in both diversity and quality. For classification, we extract and concatenate LBP, HOG, Sobel edge, and intensity histogram features, achieving 94% and 96% accuracy with XGBoost. Evaluation with FID, IS, SSIM, and PSNR shows consistent improvement over state-of-the-art methods.
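The GA-over-latent-space step lends itself to a short sketch. The abstract does not specify the GA operators or the exact image statistics used as fitness, so the following is a minimal sketch under stated assumptions: an elitist GA with uniform crossover and Gaussian mutation, a contrast-plus-sharpness fitness standing in for the paper's image-statistics score, and a dummy sigmoid generator (`generate`) in place of the trained conditional DCGAN.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 100

# Stand-in for the trained conditional DCGAN generator: maps a latent vector
# to a 64x64 depth image in [0, 1]. The emotion label is a conditioning
# placeholder (unused in this dummy); replace with the real model's forward pass.
W = rng.standard_normal((LATENT_DIM, 64 * 64)) * 0.05

def generate(z, emotion_id=0):
    img = 1.0 / (1.0 + np.exp(-(z @ W)))  # sigmoid squashing to [0, 1]
    return img.reshape(64, 64)

def fitness(z):
    """Image-statistics fitness (hypothetical choice): reward contrast and
    edge sharpness of the generated depth image."""
    img = generate(z)
    contrast = img.std()
    gy, gx = np.gradient(img)
    sharpness = np.hypot(gx, gy).mean()
    return contrast + sharpness

def evolve_latents(pop_size=32, generations=20, elite=4, sigma=0.1):
    """Simple GA over latent vectors: elitism + uniform crossover + Gaussian mutation."""
    pop = rng.standard_normal((pop_size, LATENT_DIM))
    for _ in range(generations):
        scores = np.array([fitness(z) for z in pop])
        order = np.argsort(scores)[::-1]            # best individuals first
        parents = pop[order[:elite]]
        children = []
        while len(children) < pop_size - elite:
            a, b = parents[rng.integers(elite, size=2)]
            mask = rng.random(LATENT_DIM) < 0.5     # uniform crossover
            child = np.where(mask, a, b)
            child += sigma * rng.standard_normal(LATENT_DIM)  # mutation
            children.append(child)
        pop = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(z) for z in pop])
    return pop[np.argmax(scores)]

best_z = evolve_latents()
best_img = generate(best_z)
print("best fitness:", fitness(best_z))
```

In the full framework, `generate` would be the trained conditional DCGAN conditioned on the target emotion, and the evolved latent vectors would produce the synthetic depth images later used for feature extraction and XGBoost classification.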
Similar Papers
Emotion Detection Using Conditional Generative Adversarial Networks (cGAN): A Deep Learning Approach
Machine Learning (CS)
Computers understand your feelings from voice, text, and face.
Identity-Preserving Aging and De-Aging of Faces in the StyleGAN Latent Space
CV and Pattern Recognition
Changes face age while keeping the person's look.
A Comparative Study on Synthetic Facial Data Generation Techniques for Face Recognition
CV and Pattern Recognition
Makes fake faces to help computers recognize real ones.