BLANKET: Anonymizing Faces in Infant Video Recordings
By: Ditmar Hadera, Jan Cech, Miroslav Purkrabek, and more
Potential Business Impact:
Hides baby faces in videos while keeping their expressions.
Ensuring the ethical use of video data involving human subjects, particularly infants, requires robust anonymization methods. We propose BLANKET (Baby-face Landmark-preserving ANonymization with Keypoint dEtection consisTency), a novel approach designed to anonymize infant faces in video recordings while preserving essential facial attributes. Our method comprises two stages. First, a new random face, compatible with the original identity, is generated via inpainting with a diffusion model. Second, the new identity is seamlessly incorporated into each video frame through temporally consistent face swapping with authentic expression transfer. The method is evaluated on a dataset of short video recordings of babies and compared with the popular anonymization method DeepPrivacy2. Key metrics assessed include the level of de-identification, preservation of facial attributes, impact on human pose estimation (as an example of a downstream task), and presence of artifacts. Both methods alter the identity, and our method outperforms DeepPrivacy2 in all other respects. The code is available as an easy-to-use anonymization demo at https://github.com/ctu-vras/blanket-infant-face-anonym.
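The first stage described above, generating a replacement identity by inpainting the face region with a diffusion model, can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a generic Stable Diffusion inpainting model from the diffusers library, and the input files frame.png and face_mask.png are hypothetical. The actual BLANKET pipeline additionally enforces keypoint-detection consistency and, in its second stage, propagates the new identity to every frame via temporally consistent face swapping with expression transfer, which is not shown here.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load a general-purpose inpainting diffusion model (a stand-in; the paper does
# not specify which diffusion backbone BLANKET uses).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# A reference frame and a binary mask covering the infant's face region.
# "frame.png" and "face_mask.png" are hypothetical input files.
frame = Image.open("frame.png").convert("RGB").resize((512, 512))
face_mask = Image.open("face_mask.png").convert("RGB").resize((512, 512))

# Inpaint the masked face region to synthesize a new, random identity that is
# plausible in the original context (pose, lighting, skin tone).
anonymized = pipe(
    prompt="a photo of an infant's face",
    image=frame,
    mask_image=face_mask,
    num_inference_steps=50,
).images[0]

anonymized.save("anonymized_reference.png")
```

For the end-to-end video anonymization demo, see the repository linked above.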
Similar Papers
Controllable Localized Face Anonymization Via Diffusion Inpainting
CV and Pattern Recognition
Hides faces in pictures while keeping them useful.
NullFace: Training-Free Localized Face Anonymization
CV and Pattern Recognition
Keeps faces private while showing important details.
FaceCloak: Learning to Protect Face Templates
CV and Pattern Recognition
Hides faces so computers can't copy them.