Emergent AI Surveillance: Overlearned Person Re-Identification and Its Mitigation in Law Enforcement Context
By: An Thi Nguyen, Radina Stoykova, Eric Arazo
Potential Business Impact:
AI can spot people even when it was not trained to.
Generic instance search models can dramatically reduce the manual effort required to analyze vast amounts of surveillance footage during criminal investigations by retrieving specific objects of interest to law enforcement. However, our research reveals an unintended emergent capability: through overlearning, these models can single out specific individuals even when trained on datasets without human subjects. This capability raises concerns regarding the identification and profiling of individuals based on their personal data, while there is currently no clear standard on how de-identification can be achieved. We evaluate two technical safeguards to curtail a model's person re-identification capacity: index exclusion and confusion loss. Our experiments demonstrate that combining these approaches can reduce person re-identification accuracy to below 2% while maintaining 82% of retrieval performance for non-person objects. However, we identify critical vulnerabilities in these mitigations, including potential circumvention using partial person images. These findings highlight urgent regulatory questions at the intersection of AI governance and data protection: How should we classify and regulate systems with emergent identification capabilities? And what technical standards should be required to prevent identification capabilities from developing in seemingly benign applications?
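The abstract does not specify the exact form of the confusion loss used in the paper, but a common formulation of this idea penalizes an identity classifier whose predictions deviate from the uniform distribution, so the shared features carry as little identity information as possible. The sketch below is a minimal NumPy illustration of that general technique, not the authors' implementation; the function names and the logits shape are assumptions for the example.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over identity-class logits (shape: batch x classes)."""
    z = logits - logits.max(axis=1, keepdims=True)  # stabilize exponentials
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def confusion_loss(logits):
    """Cross-entropy between predicted identity distribution and the
    uniform distribution. It is minimized (value log K for K classes)
    exactly when the classifier cannot tell identities apart, which is
    the goal when suppressing re-identification capability."""
    p = softmax(logits)
    k = logits.shape[1]
    return -np.mean(np.sum((1.0 / k) * np.log(p + 1e-12), axis=1))
```

Logits that are already uninformative (all zeros) reach the minimum value log K, while confidently peaked logits incur a higher penalty, so minimizing this term during training pushes identity predictions toward chance level.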
Similar Papers
Person detection and re-identification in open-world settings of retail stores and public spaces
CV and Pattern Recognition
Finds the same person in different videos.
Debiased Dual-Invariant Defense for Adversarially Robust Person Re-Identification
CV and Pattern Recognition
Keeps cameras from being tricked by fake people.
TeLL Me what you cant see
CV and Pattern Recognition
Makes old photos look new for police.