De-identifying Video Data Demonstration
While this demonstration provides examples of how realistic de-identified faces may appear, it also functions as a CAPTCHA, a program that attempts to tell humans and computers apart. This puts de-identified face images to a different use than their original purpose of data sharing with privacy protection.
As a CAPTCHA, our goal in this demonstration is to characterize technically, in terms of eigenvectors and PCA, the circumstances in which humans perform better than existing face recognition software. We believe doing so may yield insights for improving face recognition software. Such insights would still not thwart the privacy protection provided.
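The eigenvector/PCA characterization mentioned above is in the spirit of the classic eigenface approach. The following is a minimal sketch of that idea, assuming vectorized face images; the random data, gallery size, and nearest-neighbor matching here are illustrative stand-ins, not the project's actual pipeline.

```python
import numpy as np

# Illustrative eigenface (PCA) sketch. Random vectors stand in for
# vectorized grayscale face images from a gallery.
rng = np.random.default_rng(0)
n_faces, n_pixels = 50, 64 * 64
faces = rng.normal(size=(n_faces, n_pixels))

# Center the gallery and compute principal components (eigenfaces).
mean_face = faces.mean(axis=0)
centered = faces - mean_face
# Rows of Vt are the eigenfaces, ordered by explained variance.
_, singular_values, Vt = np.linalg.svd(centered, full_matrices=False)

# Project each gallery face onto the top-k eigenfaces.
k = 10
eigenfaces = Vt[:k]                 # shape (k, n_pixels)
weights = centered @ eigenfaces.T   # shape (n_faces, k)

# A simple recognizer matches a probe to the gallery face whose
# eigenface weights are nearest in Euclidean distance.
probe = faces[7] + rng.normal(scale=0.1, size=n_pixels)
probe_w = (probe - mean_face) @ eigenfaces.T
match = int(np.argmin(np.linalg.norm(weights - probe_w, axis=1)))
print(match)
```

Characterizing where humans beat software then amounts to studying how such a distance metric in eigenface space behaves on particular sets of faces.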
To demonstrate our claim, the images in the demonstration are not just a randomly selected set. Instead, they were chosen algorithmically as an optimal set of face images that we believe humans would be good at recognizing and face recognition software would be very bad at recognizing. While several proofs are yet to be made, this demonstration does seem to bear this out so far.
If so, a corollary to this claim would be that careful selection of a training set and a gallery set can seriously inflate recognition results over what would be realized in the general case. This is achieved by inverting the criteria in the face-set selection algorithm mentioned above: that is, we could alternatively have selected an optimal set of faces on which face recognition software would do very well and humans very poorly.
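The inverted selection criterion could be sketched as follows. The scoring function below is a hypothetical stand-in for a real recognizer's match confidence; the project's actual selection algorithm is not specified here.

```python
import random

def recognizer_confidence(image_id):
    # Hypothetical stand-in for a face recognizer's match score in [0, 1];
    # deterministic so the sketch is reproducible.
    return random.Random(image_id).random()

def select_gallery(image_ids, n, hard_for_software=True):
    """Pick the n images the recognizer scores lowest (hard for software,
    as in the demonstration) or highest (inflates reported accuracy)."""
    ranked = sorted(image_ids, key=recognizer_confidence,
                    reverse=not hard_for_software)
    return ranked[:n]

candidates = list(range(100))
hard = select_gallery(candidates, 10, hard_for_software=True)
easy = select_gallery(candidates, 10, hard_for_software=False)
# Inverting the criterion flips which extreme of the ranking is chosen,
# so the two galleries share no images.
assert set(hard).isdisjoint(easy)
```

The same ranking machinery yields either a gallery that embarrasses the software or one that flatters it, which is the corollary's point about biased evaluation sets.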