Undermining Facial Recognition AI

Artificial intelligence (AI) systems learn in a way that is similar to the way humans learn. You may not remember the face of every person you meet, but if you meet them on several occasions, you probably will. It is possible, however, that the person you have learned to recognize may change their hairstyle or grow a beard, which may be enough to make you lose confidence in your identification. It’s the same with facial recognition systems. The difference is that a neural network, the kind of AI behind these systems, does not look at a picture or a face holistically but analyzes it at the pixel level. Instead of a top-down approach, it uses a bottom-up approach. In the end, it may reach the same conclusion as a human, but not always.
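To see how literally “pixel level” should be taken, here is a minimal sketch of what a classifier actually receives. It assumes PyTorch and torchvision are installed and that a file called panda.jpg exists; both are illustrative choices, not tied to any particular recognition system.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# A pretrained ImageNet classifier stands in for the recognition system.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# To the network, "panda.jpg" (a hypothetical file) is nothing but a grid of numbers.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),   # each pixel becomes a float between 0 and 1
])
pixels = preprocess(Image.open("panda.jpg")).unsqueeze(0)  # shape: (1, 3, 224, 224)

# The "identification" is simply the class with the highest score over those numbers.
with torch.no_grad():
    probabilities = torch.softmax(model(pixels), dim=1)
confidence, class_id = probabilities.max(dim=1)
print(f"class {class_id.item()} with {confidence.item():.1%} confidence")
```

Nothing in this pipeline ever “sees” a face or a panda as a whole; the conclusion is assembled bottom-up from those numbers.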

In a study of this problem from the University of California, Berkeley, researchers Daniel Geng and Rishi Veerapaneni explain the many ways AI can be tricked into arriving at erroneous conclusions. For example, few, if any, humans would have a problem identifying this picture as a panda. In fact, my guess would be that nearly 100% of people would arrive at this conclusion.

However, AI is not so sure. In fact, when shown the picture, it had only 57% confidence that it was looking at a panda. The researchers then manipulated the image by adding “noise” in the form of small pixel changes.

Noise

The resulting manipulated image, shown below, was still identified as a panda by humans, but the AI concluded, with 99% confidence, that it was a picture of a gibbon.

Now, imagine that the panda is identified as a criminal by using facial recognition AI. (Okay, this takes some imagining, but bear with me.) Imagine also that Mr. Panda is aware of this but, because he understands how facial recognition systems work, is able to manipulate the image to fool the AI into identifying Mr. Gibbon as the true criminal. In fact, why stop there? Why not have the AI identify him as the Queen of England? In this endeavor, the panda is using the technique of constructing adversarial examples. He is, in fact, taking advantage of deep flaws that underlie all AI recognition systems. The researchers conclude that “systems that incorporate deep learning models actually have a very high security risk.”
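The standard way to build such an adversarial example is the fast gradient sign method (FGSM): compute how the classifier’s error changes with respect to every pixel, then nudge each pixel a tiny step in the direction that increases that error. Below is a minimal sketch in PyTorch; the pretrained classifier, the `pixels` tensor, and the use of ImageNet class 388 (giant panda) are illustrative assumptions, not the researchers’ actual code.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Same kind of pretrained classifier as in the earlier sketch.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

def fgsm_attack(pixels, true_label, epsilon=0.007):
    """Fast gradient sign method: add noise, invisible to humans, that raises the loss."""
    pixels = pixels.clone().requires_grad_(True)
    loss = F.cross_entropy(model(pixels), torch.tensor([true_label]))
    loss.backward()
    # Step every pixel slightly in whichever direction hurts the classifier most.
    noise = epsilon * pixels.grad.sign()
    return (pixels + noise).clamp(0, 1).detach()

# `pixels` is the (1, 3, 224, 224) panda tensor from before; 388 is ImageNet's giant panda class.
# adversarial = fgsm_attack(pixels, true_label=388)
# To a human the result still looks like a panda; the classifier now sees something else entirely.
```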

But doesn’t this mean that the human brain, which is basically a neural network computer, can be just as easily manipulated? The answer is yes. So before you laugh at AI identifying a panda as a gibbon, let me ask you to identify this image.

Okay, so it’s a parrot, right? Actually, you’d be wrong if you thought so. It’s a woman painted to look like a parrot. Still can’t see it? That’s because your brain doesn’t want to see it, and it’s very hard to change its conclusion, so I put together this explanation.

In other words, maybe we humans can treat AI with a little more sympathy.

The parrot-woman trick was intentionally constructed. Other tricks can be used simply to confuse AI algorithms without pinning the blame on one particular image, as Mr. Panda did to Mr. Gibbon. It is relatively easy to confuse AI systems with noise, but it is also possible to steer an AI into misidentifying one object as another specific object. Here, the AI has been almost completely fooled into thinking it sees the number six.
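The “make it see a six” variant is a targeted attack: instead of merely pushing the prediction away from the right answer, the noise is optimized toward a class the attacker chooses. A rough sketch of that idea, again in PyTorch and again purely illustrative (the target class 368, gibbon, is just an example):

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

def targeted_attack(pixels, target_class, steps=50, lr=0.01):
    """Iteratively nudge the pixels until the model's top prediction becomes `target_class`."""
    adv = pixels.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([adv], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Minimizing this loss *raises* the score of the class we want the model to see.
        loss = F.cross_entropy(model(adv), torch.tensor([target_class]))
        loss.backward()
        optimizer.step()
        adv.data.clamp_(0, 1)   # keep the result a valid image
    return adv.detach()

# e.g. adversarial = targeted_attack(pixels, target_class=368)  # 368 = gibbon in ImageNet
```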

This brings us to Clearview.ai. Clearview.ai is used by law enforcement and advertises itself as a “revolutionary, web-based identity platform to help generate high-quality investigative leads.” It claims to have a database of over 3 billion images gleaned from “public-only web sources, including news media, mugshot websites, public social media, and many other open sources.” So the odds are good that your image is in its database. There is no doubt that such a service has its place. Who wouldn’t want to identify a killer or a terrorist? However, given the information presented above, wouldn’t it be possible for you to be misidentified as a criminal? Wouldn’t it be possible for someone to target you and make you look guilty of a crime you did not commit? Sure. That’s why other services have arisen that will help you hide your online identity.

Among the best-known image cloakers is a free service called Fawkes. According to its developers, Fawkes “poisons models that try to learn what you look like.” The Fawkes software subtly alters the pixels of any photos of themselves users upload to the internet, in the manner described previously, tricking any facial recognition algorithms that try to identify them. To humans, the photos appear unaltered.
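Fawkes itself is open source, but the core idea fits in a few lines: perturb your photo just enough that a face-embedding model maps it far away from your real “faceprint,” while capping how much any pixel may change so the photo still looks like you. The sketch below is an illustration of that idea, not the actual Fawkes implementation; the stand-in feature extractor and the “budget” value are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# Stand-in "faceprint" extractor: a generic pretrained network with its classifier head removed.
# (Fawkes targets real face-embedding models; this is only for illustration.)
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
feature_extractor = nn.Sequential(*list(backbone.children())[:-1]).eval()

def cloak(photo, budget=0.03, steps=100, lr=0.005):
    """Push the photo's embedding away from the original while keeping pixel changes tiny."""
    original_embedding = feature_extractor(photo).detach()
    cloaked = photo.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([cloaked], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Maximize the distance between the cloaked and the original embeddings...
        loss = -F.mse_loss(feature_extractor(cloaked), original_embedding)
        loss.backward()
        optimizer.step()
        # ...but never let any pixel drift more than `budget` from its original value.
        cloaked.data = photo + (cloaked.data - photo).clamp(-budget, budget)
        cloaked.data.clamp_(0, 1)
    return cloaked.detach()
```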

Fawkes claims to be “100% effective against state-of-the-art facial recognition models (Microsoft Azure Face API, Amazon Rekognition, and Face++).” But, in a recent post, they alerted users that Microsoft Azure’s face recognition algorithm has been altered to circumvent one version of Fawkes cloaking. In turn, Fawkes circumvented Azure’s circumvention. So, it appears that the battle for your privacy has intensified.

It is important to keep in mind that facial recognition systems don’t think. They reach emotionless conclusions based on the preponderance of evidence. They lack the life experience that enables humans to adjust their conclusions in ways that are beyond the scope of AI. They may not realize that humans wear hats or change their hair color. Humans may understand when someone tries to make a funny face, but AI has no sense of humor.

So, with better training, by exposing the AI to more variations of the same human face, facial recognition systems may achieve higher success rates, but they will never be perfect, just as humans are not perfect. And as long as humans hold the edge in life experience, they will always be one step ahead of facial recognition systems and will be able to undermine them.
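That “better training” is, in practice, data augmentation: the same face is shown to the network many times with random crops, flips, and lighting changes standing in for hats, haircuts, and funny expressions. A minimal torchvision sketch, with the folder of face photos purely hypothetical:

```python
from torchvision import datasets, transforms

# Random crops, flips, and color shifts stand in for hats, haircuts, and lighting changes.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
    transforms.ToTensor(),
])

# Hypothetical folder of face photos, one subfolder per person.
train_set = datasets.ImageFolder("faces/train", transform=augment)
```

Augmentation like this narrows the gap, but it only covers the variations its designers thought to include, which is exactly the edge in life experience that humans keep.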
