Deepfake Videos: When All News Becomes Fake News

Most people can tell whether a video is fake. It may be a very good fake, but something about it just looks, well, unnatural. We may not be able to say why a fake video leaves us with this feeling; we simply know it from our interactions with real people. If we comprehensively analyzed why people judged a video to be fake, we would often find it came down to something as simple as a strange blink or an unusual movement of the head.

But what if we could eliminate all uncharacteristic movements from a fake video by directly linking it to every minor movement performed by an actual human subject? This can be done with a neural net that is trained to learn the natural, predictable, and idiosyncratic movement patterns of a targeted person from video. The training video teaches the neural net how the targeted person normally behaves. Once the neural net has learned this, an actor makes a separate video, called the 'source video', which is fed to the neural net. The neural net translates the actions in the source video into a fake, or 'target', video. In this way, an actor in a source video can make the target do whatever the actor wants. Having already been trained on the target's natural movements, the neural net 'predicts' how the targeted person would act and builds the fake video accordingly. Here is a visual depiction of the process from the paper Deep Video Portraits by Kim et al., which shows how an actor transfers his movements to a fake video of Barack Obama.

[Figure: an actor's source frames (upper row) driving synthesized frames of Barack Obama (lower row), from Deep Video Portraits by Kim et al.]

To keep this from being too technical: information from the source video (upper row) is interpreted by the neural net to produce the target video (lower row). In this way, the actor makes the target act however he wants.

Take particular note of how the background stays consistent with the realistic head movements. This is a major accomplishment for these fake videos.
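To make the train-then-translate idea concrete, here is a minimal sketch in PyTorch of the shared-encoder, two-decoder scheme used by many face-swap deepfakes. To be clear, this is not the Kim et al. pipeline (which translates rendered face models into video), and the tensors here are random stand-ins for cropped face frames, not real data.

```python
# Toy sketch of the shared-encoder / two-decoder face-swap idea.
# NOT the Deep Video Portraits method; sizes and data are stand-ins.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened 64x64 RGB face crop (toy size)

# One encoder shared by both people, one decoder per person.
encoder = nn.Sequential(nn.Linear(IMG, 256), nn.ReLU(), nn.Linear(256, 64))
decode_actor = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, IMG))
decode_target = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, IMG))

params = (list(encoder.parameters()) +
          list(decode_actor.parameters()) +
          list(decode_target.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
mse = nn.MSELoss()

actor_faces = torch.rand(32, IMG)   # stand-ins for frames of the actor
target_faces = torch.rand(32, IMG)  # stand-ins for frames of the target

for step in range(200):
    # Each decoder learns to reconstruct its own person from the shared
    # pose/expression code produced by the common encoder.
    loss = (mse(decode_actor(encoder(actor_faces)), actor_faces) +
            mse(decode_target(encoder(target_faces)), target_faces))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap: encode the ACTOR's expression, decode with the TARGET's
# decoder, so the target appears to perform the actor's movements.
with torch.no_grad():
    fake_target_frame = decode_target(encoder(actor_faces[:1]))
```

Because both people pass through the same encoder, the code it produces captures pose and expression rather than identity, which is what lets the target's decoder "perform" the actor's movements.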

The researchers made a number of such fake videos and then asked people to determine whether they were watching a real or a fake video. Interestingly, about 80% of viewers thought the real videos were fake, and only about 50% correctly identified the fakes. Keep in mind that these viewers knew in advance that they would likely be 'tricked' into thinking a fake video was real, so they were probably hypercritical; unprepared viewers would almost certainly be fooled far more often. The researchers admit that sudden, unusual head movements or quick changes in facial expression in the source video can produce results that look unrealistic or fake. Would you be able to tell which is the source and which is the target, or fake, video in the sample below?

[Video: side-by-side sample, one real clip and one fake]

Yeah, I couldn’t either. The fake is on the right.

All of this about creating a perfect fake video is well and good, but it only covers the visual portion. What about reproducing voices? This tends to be the weak point in these videos: people can tell when a voice sounds 'robotic' even when the video itself fools them. That said, there have been some major advances in this area. Last year, a startup called Lyrebird produced an audio clip of Donald Trump, Barack Obama, and Hillary Clinton talking about the firm. The overall vocal characteristics aren't bad, but the stress, emphasis, and pacing make the track easily identifiable as a fake. Take a listen.

Of course, with more training, these voices have improved. The latest samples are better, but they are still a little muddy and not altogether convincing.
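For a sense of what these voice systems work with, here is a minimal, self-contained sketch of the usual first step in neural voice cloning: converting speech into a log-mel spectrogram, the representation most modern speech-synthesis models are trained on. It assumes the librosa and numpy libraries are installed and uses a generated tone instead of real recordings; this is an illustration of the general approach, not Lyrebird's actual pipeline.

```python
# First stage of a typical voice-cloning pipeline: audio -> log-mel
# spectrogram. A synthetic tone stands in for recordings of the voice.
import numpy as np
import librosa

sr = 22050  # sample rate in Hz
t = np.linspace(0, 1.0, sr, endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 220 * t)  # 1 s, 220 Hz stand-in "voice"

mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=80)
log_mel = librosa.power_to_db(mel)  # log scale, as most models expect

print(log_mel.shape)  # (80 mel bands, ~44 time frames)
```

A synthesis model trained on many such spectrograms of one speaker learns the timbre of that voice; the stress, emphasis, and pacing the article mentions are exactly what these models still struggle to reproduce.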

That said, we are rapidly closing in on the day when a video will appear which will fool us all. We simply won’t be able to tell whether it is real or not. The first videos to do this will contain little or no speaking. Only by looking at fine details will anyone be able to determine whether what we see happening is really happening. Later, the videos will be so good that even a forensic examination could leave us with some degree of doubt.

Of course, there will be those who try to benefit from these realistic but fake videos. If the surrounding circumstances seem believable, viewers could be persuaded that a video is, in fact, real. These could be videos of politicians saying outrageous things or simply behaving badly. Even if such videos are discounted as probably fake, they could nonetheless instill doubt and tarnish the targeted person's character. If, as will probably be the case, the videos are made to discredit a politician, those who want to believe their content probably will.

But fake videos open up a two-way street. Politicians who actually are caught behaving improperly on videos could claim that the video is fake. Again, those who want to believe it is fake, probably will.

The Defense Department has been working on programs to determine whether a video is fake. Early on, these programs succeeded by noticing that the people in fake videos did not blink. Unfortunately, that tell is gone; the newer techniques are too advanced to make such mistakes. In fact, every time a program identifies a video as fake, that feedback helps the neural net that produced it improve. This is simply how neural nets learn. The neural net programs that now routinely beat chess grandmasters got there by playing against them and learning from their losses. In other words, losing is winning for a neural net, and this is why fake videos are destined to become indistinguishable from real videos.
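To illustrate why "losing is winning," here is a toy adversarial training loop in PyTorch. The networks and data are hypothetical stand-ins (random feature vectors, not actual video); the point is that every time the detector catches a fake, the forger receives exactly the gradient signal it needs to do better next round.

```python
# Toy adversarial loop: a "forger" improves precisely because a
# "detector" keeps catching it. Stand-in data, not a real video model.
import torch
import torch.nn as nn

torch.manual_seed(0)
FEATURES = 16  # stand-in for features extracted from a video frame

forger = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, FEATURES))
detector = nn.Sequential(nn.Linear(FEATURES, 32), nn.ReLU(), nn.Linear(32, 1))

opt_f = torch.optim.Adam(forger.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

real = torch.randn(64, FEATURES) + 2.0  # pretend "real video" features

for step in range(500):
    # 1) Detector learns to tell real features from forged ones.
    fake = forger(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(detector(real), torch.ones(64, 1)) +
              loss_fn(detector(fake), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Forger updates using the detector's verdict: being caught
    #    produces a large loss here, and hence a useful gradient.
    fake = forger(torch.randn(64, 8))
    f_loss = loss_fn(detector(fake), torch.ones(64, 1))
    opt_f.zero_grad()
    f_loss.backward()
    opt_f.step()

print("final forger loss:", f_loss.item())
```

As the loop runs, the forger's loss falls: every detection success is converted into an improvement in the fakes, which is the dynamic the paragraph above describes.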

So what does this all mean? At some point, no one will be sure which videos are fake and which are real. All news programs showing compromising videos will come with the disclaimer that they could not authenticate them. Eventually, that won’t even matter. All news, all truth will become muddied. Viewers will believe what they want to believe and will choose the media outlet that matches those views. Media outlets will pander to their bases by choosing videos, fake or not, which support certain political viewpoints. In the end, all news will be fake news.
