Going Deep Into DeepFakes – Part 4 – How Humanity Can Persevere Against DeepFakes

If you’ve been paying any attention at all to what’s going on, you must have heard about DeepFakes. In case you haven’t, DeepFake is a technology that uses Deep Learning (a promising… and delivering!… breakthrough technique in Artificial Intelligence) to take an existing video (e.g., a scene from a movie, an interview, an Oscar thank-you speech, a political debate or your personal video) and convincingly overlay the image of a different person onto it, so that it looks like it’s the new person who was filmed. And I want to reiterate: convincingly. So convincingly, in fact, that politicians are scared of the impact of DeepFakes. They are scared that someone will deliver a subversive message using their appearance and that viewers will be none the wiser. In fact, examples are easy to find for yourself online.

In Going Deep Into DeepFakes – Part 1 – Don’t Believe Everything You See, we covered the latest happenings in traditional, or visual, DeepFakes.

In Going Deep Into DeepFakes – Part 2 – Don’t Believe Everything You Hear, we covered the latest happenings in audio DeepFakes, also known as “voice impersonation” and “voice transfer”.

In Going Deep Into DeepFakes – Part 3 – Don’t Believe Everything You Read, we covered the latest happenings in text DeepFakes.

In this post, we’ll be covering my predictions for the future and how DeepFakes can be successfully combated.

Scam Field Day

In the age of DeepFakes, anyone would be wise to critically examine an unverified or unsigned message. After all, as the previous posts in this series showed, a scammer can quickly whip up an indistinguishable-from-reality live video of a family member or your boss asking you to transfer funds before their flight takes off.

Fortunately, this is not the end of communication.

The Solution

In the future, every piece of media will carry a cryptographic signature. It will act like a fingerprint, proving that the message (video/audio/text) really did come from where it appears to have come from, as well as from the time it appears to have been created.
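
To make this concrete, here is a minimal sketch of what such signing could look like, using Ed25519 signatures from the third-party Python cryptography package. The envelope format and helper names are my own illustration, not an established standard (real schemes would use standardized manifest formats):

```python
import hashlib
import json
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_media(media_bytes: bytes, private_key: Ed25519PrivateKey) -> dict:
    """Bind the media's content hash AND its signing time to the creator's key."""
    payload = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "timestamp": int(time.time()),  # when the creator vouched for the media
    }
    message = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": private_key.sign(message).hex()}

# The creator signs once; anyone holding the matching public key can verify later.
creator_key = Ed25519PrivateKey.generate()
envelope = sign_media(b"...raw video bytes...", creator_key)
```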

For example, suppose a video of a political candidate emerges in which the candidate says something incredibly offensive. Right now, who knows – it might be a real video or a DeepFake. In the future, the video’s cryptographic signature will be checked to confirm whether it really belongs to the political candidate or his/her party.

Similarly, that urgent emergency message from your boss will be checked for legitimacy.
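
Continuing the toy envelope format from the sketch above, the recipient’s side might look like this. Here, claimed_public_key stands for whatever key the message claims as its source – say, the candidate’s (or your boss’s) published key; these names are illustrative:

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_media(media_bytes: bytes, envelope: dict,
                 claimed_public_key: Ed25519PublicKey) -> bool:
    # The hash must match the file we actually received...
    if hashlib.sha256(media_bytes).hexdigest() != envelope["payload"]["sha256"]:
        return False
    # ...and the signature must check out under the claimed signer's key.
    message = json.dumps(envelope["payload"], sort_keys=True).encode()
    try:
        claimed_public_key.verify(bytes.fromhex(envelope["signature"]), message)
        return True
    except InvalidSignature:
        return False
```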

Verified, False, Unverified and Unsigned Messages

Messages (and by that I mean all media) will come in four varieties: those that have been cryptographically verified, those whose signatures have been checked and found false, those that are signed but have not yet been checked, and those that aren’t signed at all.

Each type of message will have its place. Even a fake message contains information that can be learned from, such as an agenda. Messages that are unsigned will be like anonymous messages on the dark web – messages whose origins no one can be sure of.
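
Sketching the four varieties as a recipient’s software might label them (reusing the toy verify_media checker from the previous sketch; the labels and names are illustrative):

```python
from enum import Enum

class MessageStatus(Enum):
    VERIFIED = "signature checked and confirmed"
    FALSE = "signature checked and found not to match: a detected forgery"
    SIGNED_UNCHECKED = "signed, but not yet checked"
    UNSIGNED = "no signature at all: origin unknowable"

def classify(media_bytes, envelope, claimed_public_key, check_now=True):
    if envelope is None:
        return MessageStatus.UNSIGNED
    if not check_now:
        return MessageStatus.SIGNED_UNCHECKED
    if verify_media(media_bytes, envelope, claimed_public_key):
        return MessageStatus.VERIFIED
    return MessageStatus.FALSE
```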

Economics and Certification

Just as certifications and notarization have financial value, so will certifying a message have value and a cost, though the cost will not necessarily be borne by the person signing the message.

For example, TV content will be verified by the TV station. The TV station itself will be verified by the individual (or his/her software).

There will also be services (subscription or otherwise) that verify messages for their customers.
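
Here is a rough sketch of that two-hop chain of trust: the viewer (or their verification service) trusts a root key out-of-band, the root vouches for the TV station’s key, and the station vouches for each broadcast. All keys and names here are illustrative assumptions:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def vouches_for(signer: Ed25519PublicKey, signature: bytes, claim: bytes) -> bool:
    """True if `signer` really signed `claim`."""
    try:
        signer.verify(signature, claim)
        return True
    except InvalidSignature:
        return False

# A root key the viewer trusts out-of-band certifies the station's key.
root = Ed25519PrivateKey.generate()
station = Ed25519PrivateKey.generate()
station_pub = station.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
station_cert = root.sign(station_pub)

# The station, in turn, certifies each broadcast.
broadcast = b"...tonight's news segment..."
broadcast_sig = station.sign(broadcast)

# The viewer's software checks both hops of the chain.
chain_ok = (vouches_for(root.public_key(), station_cert, station_pub)
            and vouches_for(station.public_key(), broadcast_sig, broadcast))
print("broadcast trusted:", chain_ok)
```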

The Happy Upside

One of the greatest upsides, and an interesting irony, is that living in a world where everything requires verification would mean that everything can be trusted once it has been checked. The irony is that our present world does not offer such trust. For example, if a man or woman in a police uniform accosts you, you have no good way to verify that they are indeed police. Seeing a badge is no proof if you can’t tell a fake badge from a real one, or know whether the badge was stolen. But in the age of DeepFakes, verification of a person’s credentials and position will be part and parcel of how the world works.

What Now

So it’s not all gloom and doom about how DeepFakes are going to destroy everything. In fact, I’m optimistic that DeepFakes will lead us to solutions that will make the world a better place.

Now, if you want to learn about DeepFakes and join the frontier of the future economy and cybersecurity, pick up a copy of the Machine Learning for Cybersecurity Cookbook and enroll in Machine Learning for Red Team Hackers.

Dr. Emmanuel Tsukerman

Award-winning cybersecurity data scientist Dr. Tsukerman graduated from Stanford University and UC Berkeley. In 2017, his machine-learning-based anti-ransomware product was named a Top 10 Ransomware Product by PC Magazine. In 2018, he designed a machine-learning-based malware detection system for Palo Alto Networks’ WildFire service (over 30k customers). In 2019, Dr. Tsukerman authored the Machine Learning for Cybersecurity Cookbook and launched the Cybersecurity Data Science Course and Machine Learning for Red Team Hackers Course.