What Is A Deepfake Attack And What Is It Used For?

Deepfakes are one of those technological marvels that might just shake up the future of our communication. What is real, and what can be trusted? With deepfake attacks looming around the corner, we can expect a rise in falsified videos and images so disturbingly convincing that you can no longer tell what is real and what is fake.

Exploring the ways in which these media-based attacks could harm individuals and societies as a whole can help us prepare for what is to come. This article explores the growing phenomenon of malicious deepfake attacks, what they are used for, and how we can defend ourselves against them when things get serious.

What Is A Deepfake Attack?

A deepfake attack is an Artificial Intelligence (AI) cyber-attack carried out by a third party that is out to harm an individual or group. Deepfake attacks come in different shapes and sizes, but usually take the form of fake video or audio recordings meant to deceive the target or alter reality in the attacker's favor.

Audio deepfakes in particular are one of the newest developments. This type of cyber-attack relies on a machine learning algorithm to mimic the target’s voice with great precision.

Video deepfakes are better understood and more widely used in today's society. So far, these video-based attacks have mainly been used by amateur hobbyists to swap the faces of celebrities onto adult entertainment footage, or to make politicians appear to say things they never said.

What Do People Use Deepfake Attacks For?

Deepfake attacks and machine learning technology could be applied in a wide range of situations, targeting anyone from private individuals to large organizations. In particular, the global political system and multinational corporations stand to be hit hardest by these types of malicious cyber-attacks.

On a smaller societal scale, deepfake attacks could be used to target individuals. People could use AI deepfakes to destroy someone’s reputation and spread falsehoods in their direct social environment. Alternatively, criminals could use deepfake imagery or audio to blackmail others into paying ransom money to save their reputation.

Deepfakes As A Manipulation Tool

In any case, deepfake attacks are a means to an end for the (often anonymous) attackers. They are merely a tool in a wider psychological game to alter reality and 'bend' the facts into 'new facts', with the obvious intent of furthering the attackers' agenda.

Where in the old days a malicious actor would use propaganda to introduce a new narrative into society, or spread simple lies in the media, the new deepfake ‘propaganda’ would actually be indistinguishable from reality itself. That is the real threat deepfake attacks could pose to the future of our societies.

Physical proximity is no longer required to carry out an attack: the internet enables any device with a simple Wi-Fi connection to participate in a large-scale bot attack. We already see this in today's political world, with Russia actively using social media bots to change narratives in American politics. Deepfake video and audio would add fuel to the fire of such mass-scale manipulation.

Examples Of Deepfake Attack Use Cases

One could imagine several common scenarios unfolding once AI-manipulated video and audio become more mainstream. While (thankfully) still largely theoretical, it is not hard to imagine one of the following malicious use cases for deepfakes.

Political Manipulation

One of the obvious dangers of narrative manipulation lies in the political arena. The application of deepfakes for political purposes isn't even contested anymore: it is actively being used around the globe today.

The first signs of deepfake use in election politics have already been reported in India, where a prominent party leader was shown criticizing the current government. The faked video quickly went viral on WhatsApp and other social media, and the damage was done: public opinion had shifted.

It doesn't take much imagination to picture a 'fake news' attack deployed around the very person who coined the term: the second term of President Trump could very well be manipulated using deepfake technology. If it happens in India, it can happen in the United States, a country where political attack ads are the norm. It may only be a matter of time before a presidential candidate 'accidentally' uses a deepfake video in their attack ads.

Company Fraud

One of the main applications of audio manipulation is criminal fraud. Fake AI-generated phone calls are already being used to trick companies into transferring sums of money.

The scam is simple: use an AI to generate the voice of a major business partner and convince a company contact to deposit a large sum of money. The victim believes the wire transfer is a confidential transaction between two established business partners, but in reality the money is wired straight to the deepfake scam artists.

It's just one of many examples of criminals duping innocent business owners out of their funds. Fake phone calls could also target customers instead of companies: pretend to be the company owner and ask the customer for an additional deposit, for whatever reason the scam caller can come up with. It doesn't take much imagination to see that the possibilities in this field alone are almost endless.

Reputation Damage

Another deepfake attack method, one that could take many different forms, is the attempt to damage someone's reputation. One obvious example is the use of 'deepnudes': pasting someone's head onto an adult performer's body and spreading the video among friends and relatives.

There are plenty of celebrity deepfake images out there already, so there's no reason someone couldn't make AI deepfakes more accessible to a wider audience.

Wait, hold on, there’s an app for that.

You can now ruin the life of any woman with an app by 'deepfaking' her head onto a nude body within mere seconds. What a 'fun' time to be alive, isn't it? In the hands of the wrong person, an app like this could easily be used for extortion, without the criminal ever having to come into physical contact with their victims.

So if you have pictures of yourself on the internet, you're basically already a potential target for reputation-damage criminals. While this is still largely uncharted territory, the tools are widely accessible to anyone whose motives aren't exactly within the boundaries of the law.

Combating Deepfake Attacks

While the large-scale impact of deepfake technology remains a thing of the (near) future, the first ripples of political, business, and individual damages are already starting to show. In a world where criminals (or actors with an alternate agenda) have access to yet another powerful manipulation tool, it will become harder and harder to distinguish reality from falsehoods.

Luckily, the technological world is also developing counter-measures, such as deepfake detection software. Despite promising efforts in this field, it is likely that software engineers will always be catching up to the latest and most advanced deepfake manipulation.
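
To make the idea of detection software a bit more concrete, here is a minimal sketch in Python of what a frame-level detection pipeline could look like. It assumes you already have a trained real-vs-fake face classifier; the `classify_face` stub, the `score_video` helper, and the `sample_every` parameter are illustrative placeholders, not part of any particular detection product.

```python
# A minimal sketch of frame-level deepfake detection, assuming a trained
# binary classifier (real vs. fake) is available as `classify_face`.
# Only the surrounding pipeline is shown; the detector model itself is a stub.
import cv2          # pip install opencv-python
import numpy as np

# Standard OpenCV Haar cascade for face detection (ships with opencv-python).
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_face(face_img: np.ndarray) -> float:
    """Placeholder: return the probability that this face crop is synthetic.
    In practice this would call a trained model; here it is a stub."""
    raise NotImplementedError("Plug in your own trained detector here.")

def score_video(path: str, sample_every: int = 10) -> float:
    """Average the per-face 'fake' probability over sampled frames."""
    cap = cv2.VideoCapture(path)
    scores, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in FACE_CASCADE.detectMultiScale(gray, 1.1, 5):
                scores.append(classify_face(frame[y:y + h, x:x + w]))
        frame_idx += 1
    cap.release()
    return float(np.mean(scores)) if scores else 0.0

# Usage: a score near 1.0 suggests manipulation, near 0.0 suggests authentic footage.
# print(score_video("suspect_clip.mp4"))
```

The design is deliberately simple: sample frames, isolate faces, and average the per-face scores, so that whichever detector model you plug in does the actual heavy lifting. Real-world detection tools are far more sophisticated, but the basic shape of the problem is the same.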

Some of the top AI researchers already claim to be "outgunned" by the overwhelmingly fast advancement of deep learning software. Other sources describe the progress of deepfake countermeasures as "problematic".

So, what does the future of AI tech have in store for us? The true extent of this inevitable deepfake avalanche is yet to be seen. It's safe to say the increasingly rapid development of machine learning will have far greater implications for our societies than we currently think. We are truly living in a 'calm before the AI storm' era. You have been warned.
