How To Know If An Image Is Real Or A Virtual Deepfake?

Deepfakes are meant to be indistinguishable from the real thing. That’s how they are designed; it’s their very purpose. When it comes to visually telling whether an image is real or fake, human eyes often aren’t enough.

This is exactly why many people fear the impact of deepfake technology on society: our eyes simply aren’t enough in this battle for the truth. Some experts go so far as to say that deepfake detection is a losing battle in which Artificial Intelligence (AI) will always be several steps ahead of us. For now, though, most deepfakes can still be definitively sorted into the broad categories of ‘real’ and ‘fake’.

Despite the bleak prospects of rapidly improving deepfake tech, many multinational companies in Silicon Valley and beyond are still scrambling to tackle the problem of ‘deepfakes’ to the best of their abilities. With specialized software and algorithms, we might just have a fighting chance. So let’s find out how big tech is currently trying to fight this battle. Let’s see how to detect a deepfake image.

Deepfake Detection Software

When the human eye is no longer sufficient to detect whether an image, audio file, or video has been falsified, we must focus our efforts on the very thing that is trying to trick us. A snake venom antidote is a fitting analogy here, since we actually need to start using deep learning AI to fight deep learning AI.

Winning the war against deepfake manipulation can only be done with targeted, highly developed deepfake detection software. Today, efforts are focused on feeding large amounts of data to purpose-built detection tools. The more data they are fed, the better the algorithms become at differentiating between real and fake media content.

It’s a bit like those ‘verify you are human’ image grids Google presents you with every now and then, but on a much larger scale and at far greater speed. The software isn’t picking out crosswalks, cars or traffic lights, but far more subtle details that would otherwise go unnoticed by the naked eye.
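The ‘more data, better detector’ principle can be illustrated with a toy sketch. Everything below is invented for illustration: the ‘images’ are random feature vectors, the fakes carry a small statistical offset standing in for generator artifacts, and the ‘detector’ is a simple logistic regression rather than the deep neural networks real systems use.

```python
import math
import random

random.seed(0)

def make_dataset(n):
    """Synthetic stand-in data: each 'image' is an 8-number feature vector.
    Fakes get a subtle offset, mimicking artifacts left by a generator."""
    data = []
    for _ in range(n):
        real = random.random() < 0.5
        offset = 0.0 if real else 0.3
        features = [random.gauss(offset, 1.0) for _ in range(8)]
        data.append((features, 1 if real else 0))  # label 1 = real
    return data

def train(data, epochs=20, lr=0.05):
    """Tiny logistic-regression 'detector' trained by gradient descent."""
    w, b = [0.0] * 8, 0.0
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the logistic loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def accuracy(model, data):
    w, b = model
    hits = sum(
        int((sum(wi * xi for wi, xi in zip(w, x)) + b > 0) == (y == 1))
        for x, y in data
    )
    return hits / len(data)

test_set = make_dataset(2000)
small = train(make_dataset(100))    # detector trained on little data
large = train(make_dataset(5000))   # same detector, fed far more data
print(f"trained on 100:  {accuracy(small, test_set):.2f}")
print(f"trained on 5000: {accuracy(large, test_set):.2f}")
```

In this toy setup, the detector fed more labeled examples generally scores better on held-out data, which is the same dynamic that makes big tech pour ever-larger datasets into real detection tools.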

Real Or Fake: Who Is To Say?

There are many examples of deepfake images that were perceived as real by most people. Machine learning can now even create entirely new people from an aggregation of existing pictures, demonstrated excellently by the now-famous website ‘This Person Does Not Exist’. No matter how often you refresh the page, there will always be a new image of a non-existent person that looks strikingly real.

Sure, some deepfakes are still obviously fake; their tiny mishaps are easily spotted by humans, who have evolved over hundreds of thousands of years to detect differences in human facial features. This facial recognition superpower is perhaps what has bought us humans a few extra years in the race toward the perfect deepfake image.

However, the most impressive results have been achieved in the world of moving images. Deepfake videos have already taken the internet by storm, and this is merely a preview of what is to come, as the video-altering technology is only taking its first baby steps. It won’t take decades before we can make every celebrity, politician and even the average Joe say anything, at any time.

Some would say the battle of knowing whether an image or video is real or fake is already lost. AI is simply outpacing human abilities, making our counter-efforts in the form of deepfake detection software practically useless.

Fighting A Losing Battle Against AI

The question is not so much whether we will at some point no longer be able to detect deepfakes, but when that turning point will occur. Experts in the field of machine learning are already warning the world of the ‘impending doom’ that is coming our way: the race to counter deepfake-generating AI with software is already being lost. When it comes to deepfake detection software, three threats are under scrutiny and need to be addressed as serious near-future concerns:

  1. Software can’t tell us what has to be taken down: Value judgements about what is harmful and what is harmless in the world of deepfake images are made by humans. Machines cannot decide for themselves what has to be removed from the media and/or the internet.
  2. Software won’t help the people who need it most: Marginalized groups and minorities, likely the first parts of society to be consistently hit by this new tech, won’t have first access to countermeasures. That access will be limited to large corporations in Silicon Valley, governmental bodies and other organizations with influence.
  3. Software cannot help people who have already fallen victim: Once something is out on the worldwide web, it can never be fully contained or taken down; it will simply keep popping up. We have seen this disturbing phenomenon time and time again – people download a compromising image or video and republish it at a later point in time. If you or your organization have been targeted by a deepfaked image, the damage has already been done and cannot possibly be undone.

Thriving In A World Of Deepfakes

So, the deepfakes are coming, and they are coming for all of us. How should the average inhabitant of planet Earth deal with this realization? Is there anything we can do to protect ourselves from what is to come? Here are some simple, practical tips for dealing with deepfake images, both now and in the future:

Always question remarkable claims: Odds are that manipulation is happening when extreme claims are made. This includes the framing of media stories with regard to a targeted group of people. Keep questioning what media organizations and people on the internet are telling you, and draw your own conclusions based on facts.

Verify a claim with multiple credible sources: By checking news stories against several different sources, you’ll quickly know whether manipulation is happening. The most effective approach is to take two (or more) media sources that are political opposites. For example, if the more right-wing Fox News and the more left-wing CNN both report the same aspects of the same story, the overlapping statements are more likely to be true.
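The cross-checking idea above can be sketched as a simple set intersection. The outlet names come from the example in the text, but the individual claims below are invented for illustration, and exact string matching is only a stand-in for the human judgement of ‘reporting the same aspects’:

```python
# Hypothetical claims extracted from two politically opposed outlets.
fox_claims = {
    "senator resigned on Tuesday",
    "video was posted to social media",
    "footage shows the senator at the rally",
}
cnn_claims = {
    "senator resigned on Tuesday",
    "video was posted to social media",
    "experts dispute the footage's authenticity",
}

# Statements both sources agree on are the ones more likely to be true.
corroborated = fox_claims & cnn_claims
for claim in sorted(corroborated):
    print(claim)
```

The claims each outlet reports alone aren’t necessarily false, but the overlap is the safest starting point for forming your own conclusion.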

Protect your personal online privacy: Delete your social media accounts; don’t post videos, images or audio of yourself on the web; and don’t leave traces such as IP addresses, recurring usernames, e-mail addresses or other identifiable information on the internet. The more invisible you are online (and offline), the less likely you are to become a target of malicious deepfake creators in a future where deepfake tech is the norm.

Using common sense and not falling for mass hysteria will be difficult for the masses, but on the individual level plenty of self-protective measures can be taken. Avoid using social media and posting personal information on the internet, as these often serve as ammunition for AI deepfakes.

Protecting your privacy also extends to data you keep offline. Images or videos on your phone, tablet, laptop or other devices can be hacked and exploited by others. Mask your IP address, use privacy-friendly browsing software and keep out of trouble.
