The 5 Best Deepfake Detection Methods To Date
The race for deepfake detection is in full swing, as the technology to create deepfake videos, images and audio is rapidly improving. With detection efforts from large tech companies like Google, Microsoft and Facebook under development, it’s still an even battle – for now.
There will likely come a time when deepfake detection loses out to the most sophisticated Artificial Intelligence (AI) algorithms and deepfake video creation software.
But let’s focus on the here and now – what can we do to find out if a piece of media content has been tampered with? What are the best methods to detect deepfakes using detection software? This article will explore five key elements software tools look at to identify whether images and videos are genuine, or whether they have been manipulated by deepfake generating software at some point. This already implies that the core question can be answered positively.
Can Deepfakes Be Detected?
Yes, deepfakes can be detected using deepfake detection software, which focuses on several facial features, movements and environmental changes in deepfake video content. Slight clues will give away whether a piece of media content is original, or whether it has been changed using deepfake generating software at some point.
Deepfake detection as a strategy is ever-evolving, as the technology to create deepfake videos and images also changes over time. Software used in analysis efforts will, however, always be one or more steps behind, as it is often based on the current technological status quo of what deepfake generating software can do.
The more realistic deepfake generators get, the harder it becomes for these counter-measures to be effective, and the more time it takes to develop a tool that can catch up with the technological advancements at any given time.
It should also be noted that even today’s deepfake detection tools aren’t completely foolproof. For example, a relatively new tool created by researchers from the USC Information Sciences Institute (USC ISI), which focuses on subtle face and head movements and artifacts in files, is about 96% accurate at detecting deepfake versions of videos. That figure is indicative of what we can expect from future tools as well: some well-made deepfake videos will inevitably go undetected, even with the most sophisticated software tools available at any given moment.
While the new tool from USC ISI focuses on subtle head movements and tiny clues inside the files themselves, there are more methods that will give away that a video has been faked by AI and machine learning algorithms. Let’s explore the five main methods that are currently used by tools such as the aforementioned one.
Deepfake Detection Methods
There are several subtle (or more obvious) giveaways that anti-deepfake software tools can exploit to detect the deepfake videos circulating on social media and traditional media channels. These mostly focus on the face, as faces are the hardest to model realistically. Other aspects of the video are relevant for detection tools as well, however. The main giveaways include:
- Changes or alterations in the colors of the face
- A reduction in the number of times a person blinks
- Changes in lighting that can be slightly off at some point in the video
- The sync between the lips and the original audio of the video itself
- A distinct blurriness in the area where the face meets the neck and/or hair
There are more identification methods, but these are the ones utilized the most at the time of writing. Remember that the landscape of anti-deepfake tools is evolving as rapidly as the deepfake video tools themselves, so this information may become outdated once the technology moves beyond these aspects.
Generally, however, the facial features will always be a major part of the deepfake video detection efforts, as this is the main part that is manipulated in videos to layer another person’s face over the one in the original video. Do keep this in mind when going over the main detection methods being used today, which often also focus on the areas that are manipulated the most (i.e. around the face).
Discolorations of the face
One of the first major ‘dead giveaways’ that deepfake detection software tools exploit is the slight discoloration of the face relative to the original lighting. The subtle blends created by the AI will be near-perfect, but not perfect enough to go completely undetected.
Face discolorations might deceive the human eye, especially when the video quality has degraded over multiple re-encodes – videos on the web are often re-uploaded more than once, and the quality dwindles a little with every re-upload. That loss of quality makes it slightly harder each time to pick out these facial changes, which are minor to begin with.
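As a rough illustration of the idea: assuming video frames are available as RGB arrays and a face bounding box has already been located by a separate detector (both assumptions for the sake of the sketch, not features of any specific tool), flagging frames whose face-region color drifts far from the clip average could look like this:

```python
import numpy as np

def face_color_drift(frames, face_box):
    """Per-frame deviation of the face region's mean color from the
    clip-wide average. `face_box` is a hypothetical (x, y, w, h) box,
    assumed to come from a separate face detector."""
    x, y, w, h = face_box
    means = np.array([np.asarray(f, dtype=float)[y:y+h, x:x+w]
                      .reshape(-1, 3).mean(axis=0) for f in frames])
    clip_mean = means.mean(axis=0)
    # Euclidean distance of each frame's face color from the clip average
    return np.linalg.norm(means - clip_mean, axis=1)

def flag_discolored_frames(frames, face_box, z=3.0):
    """Flag frames whose color drift is a statistical outlier
    (more than z standard deviations above the mean drift)."""
    drift = face_color_drift(frames, face_box)
    return np.where(drift > drift.mean() + z * drift.std())[0]
```

A real pipeline would threshold far more carefully (a genuine lighting change also moves the mean color), but it shows the kind of statistical outlier test such tools can run over a clip.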
Reduced blinking with the eyes
Another major giveaway occurs when the person blinks, according to a study published on arXiv (the preprint server operated by Cornell University) on exposing AI-generated fake face videos through eye blinking. The findings were quite clear: the eyes are indeed a giveaway that can be a factor of interest in determining whether a video was faked.
As the quality of deepfake videos improves, this giveaway will likely fade and eventually become irrelevant, as it is quite an easy detection method in general. The neural networks used to generate the fake faces on top of the original ones will eventually start incorporating this physiological signal into the video creation process more realistically.
The study used several eye-blinking detection datasets as the core of its research. The results were promising: the current generation of deepfake neural network software is still insufficient at incorporating realistic eye-blinking into the final result of most deepfake video content.
Lighting that isn’t quite right
No, it’s not ghosts or spirits. It’s the neural networks messing up tiny details of the video after processing the source material! These small changes over time quickly make it clear when a video is a deepfake and when it’s an original.
Finding flaws in lighting across sequences, not just in certain frames, is – for now – still a good way of analyzing content for deepfakes. Single-frame detection can often be circumvented by AI-generated content, while sequences across multiple frames over time reveal more about the ‘realness’ of a piece of video.
And again, the differences in light will be most visible in and around the facial areas, where the most changes are implemented during the creation of deepfake content. In the areas of the face where the AI makes changes, tiny errors will be detectable. However, the longer the rendering and calculation process of the deepfake content, the harder it becomes to find where these flawed sequences occur within the video. Luckily for us, there are more clues than just these to determine whether content is real or fake.
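One simple way to operationalize the sequence-level idea is to check whether the face region’s brightness tracks the background’s brightness over time: in untouched footage both respond to the same scene lighting, while a separately rendered face can drift out of step. A minimal sketch, again assuming frames as RGB arrays and a known (hypothetical) face bounding box:

```python
import numpy as np

def lighting_consistency(frames, face_box):
    """Correlation between face-region and background brightness over a
    frame sequence. A value near 1 means both regions brighten and darken
    together; a low or negative value hints the face was lit separately.
    `face_box` = (x, y, w, h) is assumed to come from a face detector."""
    x, y, w, h = face_box
    face_levels, bg_levels = [], []
    for f in frames:
        gray = np.asarray(f, dtype=float).mean(axis=2)  # naive grayscale
        mask = np.zeros(gray.shape, dtype=bool)
        mask[y:y+h, x:x+w] = True
        face_levels.append(gray[mask].mean())
        bg_levels.append(gray[~mask].mean())
    return float(np.corrcoef(face_levels, bg_levels)[0, 1])
```

This is the single-frame-versus-sequence distinction in miniature: no individual frame is wrong, but the trajectory across frames gives the mismatch away.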
Badly synced audio and video
While some tech firms are already starting to crack the code on syncing audio with lips when creating a new deepfake video, most AI tools available today will still show pretty bad sync at some point in the video. These glitches will inevitably be picked up by people watching the video, or – when the time difference is minute – by a software tool just as easily.
Creating a convincing lip-sync is actually harder than it sounds, which is good for the anti-deepfake effort being rolled out globally by many dozens of large tech firms. Videos dubbed into a different language run into particular problems here, as the facial movements will often differ quite a bit from the language being spoken in the original video content.
Here we can also see that the mouth area, and in particular the movement of the mouth over time, is another great indicator of the ‘realness’ of videos. If you wish to determine whether videos are realistic or faked, it’s becoming increasingly clear that the moving parts of the face deserve the most attention. This is where the screw-ups by the AI neural networks can be the most notable. And not just in the eyes and mouth, but also in the boundary area between the face and the body. That’s the last deepfake detection method we’d like to discuss in this article, as it’s another major one that can give away the ‘little secret’ of an AI-generated fake video.
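The sync check itself can be sketched as a cross-correlation problem: given a per-frame mouth-openness signal (from landmarks) and a per-frame audio loudness envelope (e.g. RMS energy), estimate the lag between them. Both inputs here are assumed precomputed; this is an illustration of the principle, not any particular tool’s method:

```python
import numpy as np

def sync_offset(mouth_openness, audio_envelope):
    """Estimate the lag, in frames, between a mouth-openness signal and
    an audio loudness envelope via cross-correlation. A sizeable absolute
    offset suggests badly synced (possibly generated) lips."""
    m = np.asarray(mouth_openness, dtype=float)
    a = np.asarray(audio_envelope, dtype=float)
    corr = np.correlate(m - m.mean(), a - a.mean(), mode="full")
    # Re-centre the peak index so 0 means perfectly in sync
    return int(np.argmax(corr)) - (len(a) - 1)
```

In practice a detector would run this over short windows: real footage stays near zero offset throughout, whereas generated lips tend to drift in and out of sync.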
Blurriness where the face meets the neck and hair
We already touched upon this issue briefly, but it’s pretty obvious that deepfake videos are often quite blurry. Reducing the sharpness of a video reduces the odds of detection, and it doesn’t take a genius to figure out why: there is less data to process and a smaller chance that tiny flaws are eventually spotted by the naked eye – it’s as simple as that.
The blurry nature of deepfakes is one of several major flaws that can be used to detect their ‘true nature’. And once more, it is often the face that will quickly start to look the most blurry of all the moving parts in a clip. The differences can even be striking: the original clip can be extremely sharp, while the deepfake layer is noticeably blurry by comparison.
Training time grows steeply with the sharpness of a clip, so it comes as no surprise that blur can indicate a deepfake video. Blurring reduces the time it takes to create a deepfake, since the GPU needs a lot of processing time and memory to handle all the moving parts in the videos it’s being ‘fed’ by the person creating these faked clips.
More professional productions have already reduced the amount of blur in their deepfakes, however, so sharpness should not be a defining factor for the harder-to-detect fakes created in studios with professional equipment. But if you’re up against a kid with a laptop and a half-decent Wi-Fi connection, this is less of an issue: basic computer equipment isn’t as sophisticated and will leave clues in the blurriness (or sharpness, depending on your perspective) of the resulting videos.
The blur will often reveal its secrets in combination with color differences, as well as the aforementioned lighting differences. When looking for blur alongside these aspects, it is the mouth that will give away its secrets the quickest.
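A common way to score sharpness is the variance of a Laplacian filter response: sharp images have strong edge responses and a high variance, blurry ones a low variance. Comparing the score for the face region against the whole frame captures the “sharp clip, blurry face layer” mismatch described above. A minimal sketch, with a hand-rolled 3x3 Laplacian and a hypothetical face bounding box:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of a 3x3 Laplacian response over a grayscale image:
    a classic sharpness score. Low values indicate a blurry image."""
    g = np.asarray(gray, dtype=float)
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def face_blurrier_than_frame(gray, face_box, ratio=0.5):
    """Flag a frame whose face region is markedly blurrier than the
    frame as a whole. `face_box` = (x, y, w, h) is assumed given,
    and the 0.5 ratio is an illustrative threshold, not a standard."""
    x, y, w, h = face_box
    return laplacian_variance(gray[y:y+h, x:x+w]) < ratio * laplacian_variance(gray)
```

Production tools typically use a library implementation (e.g. OpenCV’s Laplacian) and calibrate the threshold per resolution, but the principle is the same.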
Deepfake Detection Software: The Counter Battle
With the exploits in mind, a large number of tech firms and individual enthusiasts have taken it upon themselves to battle against the rise of deepfake video content. Initiatives are emerging around the globe in an effort to curb the negative side-effects of deepfake tech.
With countless deepfake detection software initiatives in the works, the world is trying to keep up with the equally fast-rising world of AI deepfake generation from start-ups, digital open-source projects, as well as (inevitably) governmental efforts to utilize deepfakes as a propaganda or defense tool in an information war.
One of the largest efforts has been set up by a group of Silicon Valley tech firms, including the likes of Microsoft, Amazon and Facebook. They started the so-called ‘Deepfake Detection Challenge’, or DFDC (see the official website), in which anyone can pitch their idea or tool and contribute to the effort against deepfake production around the globe.
The DFDC’s mission statement “invites people around the world to build innovative new technologies that can help detect deepfakes and manipulated media”. The submission phase of the challenge has since closed, and the $1 million USD prize pool will be distributed among the winning teams. At the time of writing, the winners have not yet been announced, so the effort is still ongoing for a few more days. If you’re interested, you can find more information on the challenge’s website, which is also the most likely place for the final results to be shared.
It’s not unlikely that more of these efforts will be launched as adoption of deepfake tech rises around the globe. Especially as deepfakes become harder and harder to detect, governments and tech firms will ramp up their efforts to fight the malicious side-effects of this new AI-based technology. In the meantime, deepfake detection tools can, of course, be developed by anyone with the skills and know-how. There is likely massive demand for such software tools in the current world, and this demand is only going to increase with the rise of deepfake use in the (near) future.
Stopping Deepfakes: A Race That Never Ends
Some media channels are already voicing the worries of many AI experts around the world: despite our best efforts, there is a serious chance we’ve created a monster that’s outside of our control. Deepfake detection algorithms may never be enough, regardless of the amount of resources we collectively pour into them.
And even if that turns out not to be the case – in an optimistic scenario where deepfake software hits a wall that’s hard to break – deepfake detection tools will likely need to be renewed non-stop. The race will never end as long as these tools are out there on the internet for anyone to use.
Perhaps societies should start looking into mitigation efforts, rather than fighting the threat head-on. Or choose a path in which both are implemented: educating the broader public about the dangers of fake media content (and how to view it critically), as well as continuously developing the deepfake detection tools that use the methods discussed in this article.
Whatever our course of action, it is clear that the challenge will be hard and will likely span decades. As long as the globe is as interconnected as it is right now, there is no stopping this new tech in its tracks. Sadly, it’s a new reality we have to face, and one we as a global society will need to learn to live with.