Will Deepfakes Ruin The World?
In a world where deepfakes are mainly used to fulfill people’s dirty little fantasies, a dystopian future where governments abuse deepfake technology for their own benefit seems unlikely.
Let’s be real: photographs and videos have been doctored for as long as they have existed.
But deepfakes are different, a different ballpark altogether. This is not Photoshop anymore. Alterations to video and image content are becoming so realistic that it is hard to distinguish what’s real from what’s fake.
In a lot of people’s minds, deepfake imagery is starting to pose a real threat, not only to individuals but to society at large. The time has arrived when ‘fake news’ can quickly be turned into ‘real news’, simply by using technology that is available today. And nobody would ever know.
Deepfakes are here, and governments are aware of the shift. The new technology raises a set of challenging issues for policymakers and the law, but also for the technology world itself.
In a world where altered media content is increasingly easy to access, problems start to emerge. Anybody with access to the internet could theoretically produce their own deepfake content.
That’s fine when it’s done for entertainment purposes.
But when malicious governmental regimes start using it to push their agendas, it’s not hard to see how the negative consequences could spiral out of control really quickly. While deepfakes haven’t been a significant force in the political sphere yet, Italian politics already seem to have been influenced by them to some degree. And they could be a serious problem for the upcoming US elections.
But the deepfake dangers go way beyond politics. When the AI-image processing technique falls into the hands of criminals, making a new identity will become child’s play. Hard to imagine? Think again.
These people do not exist.
Sure, sometimes the AI still makes a slight error: weirdly shaped ears, only one lens of a pair of glasses visible. The algorithms could still use some work. But improvements in AI are rapidly changing the landscape, and will soon make techniques like face-swapping a piece of cake for anyone with an internet connection and an app like Zao, which already brings deepfakes to the masses.
While the above is just a website with pictures, it’s also a great showcase of what is already possible. That scary level of realism is exactly what makes deepfake images so dangerous: a tool that could be used for good, but just as easily to harm other people’s interests.
Companies are already holding back technology from the public out of fear of misuse. One prominent example is OpenAI, which decided not to release the full version of a text-generating AI system it had created for exactly that reason. When AI becomes that (potentially) harmful to us, should we really continue on our current path, or take a step back and reconsider what we are doing?
Deepfakes make use of so-called deep learning: arrangements of algorithms that can learn from data and make intelligent decisions on their own. The artificial intelligence (AI) responsible will keep improving as time progresses, at a rate faster than any human could match.
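To make “learning” concrete, here is a deliberately tiny sketch of the core idea: a model makes a prediction, measures its error, and nudges its parameters to reduce that error. Real deepfake systems do this with deep neural networks containing millions of parameters; the toy below fits a single weight with gradient descent, purely as an illustration of the principle.

```python
# Toy illustration of "learning" by gradient descent: fit y = w * x.
# Deepfake models use deep neural networks at vastly greater scale,
# but the loop (predict, measure error, adjust) is the same idea.

def train(pairs, steps=200, lr=0.01):
    w = 0.0  # initial guess for the weight
    for _ in range(steps):
        # gradient of the mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in pairs) / len(pairs)
        w -= lr * grad  # step against the gradient to reduce the error
    return w

# Data generated by the "true" rule y = 3x; training recovers w close to 3.
data = [(x, 3 * x) for x in range(1, 6)]
learned_w = train(data)
```

No human tells the loop what the answer is; the data does, which is why such systems improve as fast as data and compute allow.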
It’s not a far-fetched fear to think that deepfakes could ruin the world one day.
The possibility of information warfare is already here. A wide range of AI deep-learning systems can produce a persuasive counterfeit by studying photographs and videos of a target person from multiple angles, then mimicking that person’s behavior and speech patterns.
The same range of technologies can also be applied to sound, in particular voice alteration. The tech has already enabled AI to mimic the voice of podcast celebrity Joe Rogan. You didn’t need to take some DMT (apologies for the Rogan joke) to tell that those clips were fakes. But we’re not far from the point where a fake is so indistinguishable from the real thing that it will be hard to tell which things Rogan actually said and which were created by a clever AI algorithm. Today these alterations are used for entertainment; tomorrow they could be used to further an agenda by people whose intentions differ from the public interest.
The crucial question then becomes: is there a way to detect whether a video or image has been altered? Deepfake detection would be crucial in a future society where people are avalanched by manipulated media content.
Luckily, current research on two types of detection techniques is promising:
- Preventing deepfakes: automatically adding digital noise to image uploads, to stop AI algorithms from picking an image up for deepfake use. This way, users can choose to protect their own face: an extra upload filter applies the noise, making it near-impossible for AI to use the image for automatic alterations.
- Deepfake detection: obviously, not every image will have a preventive layer of digital noise applied to it. For the images already out there, companies like Google are developing ways to detect deepfake images and video. The research is still in its very early stages, but if anyone can make a meaningful data-based contribution, it’s Google.
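The “digital noise” idea in the first bullet can be sketched in a few lines: perturb each pixel of an uploaded image by a small, imperceptible amount. Note this is a simplified stand-in; the function name and parameters are hypothetical, and real protective systems craft the perturbation adversarially against specific face-processing models rather than using random noise.

```python
import random

def add_protective_noise(pixels, strength=2, seed=42):
    """Perturb 8-bit pixel values by a small random amount.

    A simplified stand-in for the upload filter described above.
    Real systems would optimize the perturbation against the
    target AI models instead of drawing it at random.
    """
    rng = random.Random(seed)
    noisy = []
    for p in pixels:
        delta = rng.randint(-strength, strength)
        noisy.append(min(255, max(0, p + delta)))  # clamp to the valid 0-255 range
    return noisy

image = [0, 127, 255, 64]               # a tiny "image" as a flat list of pixels
protected = add_protective_noise(image)  # visually identical, slightly perturbed
```

The design goal is asymmetry: a change of one or two intensity levels is invisible to a human viewer but, if chosen well, can derail the feature extraction an AI pipeline relies on.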
The challenge, however, remains extremely difficult. Deepfake detection technology is still miles behind deepfake generation, and many open-source generation methods have emerged since 2017. With every new method released to the public, the detection side must catch up. It will be an interesting cat-and-mouse game that unfolds in front of us in the years and decades to come. In the end, it is up to us, the people and society at large, how we wish to use the media manipulation tools being created.