
Are Deepfakes Ethical In A World Of Media Manipulation?
If you were to trust most media outlets, deepfakes are the root of all evil. A political threat, revenge deepnudes, ransomfakes, and quite literally the place where the truth goes to die.
The reality of things is a lot more nuanced, as is often the case with issues that impact societies in many different possible ways. The ethics of deepfakes aren’t always black and white. Sometimes they are used for good, sometimes they are used for (very) bad.
Putting the face of one celebrity over that of another is arguably a net positive, as it provides entertainment to many millions on the internet. YouTube videos with deepfakes are going viral left and right, and nobody ever got hurt by it. So it cannot be that deepfakes are inherently bad, right? We set out to explore the world of deepfakes and ethics, and here are the most remarkable things we found along the way.
Are Deepfakes Ethical?
Yes and no. Deepfakes are ethical because they can be a tool for technological and entertainment purposes. Deepfakes are not ethical because they have the potential to be used as a weapon through the spread of misinformation.
In a nutshell, that is the basic answer that most accurately describes the status quo on the topic. Obviously, the above statements are extremely over-simplified and should be looked at critically at all times.
It can be argued that the development of deepfake videos, images and audio should be avoided at all costs, due to their many applications for criminal activities, as well as their malicious use to manipulate public (or personal) opinion about an individual or organization.
However, this goes against the reasoning that the tools used in deepfake media development can be an essential tool in the world of entertainment and technology.
Regulating Deepfakes: Ethical Considerations
We can safely take it as a fact that deepfake technologies are here to stay. This has some direct legal and ethical consequences which should be dealt with sooner rather than later by governments.
How these measures should be shaped will largely depend on how well democratic governments around the globe are equipped to combine the technical, socio-political and regulatory aspects of the problem, in order to successfully deal with the challenges deepfakes pose to societies. Direct action should be taken in the following areas:
- Immediately deterring the creation of socially harmful deepfakes
- Making sure that lawmakers and regulators are knowledgeable about the technical details and implications of deepfake technology
- The availability of accurate resources to deal with the harmful implications of deepfake technologies (i.e. anti-misinformation tools)
- Economically encouraging anti-deepfake measures that target the harmful aspects of deepfakes, as well as legally deterring those harmful aspects at the same time
The question is how fast regulators and other governmental bodies are able to respond to the rapidly developing world of deepfake creation. Perhaps, in the light of ethics, one should not look to the government for quick solutions, but instead encourage large, influential technology, social media and media corporations to lead the fight where possible.
Governmental bodies can counter the adverse effects of emerging technologies through five different regulatory approaches, although these institutions will likely move a lot slower due to obvious bureaucratic reasons:
- Adaptive regulation: Close monitoring of technological developments and adapting the regulating strategies accordingly.
- Regulatory sandboxes: Immediately run real-world experiments and tests of new approaches, e.g. to accelerate anti-deepfake technology.
- Outcome-based regulation: Only focus on outcomes and performance of policies and replace (based on expert advice) where performance is lacking.
- Risk-weighted regulation: Chop up policies into sectors and accelerate the high-risk regions of those regulations. Prioritize where action is needed right now.
- Collaborative regulation: Analysis of international efforts, adopt policies that are effective elsewhere. Adapt these policies on the fly for the local situation as soon as possible.
The actions taken by regulators will have a major influence on the ways in which deepfake tech is used. Immediately and powerfully countering the political influence of deepfakes would be an obvious first step to keep election processes fair and avoid misinformation.
One could think of aggressive fact-checking in e.g. presidential election processes, and motivating media outlets to highlight those aspects of the election process where misinformation was used. Measures like these can only mitigate the inevitable influence deepfakes will have on the future election processes of national and international societies.
Greyfakes: The Good And The Bad
Where you can best draw the line as to what can and cannot be allowed is one of the hardest challenges in the entire fight against malicious deepfakes. The term ‘greyfakes’ has been used to pinpoint the blurred line between the positive and negative impacts of deepfake technologies.
Living in a free, open and democratic society means that we cannot simply make all deepfakes illegal. Just like we cannot make cars illegal because of all the road accidents. Every technological achievement comes with negatives, and it is the task of the regulators to remove those adverse effects without limiting the freedom of its citizens. A highly debatable point of view, by the way, because this opinion in itself is inevitably politically charged.
As long as there are positive impacts of deepfake tech, it is unlikely that governments would go to extremes like these to protect ‘the truth’ in the political and media sphere. We can take that much from other technological developments we have seen in the past. A good example is how governments responded to illegal downloads of music when the peer-to-peer software Napster came out. It’s better to provide alternatives, than to shut down an inevitable technological development.
Teaching The Public To Be Critical
Even with strong regulatory measures, ethical or not, we as a global society have to deal with the fact that deepfakes are here. And they are here to stay. Once the cat is out of the bag, it becomes very difficult (if not impossible) to put it back in.
This is exactly why the best thing societies can do is likely to mitigate and reduce the effects of manipulated media content created by deepfake technology. The best investment is perhaps public education and the spreading of awareness.
Many people are easily swayed by statements on the internet, but few stop to think that some statements or media content might not be based on truth at all. Providing the public with the educational tools to deal with the new reality of deepfakes will help them be more critical of their surroundings.
If there is one thing you should take away from this article, it is that action should be taken now. Not only to regulate the adverse effects of socially harmful deepfakes, but also to inform the public that these technologies exist, how to deal with them, and how to keep questioning everything and anything around them at all times.