7 Reasons Why Deepfake Technology Is the Worst Invention

Daily writing prompt
What technology would you be better off without, why?

It is not an easy task to decide which technology we would be better off without, as most inventions come with their advantages and disadvantages. After much deliberation, I finally decided that deepfake technology is the most harmful modern invention. 

It could be argued that the use of deepfakes can be fair or justified in certain contexts. For instance, when they are created for parody and it is disclosed that the content is AI-generated, their use may be acceptable. Similarly, if an individual consents to the use of their likeness in media and benefits from its application, such usage can also be considered fair.

However, the immense potential for misuse makes it a singularly dangerous aspect of generative AI, with far-reaching implications for privacy, trust, and societal integrity.

I wrote a previous blog on the power of honesty to change and transform our lives. Deepfake technology, however, is the antithesis of truthfulness and integrity, and runs directly counter to those core values.

This blog explores seven important questions that anyone concerned with the regulation of deepfakes needs to consider.

How can deepfakes spread lies and make it hard to trust what we see online?

Deepfakes are, quite simply, created to spread lies. Their creators can take someone’s likeness and fabricate a video, image, or audio clip that does not truly represent that person.

Anecdotally, whenever I watch a video featuring a common deepfake target such as Elon Musk, Joe Rogan, or Leonardo DiCaprio, I now have to verify its origin, question what is being said, and look for clues that it might be AI-generated.

As more and more content is deepfaked, the natural response will be to distrust the internet as a source of information altogether. Photographic, video, and audio evidence will lose much of its value.

How do deepfakes invade people’s privacy or harm their reputations?

Deepfakes often violate the notion of consent completely.

Most deepfakes are pornographic: someone takes a photo of an actress, celebrity, girlfriend, or ex-girlfriend and uses generative AI to insert the victim into an existing video. As the technology matures, it will become harder to tell whether a video is real or fake, and there will be fewer limits on what can be generated. The most frequently deepfaked subjects are South Korean singers and actresses, who accounted for 53% of all deepfake pornography victims in 2023.

In the long term, victims of deepfakes may have their reputations significantly harmed. For instance, there was the case of Alex Wolfe, who won Young Composer of the Year in 2012 and was later convicted of selling deepfakes of 15 women via Reddit and X.

Could deepfakes be used to steal identities or trick people into giving away sensitive information?

Perhaps the most dangerous use of deepfakes is in scams. Any new technology is often used in criminal activity, and deepfakes are no exception.

Deepfaked celebrities, and even cloned social media accounts, exploit someone’s personal brand to extract sensitive information and steal money. For instance, Elon Musk has said in real life that he bought the Dogecoin cryptocurrency mainly because it would be ironic if the coin designed to satirise cryptocurrency became the world’s biggest crypto. It is no surprise, then, that Musk’s likeness is frequently used in cryptocurrency scams.

However, perhaps more worryingly, sophisticated deepfakes could even enable outright identity theft.

Imagine this scenario: you receive a phone call from your daughter saying she needs £3,000 to make rent. You think it’s her. It’s her number, her voice, and potentially even her face on a video call. She sounds desperate and genuine.

You might fall for it and send money via the link she sends to you on WhatsApp.

Unknown to you, however, your daughter’s likeness has been stolen and her number cloned by a criminal. You are £3,000 down and wondering how you could have fallen for it.

Who knows how far this deception could go in the future? Just thinking about the possibilities is terrifying.

How might deepfakes make political and social tensions worse?

Deepfakes are particularly dangerous for the political class. What politicians say and do can have a huge impact on campaigns. Deepfakes can be used by opposing political forces to make people turn against a candidate or party.

Deepfakes pose a real danger to the notion of representative democracy as they unfairly twist our narratives and perceptions. This fear about deepfakes and their potential to affect election results is echoed by the British public – almost 90% of the UK public are worried about this risk.

Can deepfakes allow people to deny real evidence?

A less obvious risk of deepfakes is that they can be invoked to deny something that really happened. If someone is filmed doing something illegal, an obvious defence is to claim that the video or image is fake.

People could also create fake evidence to be used as a defence in a court of law. If AI-generated images, audio, or video are convincing, and there is no reliable way to verify whether they are real, juries could be misled, and the ability to discern truth from deception becomes increasingly difficult, threatening justice for all.
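One existing building block for verifying recordings is cryptographic hashing: if a file’s fingerprint is logged at the moment of capture, any later edit becomes detectable. Here is a minimal Python sketch; the idea of an “evidence register” is illustrative, not a description of any real legal workflow:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest that uniquely identifies these exact bytes."""
    return hashlib.sha256(data).hexdigest()

# At capture time, the original recording is fingerprinted and the digest
# is logged somewhere tamper-evident (a hypothetical evidence register).
original = b"...raw bytes of the original video..."
registered_digest = fingerprint(original)

# Later, a file presented as evidence can be checked against the register.
presented = b"...raw bytes of the original video..."
assert fingerprint(presented) == registered_digest   # bytes are unmodified

tampered = presented + b"one extra frame"
assert fingerprint(tampered) != registered_digest    # any edit changes the digest
```

This only proves a file matches a previously registered original; it cannot, by itself, tell a genuine recording from a deepfake registered at creation time, which is why provenance must be anchored at the point of capture.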

Why is it so hard to control or stop the misuse of deepfake technology?

There are two reasons why it is hard to control and stop the misuse of deepfake technology.

The first is that the technology moves at such a frightening pace that it is difficult to prevent its misuse. The second is that it now has such a low barrier to entry that authorities, social media companies, and other gatekeepers will struggle to police it. It used to be a technological challenge to deepfake anything.

Think about how much skill convincing Photoshop or video editing demands: even a professional would take a long time to fake something well, let alone an amateur. In this AI age, however, these tools are so easy to use that there will be almost no limit on how much deepfake content is out there.

How can we protect ourselves from deepfakes?

To protect ourselves from deepfakes today, we need to be aware that the technology exists and understand its limitations. Deepfakes often leave subtle visual clues, and we need to learn to recognise them.

Visible and invisible digital watermarks can also be implemented, and should be mandated by regulators to ensure accountability and authenticity. I also believe this content should be traceable to its creator, as that would make people think twice before creating and disseminating harmful content.
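To make the traceability idea concrete, here is a minimal Python sketch of a signed provenance tag using only the standard library. The creator key and record format are hypothetical simplifications; real content-credential systems (such as the C2PA standard) are far more elaborate:

```python
import hashlib
import hmac
import json

# Hypothetical signing key, issued to the creator by a platform or tool vendor.
CREATOR_KEY = b"secret-key-issued-to-creator-42"

def tag_content(media: bytes, creator_id: str) -> dict:
    """Attach a traceable provenance record: who made the content, plus a
    signature binding that claim to these exact bytes."""
    record = {"creator": creator_id, "sha256": hashlib.sha256(media).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(CREATOR_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_tag(media: bytes, record: dict) -> bool:
    """Check that the bytes are unchanged AND the creator claim is genuine."""
    claimed = {"creator": record["creator"], "sha256": record["sha256"]}
    if hashlib.sha256(media).hexdigest() != claimed["sha256"]:
        return False  # media was edited after tagging
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(CREATOR_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

video = b"...generated media bytes..."
tag = tag_content(video, "creator-42")
assert verify_tag(video, tag)                 # authentic and untouched
assert not verify_tag(video + b"edit", tag)   # edited bytes fail verification
```

A scheme like this makes a creator claim checkable after the fact, which is exactly the kind of accountability a watermarking mandate would need in order to have teeth.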

Conclusion

In conclusion, deepfake technology symbolises some of the most distressing developments of the modern era. While it holds potential for some valid purposes, such as entertainment and parody, its capacity for harm far outweighs its benefits.

From eroding trust in media and exploiting privacy to creating risks in legal, political, and personal spheres, deepfakes challenge our ability to discern truth from fiction.

The rapid evolution of this technology demands urgent action from regulators, tech companies, and individuals alike to mitigate its misuse. Only by fostering awareness, implementing safeguards like watermarks, and holding creators accountable can we hope to navigate the ethical and societal challenges posed by this deeply unsettling innovation.
