Be very afraid of the dangers of ‘deepfake’ technologies

The Lok Sabha elections are upon us, and our social media apps will soon be inundated with Photoshopped images of politicians doing something silly or despicable. None of us will be spared. We have enough gullible friends and uncles who will forward them to us. Some of these images will go viral.

A far more sinister technology is looming: “deepfake” technology, driven by Artificial Intelligence (AI), produces videos of real people saying and doing fictitious things. Such videos are very difficult to detect.


What is it?

  • One has to collect a large number of images or videos of a real person and feed them into the program.
  • The software builds a detailed 3-D model of the person’s face, including different expressions, skin texture, creases and wrinkles.
  • One can make the person smile, frown, say anything, and transplant his or her face into an existing video.
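The final step of the pipeline above — transplanting a synthesised face into an existing video frame — can be illustrated with a toy sketch. This is only a naive alpha-blend on arrays; real deepfake systems fit a learned 3-D face model and match pose, lighting and expression, and the face location here is simply assumed known.

```python
import numpy as np

def paste_face(target, source_face, top, left, alpha=0.8):
    """Naively blend a source face patch into a target frame.

    Toy illustration of the compositing step only: the face
    position (top, left) is assumed to be already known, whereas
    real systems track it frame by frame with a 3-D face model.
    """
    h, w = source_face.shape[:2]
    region = target[top:top + h, left:left + w].astype(float)
    blended = alpha * source_face.astype(float) + (1 - alpha) * region
    out = target.copy()  # leave the original frame untouched
    out[top:top + h, left:left + w] = blended.astype(target.dtype)
    return out

# Toy 8x8 greyscale "frame" and a 4x4 "face" patch.
frame = np.zeros((8, 8), dtype=np.uint8)
face = np.full((4, 4), 200, dtype=np.uint8)
result = paste_face(frame, face, top=2, left=2)
```

Production face-swap code replaces the rectangular paste with seamless (Poisson) blending so the patch boundary is invisible — which is precisely why these videos are so hard to spot.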

Dangers of ‘deepfake’ technology –

  • The first popular use of deepfakes was in the porn video industry, where porn actors’ faces were replaced with celebrity faces. All such videos that were detected have been taken down. However, the real danger of deepfake technology lies in the areas of justice, news and politics.
  • Deepfake technology in the hands of irresponsible journalists could have deleterious implications. With the explosion of TV news channels and the resulting intense competition, media outlets are more willing than ever to air sensational stuff.
  • Besides, over the last decade, more and more of us are getting our information from social media platforms, where a vast number of users generate relatively unfiltered content.
  • We also tend to align our social media experiences with our own perspectives, which the algorithms are built to encourage, and turn our social media feeds into echo chambers. So we tend to pass along information without bothering to check if it is true, making it appear more credible in the process. Falsehoods are spreading faster than ever before.

The threat –

  • Imagine a deepfake video of a prominent Indian politician ordering the mass slaughter of a community, or the leader of a foreign power ordering a nuclear strike against India. Politicians, especially, are easy targets for deepfakes, as they are often recorded giving speeches while standing stationary at a podium, so only the lip movements have to be synchronised with the fake audio.
  • In fact, crude deepfake software is already available for free download on the internet. In 2016, at its annual conference, the American software company Adobe demonstrated a prototype called Adobe Voco.
  • The company claimed that if the software were fed 20 minutes of anyone speaking, it could generate pitch-perfect audio in that person’s voice according to any script.
  • It cuts the other way too. A politician could actually make a disgustingly communal or inflammatory statement and then claim that it was a fake video.

Way forward –

  • We need to develop an AI to fight this AI. Till then, we should be very afraid.
  • A proper information campaign against the menace of deepfakes needs to be driven by the Election Commission, with strict action against offenders, so as to dissuade political parties from using this technology.
  • Ultimately, consumers need to be made aware that information received online should be cross-checked against multiple sources, so as to minimise the risks posed by this menace.
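What might “an AI to fight this AI” look for? One research direction hunts for statistical artifacts that synthesis leaves behind, such as frame-to-frame flicker when each frame is generated independently. The sketch below is a toy heuristic under that assumption, not a real detector — deployed detectors are trained neural networks.

```python
import numpy as np

def flicker_score(frames):
    """Mean absolute frame-to-frame pixel difference across a clip.

    Toy heuristic: crudely synthesised videos can flicker because
    each frame is generated independently, so a high score *may*
    hint at tampering. Real detectors are trained neural networks;
    this only illustrates the idea of temporal inconsistency.
    `frames` is a list of equal-sized greyscale arrays.
    """
    diffs = [np.abs(a.astype(float) - b.astype(float)).mean()
             for a, b in zip(frames, frames[1:])]
    return float(np.mean(diffs))

# A steady clip vs. one whose brightness jumps on alternate frames.
steady = [np.full((4, 4), 100.0) for _ in range(5)]
jumpy = [np.full((4, 4), 100.0 + 30 * (i % 2)) for i in range(5)]
```

A steady clip scores 0, while the alternating clip scores high — but genuine footage with fast motion also scores high, which is why any real system combines many such signals rather than trusting one.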


