
Deep fakes just complicated the fake news problem

The more audio-video material accessed and analysed, the better the deep fake.

Late last year, an anonymous Reddit user, @deepfakes, did something that got them banned from a platform that prides itself on free speech: they released a set of porn videos with the faces of celebrities superimposed on the faces of the porn stars.
In addition, a small desktop application, FakeApp, was released that allowed other users to do the same. Unlike earlier attempts at such videos, these were created using sophisticated machine learning techniques that parsed a large pool of pre-existing genuine images and videos to produce fakes that looked almost real.
Artificial intelligence-created pornography seemed to be the next big thing, with consequences in the real world. Suddenly revenge pornography seemed possible, as did the prospect of famous people turning up in highly compromising videos. And while people were still grappling with the ethics of this new development, deep fakes promptly spread to another area: news.
Deep fake is the term for seemingly real audio-visual material created by manipulating pre-existing audio-visual material with highly sophisticated AI tools that are freely available on the internet. The more audio-video material accessed and analysed, the better the deep fake. Right now, the technique works best on celebrities, or on anyone with a large audio-visual digital footprint, such as people who post lots of selfies or videos of themselves.
Deep fakes, or rather their underlying technology, Generative Adversarial Networks (GANs), have applications everywhere from medical research to the entertainment industry. And where a technology has applications, it also has the tendency to be misused. In increasingly polarised societies across the world, the use of deep fakes can have dramatic consequences.
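For readers curious how a GAN's adversarial tug-of-war works, here is a deliberately tiny sketch, an illustration only and nothing like a real deep-fake system: a two-parameter "generator" learns to mimic a simple number distribution by trying to fool a "discriminator". Scaled up to deep networks trained on hours of face footage, the same loop is what powers deep fakes. All numbers and names below are invented for illustration.

```python
# Toy 1-D GAN: generator g(z) = a*z + b tries to mimic data drawn from N(4, 1);
# discriminator D(x) = sigmoid(w*x + c) tries to tell real samples from fakes.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # The "genuine footage": samples from a normal distribution centred at 4.
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0   # generator parameters (starts far from the real data)
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.02

for step in range(2000):
    n = 32
    xr = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    xf = a * z + b                      # fake samples

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    pr, pf = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w += lr * np.mean((1 - pr) * xr - pf * xf)
    c += lr * np.mean((1 - pr) - pf)

    # Generator update: push D(fake) toward 1 (non-saturating GAN loss).
    pf = sigmoid(w * xf + c)
    a += lr * np.mean((1 - pf) * w * z)
    b += lr * np.mean((1 - pf) * w)

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
print(f"real mean ~4.0, generator now produces mean {fake_mean:.2f}")
```

Note how the generator only ever "sees" the real data through the discriminator's verdicts; giving the discriminator more real samples per step sharpens those verdicts, which mirrors the article's point that more source material yields a better fake.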
In an Indian context, imagine a deep fake of a popular leader calling for riots, for secession, or for someone's head on a platter. The consequences would be devastating. The world of deep fakes is the next frontier in the fight against fake news, and it has already been seen in action.
Earlier this year, a deep fake showed former US President Barack Obama saying fairly unparliamentary things about his successor, President Donald Trump. Another showed President Trump seeming to intervene in Belgian politics. Both were put out to send a message about the ever-growing problem of deep fakes, yet they still managed to fool enough people. Part of the problem is that human beings believe what they want to believe.
The issue of fake news is gaining ever more importance in a world increasingly dominated by digital technologies. As more people come online, fakes permeate their information ecosystems with alarming ease.
India has already seen this in action, with several lynchings following rumours spread on WhatsApp about suspected child traffickers. Fake news busters are working beyond capacity to debunk such claims, but the fakers have a first-mover advantage. Now, with sophisticated machine learning tools available as freeware, the fake news cottage industry will be able to churn out ever more realistic fakes.
With deep fake videos, especially those built from thousands of hours of footage of someone famous, the potential to confuse the world is enormous. Imagine a deep fake in which the President of the United States calls for war, the Premier of China calls for the invasion of Taiwan, or an Indian leader calls for a stock market shutdown. By the time the video was exposed as fake, the damage to nations, societies, and economies would already be done. At an individual level, reputations can be ruined if deep fakes of famous people doing unsavoury things hit the interwebs.
Across the world, faith in the media is diminishing. As people move away from mainstream media to platforms that reflect their own biases, the problem of fakes is only going to get worse. Right now there seems to be no solution available except one: mass media literacy classes for the populations of various countries, so that people are not taken in by fake news. For where people can be fooled by a leader calling for a riot, they can also be fooled by someone demanding money from them.
As India approaches the general elections, we will have to be extra alert not just for fake news but for deep fakes, which are far harder to identify. As the IT cells of major political parties gear up their cyber armies for battle, disinformation is going to be a large part of the arsenal. And where there is disinformation, there will be fake news. Deep fakes have just complicated the whole battleground.