A series of deep-fake digital creations that appeared to feature Indian actors recently went viral, showing how sharply this threat has grown. In one, viewers would think they’re watching actor Rashmika Mandanna, but it’s only her face on someone else’s body in an undignified clip that’s offensive not just to her, but to all. Then there was an image meant to be of Katrina Kaif, another objectionable fake. Kajol too has been a victim of artificial imagery that could fool anyone. These are not clumsy mock-ups. They are part of the online wonderland being ushered in by advances in artificial intelligence (AI). Prime Minister Narendra Modi has been targeted as well, as he revealed some days ago while noting the technology’s dangers; a fake video depicted him performing the garba, a folk dance. That he brought it up at this week’s virtual G20 summit is a sign of how seriously the risks are being taken at the top-most levels. What was once an almost academic worry among the few who were up to date with technology now needs to become common knowledge among the masses. With general elections not too far away, it is crucial that this happens fast.
As AI is a double-edged sword, with much to offer that’s novel even as it endangers us in new ways, it is a clear candidate for a strict set of safety rules. Left to itself, the AI market cannot be relied upon to restrain misuse, at least not in good time. Policymakers are seized of the matter. A Union minister recently hinted at legislation underway to regulate AI. The Bletchley Declaration made in the UK jointly by many countries earlier this month noted risks “stemming from the capability to manipulate content or generate deceptive content,” and called for global action to address them. Yet, it will be a while before we get a regulatory framework in place that can contain the menace of deep-fakes. Not only does their proliferation tell a story, so does the speed of their ascent up the learning curve (‘deep learning’ is what generative AI tools do). While we have faced fake news before, we are now at an inflection point. So life-like is the output of some tools in use that we can no longer be sure whether anything seen on a screen is real. For those who spot themselves to their horror in such clips, some of which are sexually explicit, it is unclear what remedial action could be taken that might help undo the damage. What goes online is hard to scrub; such is the nature of cyberspace. The famous at least have access to media platforms that help them call out fakes, so that audiences are not misled. Ordinary folks do not have this option and may have to suffer distorted portrayals without a chance to be heard.
The macro-level danger that looms is to democracy. Our political system makes leadership a matter of public choice, but its basic logic would be impaired if truth and falsehood become difficult for voters to tell apart. With so much opinion shaped by what pops onto social media feeds on hand-held screens, deep-fakes that make the rounds of our political arena could distort perceptions and outcomes. Since all politicians face more or less the same risk of mala fide mis-portrayal, they might as well make a concerted effort to caution electorates about how realistic today’s fake imagery has become. It’s not an easy message to get out. For truth to prevail, though, we must get it out. We humans have evolved to trust our eyes. But if it’s on a screen, no matter how real it looks, seeing is no longer believing. Let’s spread the word.