Close your eyes and picture this: a video of a political leader making a sudden policy announcement, an audio note from your boss telling you to urgently transfer money, or a clip of a celebrity saying something outrageous. Now, picture none of it ever happening. That’s not sci-fi anymore; that’s the day-to-day reality of deepfakes.
Deepfakes are AI-generated or heavily manipulated audio-visual content that can mimic real people with unsettling realism. And they're no longer rare. One recent analysis estimated that the number of deepfake files exploded from around 500,000 in 2023 to nearly 8 million in 2025, with deepfake-driven fraud attempts rising by about 3,000% in 2023 alone. The question is no longer whether we'll encounter a deepfake, but whether we'll recognise it when we do, and what that constant doubt does to our idea of truth.
The pressing question is: are we legally protected?
And the uncomfortable answer is: only partially.
A recent India-specific study found that over 75% of Indian respondents had seen deepfakes in the past 12 months, and 38% reported being targeted by a deepfake scam. Globally, one report comparing Q1 2023 with Q1 2024 recorded a 280% year-on-year rise in deepfake-related incidents in India. The problem here is compounded by several factors: a huge and rapidly growing digital population, the urban-rural divide, and, above all, widely varying levels of digital literacy. Fintech, built on digital platforms and applications, has also landed squarely in the firing line of deepfake technology. A real-life example: both the National Stock Exchange of India (NSE) and the Bombay Stock Exchange (BSE) have publicly warned investors about deepfake videos of their executives giving fake stock tips. The legal question, then, is not merely academic; real financial, reputational, political, and personal harms are already happening.
So a question naturally arises: is the Indian legal system capable of rolling out regulations that protect us, and does the country have the digital infrastructure to keep everyone safe? India currently has no dedicated "deepfake statute". Regulators and law enforcement must instead stretch existing laws, often awkwardly. For example:
The Information Technology Act, 2000 and its amendments (IT Act) cover intermediary liability, content removal, and cyber harassment, so certain deepfake harms (non-consensual intimate imagery, defamation, impersonation) may fall within its scope. The Indian Penal Code, 1860 (IPC), which deals with defamation, identity theft, and cheating, can also be invoked. Privacy and data-protection rules may apply to the misuse of a person's likeness or biometric identity, though the implementing framework under India's data-protection law is still taking shape.
However, researchers argue, and rightly so, that "India has no dedicated legal framework to regulate deepfakes; while India has laws addressing cybercrime, defamation, and data protection, they are not expressly designed for this technology."
Deepfakes have become more than a technological novelty in India; they have turned into a deeply personal threat. With nearly every Indian carrying a smartphone, our faces, voices, and fragments of our lives live online in some form, and that is exactly what makes deepfakes so dangerous here. In a country where forwarded videos often spread faster than facts and digital literacy varies widely, one manipulated clip can ruin reputations, trigger chaos, or inflame social tensions. The harm is not only theoretical; it is real and often irreversible. India's social fabric, multilingual, emotional, and hyper-connected, makes both individuals and society uniquely vulnerable to the emotional and psychological damage deepfakes can inflict.
Legally, however, the system has a lot of catching up to do. India has no dedicated deepfake law, so victims must wend their way through a maze of provisions scattered across the IT Act, the IPC, and privacy rules, none of which was originally designed with AI-generated impersonation in mind. Enforcement agencies struggle with attribution, platforms struggle with detection, and ordinary people struggle to know what is real. The result is a kind of collective uncertainty: if anyone can fake your identity in seconds, what does "proof" even mean anymore? This gap leaves victims feeling unshielded and pushes them into long legal battles just to have the manipulated content taken down. India is trying, through draft rules, advisories, and court interventions, but until the law evolves as fast as the technology, many citizens will keep moving through the digital world with the uneasy feeling that even their own image might not fully belong to them.
India is slowly but surely waking up to the deepfake crisis. Realising that synthetic media can no longer be dismissed as fringe experimentation, the government has started taking stronger action. In October 2025, the Ministry of Electronics and Information Technology drafted an amendment to the IT Rules aimed squarely at deepfakes, proposing measures such as labelling AI-generated content, obtaining user declarations, and laying out verification systems. Earlier advisories from 2023 and 2024 pushed platforms to remove such content within 36 hours, and the Election Commission tightened this to just three hours during election season, a reflection of how dangerous a single fake political clip can be in a country as sensitive and diverse as India. The judiciary too has waded in: courts have ordered major platforms such as Meta and X to take down AI-generated obscene deepfake videos, signalling a growing willingness to treat these cases as urgent and distinct harms. These steps show both that India is attempting to respond and how reactive the system remains, scrambling to keep pace with an ever-evolving threat.
Ahead, the road to real protection runs through a more integrated, future-ready legal architecture. Experts have repeatedly highlighted the need for dedicated deepfake legislation that defines synthetic media, distinguishes malicious creations from harmless AI art or satire, and assigns responsibility along the tech supply chain, not just to social platforms but to model developers and tool providers too. Victims need faster takedown mechanisms, a genuine right to erasure, and avenues for compensation. India also has to build forensic infrastructure capable of detecting deepfakes across its languages and regions, while deepening cooperation with other nations to track cross-border offenders. Equally important is educating citizens, especially in rural and semi-urban areas, to question viral content rather than believe it implicitly. And as regulation tightens, it must also protect free expression, creativity, and satire; the cure must not become worse than the disease.
In India today, we are only partially shielded against deepfake dangers. New regulatory drafts and advisories are encouraging advances, but the absence of a dedicated legal regime, the rapid evolution of deepfake technology, and persistent enforcement problems leave many people vulnerable.
The future of digital trust in India depends on how well policymakers, platforms, and society respond. Will India become a model for democracies dealing with synthetic media? Possibly, but only if its legal framework matches the scale of the problem. As Hillary Clinton put it: "The 21st century is not just about who controls oil—it will be about who controls truth."
For the individual, the question remains open, and until the law and its enforcement catch up, the answer stays ambiguous.