Photo by Marten Bjork on Unsplash
Something unusual happened in Thiruvananthapuram recently. Kerala's cyber police filed a formal case and an FIR not just against an individual user on social media, but against the platform itself - X, the company formerly known as Twitter and now owned by Elon Musk. The trigger was a short AI-generated video, just one minute and seventeen seconds long, allegedly showing Prime Minister Narendra Modi and senior officials from the Election Commission of India (ECI) in a distorted and misleading manner.
The account behind the post was identified as @valiant_Raju. But what made this case notable was that the police didn't stop at the user. They went after the platform too, sending formal notices to X demanding the video be taken down and asking for technical information about the account.
The FIR was filed at the Cyber Crime Police Station in Thiruvananthapuram after the video was flagged through official channels, including the ECI, which warned that such manipulated content could mislead voters and damage the credibility of constitutional institutions.
Authorities said the clip, shared by the account @valiant_Raju, had the potential to influence public perception during a politically sensitive period and undermine confidence in the electoral process.
The police didn't treat this as a minor online dispute. They registered the case under the Bharatiya Nyaya Sanhita and the Information Technology Act, invoking provisions that cover defamation, misinformation, and the distribution of manipulated digital media. In simple terms, these are serious legal provisions, not a light warning.
India holds the world's largest democratic elections. Hundreds of millions of people vote, many of them relying on the information they receive online to form their opinions. In this environment, a viral video, even one that is completely fabricated, can shape how people think about candidates, institutions, and the process itself.
Experts note that the spread of manipulated political content has become a global concern, particularly during elections, where even a short viral clip can shape narratives or influence voter perception. This is not an abstract worry. Elections around the world have already seen deepfakes used to confuse voters and discredit leaders.
India's election regulator clearly recognised this risk. The fact that the ECI itself flagged the video through official channels and that the cyber police responded quickly suggests that there is at least some awareness within the system of how fast this kind of content can spread and how much damage it can do.
Here is where things get genuinely complicated. Filing a case against a platform like X raises serious questions about free speech online. India has a long and sometimes contentious history of using IT laws to act against social media content, and critics have often argued that these laws can be misused to silence legitimate criticism or political commentary.
What this incident highlights is the urgent need for clearer rules, stronger digital literacy, and greater transparency from both governments and technology platforms. Social media companies must strengthen safeguards against harmful misinformation, while authorities must ensure that enforcement measures are fair, proportionate, and not used to suppress legitimate criticism or dissent.
That is the tension at the heart of this story. AI-generated misinformation is a real threat to democracy. But heavy-handed legal responses to online content are also a threat, a different kind, but no less serious.
The answer cannot simply be "take down everything that looks suspicious." Satire, parody, and criticism of powerful people are essential parts of democratic life. A deepfake designed to defame is different from a cartoon mocking a politician, but the legal tools used to address the former can easily be turned against the latter.
The Kerala case should push three conversations forward simultaneously. Platforms like X need to take faster, more consistent action against AI-generated content that is clearly designed to mislead, not just when police come knocking. Governments need laws that are precise enough to target genuine disinformation without becoming instruments of political censorship. And citizens need better tools to identify what is real and what is not.
As AI technologies become more accessible, authorities say they are strengthening digital monitoring mechanisms and collaborating with social media companies to detect and remove misleading material more quickly. That cooperation between regulators and platforms is necessary. But cooperation works best when the rules are clear, the process is transparent, and the power to remove content is not concentrated in any one place.
A one-minute-seventeen-second video sparked a national conversation. That, at least, is a sign that people are paying attention. The question is whether the response, legal, technological, and social, will be thoughtful enough to actually protect democratic institutions without undermining the freedoms that make democracy worth protecting in the first place.