The digital world has transformed how societies think, act, argue, celebrate, protest, and sometimes even fracture. In this vast and volatile new public square, a single post can ignite debate, shape electoral sentiment, trigger communal tensions, or spark mass mobilisation. Amidst this rapidly shifting landscape, the Supreme Court of India, on 27 November 2025, issued a consequential directive that may reshape the country’s digital future. The Court asked the Ministry of Information and Broadcasting to prepare a draft mechanism for pre-screening user-generated content before it is uploaded on social media. This marks a profound shift, moving India toward a preventive model of digital regulation rather than a reactive, takedown-based one.
The bench of Chief Justice Surya Kant and Justice Joymalya Bagchi described the right to free speech as important, but emphasised that in India it is a “regulated right,” unlike the near-absolute protection of the First Amendment in the United States. The judges reflected openly on the speed and scale at which online content spreads. If an individual posts material that is inflammatory, obscene, defamatory, or potentially harmful to national security or public order, it can reach thousands or millions within hours. By the time the state notices it, assesses it, and orders a takedown, the harm may already be done. The Court noted that in India’s emotionally charged and socially diverse environment, even a single provocative post can lead to unrest, moral panic, or communal trouble. In the Court’s eyes, the existing framework (self-regulation by platforms, guidelines for intermediaries, and after-the-fact legal remedies) is proving increasingly inadequate.
This judicial intervention arises in a larger context: the extraordinary rise of creators who publish independently, without editorial oversight, without institutional backing, and without the checks and balances that traditional media employs. A YouTuber, podcaster, or influencer today may command a larger audience than the average newspaper. Yet such creators often face no framework that ensures responsibility. According to the Court, self-regulatory codes may work for mainstream broadcasters or OTT platforms, but not necessarily for the sprawling universe of user-generated content, where anyone with a smartphone is a broadcaster.
At the heart of this issue lies a paradox. Social media democratised speech, dismantling barriers of geography, class, caste, education, and institutional power. The powerless found a voice; the marginalised found a platform; dissenters found a megaphone; movements such as #MeToo, the farmers’ protests, and anti-corruption campaigns found momentum. Yet the same platforms have enabled misinformation, hate campaigns, defamation, deepfakes, conspiracies, and targeted harassment. They have magnified not only voices but also vulnerabilities.
The Court’s order is both a recognition of this reality and a response to it. But the implications are far deeper than a procedural shift. Pre-screening content before publication touches the very core of democratic expression. It raises questions about censorship, individual autonomy, state power, and the delicate architecture of free speech.
There are compelling arguments in favour of such pre-screening. The most immediate is the velocity problem. In earlier eras, harmful content travelled slowly through pamphlets, word of mouth, or newspapers with limited reach. Today, one viral video can create chaos before authorities even finish their morning tea. A rumour about communal targeting, a deepfake of a political leader, a doctored clip that inflames sentiments, or a fabricated story about child abductions can lead to lynchings or riots. India has repeatedly witnessed violence triggered by false information spread online. In such situations, a reactive framework is not simply inadequate; it is futile. Once a spark becomes a fire, taking down the original post is like shutting the stable door after the horse has bolted.
Pre-screening could also better protect vulnerable groups. Women, minorities, children, persons with disabilities, LGBTQIA+ individuals, and numerous others are often targets of harassment or objectification online. Content that demeans or endangers them tends to spread rapidly because sensationalism thrives in the digital ecosystem. A preventive filter could reduce harm to reputation, dignity, and mental well-being. For victims facing viral harassment, even a few hours of exposure can have devastating consequences.
Supporters also argue that pre-screening may elevate the quality of content overall. Knowing that material will be vetted before publication could encourage creators to be more thoughtful, responsible, and accurate. Much of social media thrives on impulsive posting, driven by instant gratification, outrage, or the chase for clicks. A delay caused by screening might slow down the most reckless impulses, leading to more deliberate, less inflammatory communication.
Yet even the strongest arguments in favour of pre-screening pale beside the concerns it raises, concerns that are not theoretical but grounded in constitutional principles, historical experience, and global evidence.
Foremost among these concerns is the threat to free speech. Any mechanism that evaluates speech before publication automatically becomes a form of prior restraint, one of the most dangerous tools in the state’s arsenal. Even if the intentions are noble, the potential for misuse is enormous. India has a long history of vague terms, such as “anti-national,” “offensive,” “against public morality,” and “hurting sentiments,” being used to stifle dissent, satire, art, activism, and minority viewpoints. Once a pre-screening mechanism exists, governments of the future may be tempted to expand its scope, using it not only to prevent harm but to silence criticism.
Vagueness is another problem. Who decides what is harmful or offensive? Without precise definitions, moderators, whether human or algorithmic, may err on the side of caution, blocking legitimate speech out of fear of repercussion. Satire, nuanced political commentary, artistic expression, dissent, and cultural critique could all fall victim to over-regulation. An AI filter may misinterpret a political cartoon; a human moderator may block content that is inconvenient to dominant narratives.
Beyond conceptual concerns lie practical challenges. The sheer volume of content produced by Indian users every minute is staggering. To screen every post before publication would require massive infrastructure, both technological and human. Automated systems, however advanced, struggle with context, cultural nuance, satire, slang, code-switching, and the subtleties of India’s countless languages. Human moderators, no matter how well trained, bring their biases. Delays in approval may frustrate users, distort conversations, and fracture the spontaneity that defines digital culture. In practice, such regulation could become slow, costly, inaccurate, and intrusive.
There is also the issue of privacy and anonymity. To pre-screen effectively, platforms might be compelled to collect more information about users, verify their identity, or store sensitive data. This could normalise digital surveillance, allowing the state, or private entities, to track citizens’ opinions, associations, and ideologies. In moments of political tension, such data could be misused to target dissenters or suppress protests. Anonymity, which has historically protected whistleblowers, activists, and vulnerable communities, could be lost.
Moreover, pre-screening risks entrenching dominant narratives and excluding marginalised ones. Historically, dissenting ideas often begin as minority opinions. Social media enabled many such voices to bypass traditional gatekeepers. But if a centralised authority decides what is fit for publication, voices that challenge the mainstream may be filtered out. This is especially dangerous in a country as diverse as India, where communities differ drastically in beliefs, histories, and sensitivities. What one group views as necessary social critique, another might consider offensive. Moderators may align with the majority perspective, marginalising those with unconventional or critical viewpoints.
The Court’s directive arrives at a pivotal moment in global debates on digital governance. Many countries are grappling with a similar dilemma: how to protect citizens from harmful content without infringing on fundamental freedoms. Democracies worldwide are discovering that content moderation is not merely a technical problem but a philosophical one. There is no perfect solution. Any regulation must navigate complex trade-offs between safety and liberty, order and creativity, protection and autonomy.
The Indian context adds layers of complexity. Social media has not only democratised speech but also exposed the fault lines of caste, religion, region, language, gender, and politics. A preventive regulation might indeed reduce incidents of violence or harassment, but it may also curtail the very spaces where social justice movements gather, where whistleblowers speak up, and where unpopular truths are voiced.
The key question is not whether regulation is needed (some form of regulation is undeniably necessary) but how it is designed. A harmful design could erode rights. A thoughtful design could balance freedom and safety.
For pre-screening to be legitimate and humane, several safeguards are essential. The standards for what constitutes prohibited content must be narrow, objective, and clearly defined, leaving little room for subjective interpretation. The authority that screens content must be independent, transparent, and politically neutral, with no direct control by the government. Every decision to block content must be documented, communicated to the creator, and subject to appeal before an impartial body. Users’ privacy must be protected rigorously, with no unnecessary collection of sensitive data. Any automated system must be used only for simple categories of harmful content, such as known hate slurs or explicit violence, while human moderators handle more nuanced cases. Public consultation should be central to shaping the policy, ensuring that creators, civil society, technologists, media scholars, and marginalised communities all have a voice.
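To make that tiered design concrete, here is a minimal, purely illustrative sketch in Python of the principle that automation handles only unambiguous categories while humans judge everything nuanced. The `BLOCKLIST`, the decision categories, and the placeholder terms are hypothetical, invented for illustration; they are not part of any actual proposal before the Court or the Ministry.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    BLOCK = "block"          # unambiguous automated match, creator notified
    HUMAN_REVIEW = "review"  # anything nuanced goes to a person
    ALLOW = "allow"


# Hypothetical closed list of unambiguous terms. A real list would be
# narrow, published, and maintained by an independent body.
BLOCKLIST = {"example_slur_1", "example_slur_2"}


@dataclass
class ScreeningResult:
    decision: Decision
    reason: str  # documented so the creator can appeal


def pre_screen(post_text: str) -> ScreeningResult:
    """Toy tiered pre-screening: automate only the unambiguous cases."""
    text = post_text.lower()
    tokens = {t.strip(".,!?\"'") for t in text.split()}

    exact = tokens & BLOCKLIST
    if exact:
        # Automated tier: exact hits against a narrow, known list.
        return ScreeningResult(Decision.BLOCK, f"blocklist match: {sorted(exact)}")

    # Context a machine cannot judge (quotation, satire, reporting,
    # code-switching) is escalated to a human, never auto-blocked.
    if any(term in text for term in BLOCKLIST):
        return ScreeningResult(Decision.HUMAN_REVIEW, "possible match in context")

    return ScreeningResult(Decision.ALLOW, "no automated grounds to withhold")


# Example: a borderline token escalates rather than auto-blocks.
print(pre_screen("They wrote example_slur_2-adjacent abuse in the comments").decision)
```

Even this toy exposes the worry running through this essay: an exact-match rule cannot distinguish use from mention, so a journalist quoting a slur in order to condemn it would be blocked automatically. That is precisely why the appeal route and the human tier described above are non-negotiable.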
Most importantly, dissent, critique, and satire must remain protected. A democracy loses its vitality when criticism is stifled. A society cannot mature if it sanitises or sterilises its discourse. Regulation should aim at preventing real harm, not at controlling political speech or shaping social narratives.
The Court’s directive is significant not just for what it mandates but for what it symbolises. It acknowledges that India stands at a crossroads in its digital journey. The country can move toward a future where online spaces are safer, more respectful, and less prone to violence, but also more controlled, more filtered, and perhaps less democratic. Or it can strive for a balance: a digital ecosystem where responsibility and freedom coexist, where regulation curbs harm without curbing thought, where the state protects citizens without policing their minds.
The challenge will be immense. Designing a pre-screening mechanism that safeguards both public order and constitutional freedoms will require sensitivity, wisdom, foresight, and humility. It will require accepting the complexity of speech, the diversity of Indian society, and the inevitability of disagreement. It will also require remembering that free expression is not merely a legal right but a cultural and moral value, one that allows societies to grow, challenge themselves, and correct their course.
The title “Before the Click” captures the dilemma aptly. At that moment before a post goes live—before the click—we stand on the edge between possibility and peril. The digital world has opened doors humanity never imagined. But it has also opened questions we are still struggling to answer.
India now has the opportunity to craft a model of digital governance that is neither libertarian nor authoritarian, but deeply democratic. Whether it succeeds will depend not only on laws and guidelines but on the values we uphold. A society’s commitment to free speech is tested not when it protects popular opinions, but when it protects unpopular ones. A society’s commitment to harmony is tested not when it silences dissent, but when it channels disagreement into dialogue.
The coming months will determine whether India builds a framework that respects both freedom and responsibility or tips too far in favour of one at the expense of the other. In the balance lies the future of India’s digital democracy: vibrant, messy, pluralistic, and indispensable.