There is a quiet but important conversation happening inside India's courtrooms, not about any particular case, but about who, or what, gets to decide them. On April 4, 2026, the Gujarat High Court released a policy that drew a firm line: Artificial Intelligence will not be allowed anywhere near the core of judicial work.
No AI for writing judgments. No AI for bail decisions. No AI for evaluating evidence. No AI for interpreting law. The message was clear: the bench belongs to humans, and it will stay that way.
This is a decision worth understanding, because it arrives at a moment when AI tools are being used everywhere, from writing emails to helping doctors diagnose illness. So why should courts be any different? The Gujarat High Court's answer to that question is both practical and philosophical, and it deserves a closer look.
The policy was announced at a conference of district judiciary judges. In plain terms, it says that AI tools cannot be used for any part of the process by which a judge reaches a decision. This includes reading and interpreting the law, weighing facts, evaluating evidence, deciding someone's rights or liabilities, and writing the final order or judgment.
It also bars judges and court staff from feeding sensitive information into AI systems: details about parties to a case, witnesses, or anything related to a matter still being heard in court. The reasoning is straightforward: a judge is personally responsible for every word in every order they sign. That responsibility, the policy makes clear, cannot be handed over to a machine.
The court has not turned its back on technology entirely. AI is permitted to assist with legal research, looking up past cases, and identifying relevant precedents. It can be used for administrative tasks, managing case records, drafting internal circulars, and building training materials. In other words, AI can help with the paperwork around justice, but not with justice itself.
One firm condition runs through all of this: any output produced by an AI tool must be checked and confirmed by a human being before it is used in any official capacity. The machine can assist. It cannot certify.
The Gujarat High Court flagged three specific risks that make AI unsuitable for judicial decision-making: hallucinations, bias, and confidentiality breaches. The word "hallucinations" might sound unusual in a legal context, but it refers to something very real. AI tools, including the most advanced ones available today, sometimes produce information that is entirely made up. They can cite cases that do not exist, quote laws that were never passed, and present false information in the confident tone of established fact. In everyday life, this is an inconvenience. In a courtroom, where someone's liberty, property, or livelihood is on the line, it is a serious danger.
Bias is the second concern. AI systems learn from the data they are trained on. If that data reflects historical patterns of discrimination or unequal treatment, which much legal data does, the AI will absorb and reproduce those patterns. A tool that appears neutral on the surface may be quietly reinforcing old injustices beneath it.
The third worry is confidentiality. Court cases involve deeply personal information. When that information is entered into an AI system, it may be stored, processed, or exposed in ways that violate the privacy of the people involved. The court rightly treats this as a risk that cannot be taken lightly.
Beyond the specific risks, there is a larger point being made here about what courts are actually for. A court is not simply a place where information is processed and an output is generated. It is a place where human beings, caught in difficult and often painful situations, come to be heard and to receive justice. The judge listening to them is not just a processor of facts. They bring judgment, experience, empathy, and accountability to the work. They can be questioned, challenged, and held responsible for what they decide.
An AI system cannot be held responsible. It cannot be cross-examined. It does not understand what it means for a family to lose a home, or for someone to spend an extra year in prison because of a poorly reasoned bail decision. These are not flaws that better technology will eventually fix. They are fundamental differences between human judgment and machine processing.
The Gujarat High Court seems to understand this distinction clearly. By limiting AI to what the policy calls "the narrowest conceivable scope," the court is not being anti-technology. It is being pro-justice.
What makes this policy thoughtful rather than merely cautious is that it does not reject AI altogether. Legal research is time-consuming. Case management is administrative and demanding. Drafting routine internal notices takes hours that could be spent on substantive work. If AI can help with these tasks and help staff be more productive, then that is a genuine benefit worth embracing. The distinction the court draws is between AI as a tool that supports human work and AI as a substitute for human thinking. The first is welcome. The second is not.