I am a law student, and I actively use AI in my pursuits. I use it to research, to draft, to clear doubts. I don’t even remember what it was like before ChatGPT took over. How I drafted, what I drafted, how much worse it might have been—I cannot recall. Even this article will be refined using AI, so that it’s free of what we call human errors. But that makes me wonder—how human is the piece once it passes through an AI filter? Does that take away the entire sapien footprint, or does it somehow make the writing more human, more intentional?
I started using AI the way one might use caffeine: casually, a little indulgence at first, something to help me write faster, think clearer. But now, it’s like oxygen. I cannot imagine writing an essay or even a legal note without first opening ChatGPT or some AI research tool. The dependency is quiet, creeping, and in some ways, comforting. There is something soothing about knowing that you can summon a legal precedent, a case summary, or an article draft in seconds. But then again, there is something terrifying about that, too.
The legal field thrives on words, interpretation, and precision—and AI has entered that sanctum with ease. In India, law firms now use AI to review contracts, predict case outcomes, and automate compliance. Platforms like CaseMine, Manupatra AI, and LexisNexis Context are not just databases anymore; they are learning systems that can read, reason, and respond. That means a junior associate, who might have once spent sleepless nights cross-referencing judgments, now competes with a machine that never sleeps, never complains, and rarely forgets.
As a student, I find that duality interesting and exhausting. On the one hand, AI saves me time. It lets me explore beyond what a textbook could teach. It helps me understand the Data Protection Act, decode constitutional law, or even simplify Bentham’s utilitarianism. On the other hand, it sometimes feels like cheating—like I am being handed an understanding I did not earn.
Legally speaking, India still doesn’t have a dedicated law that governs AI. What we have instead are frameworks that indirectly touch it: the Information Technology Act, 2000, which broadly regulates electronic transactions and cybersecurity; and the Digital Personal Data Protection Act, 2023, which deals with how data—including that used to train AI—is collected and processed. Beyond that, there’s a vacuum. The NITI Aayog’s National Strategy for Artificial Intelligence (2018) was an early attempt to map the landscape, but we are far from having an AI Act like the EU’s 2024 legislation.
That gap is both exciting and scary. Exciting, because as a law student, you feel you are standing at the edge of something unmade—a space of intellectual potential. Scary, because AI is moving much faster than our jurisprudence. There are no clear answers to questions of accountability or liability. If an AI tool gives wrong legal advice, who is to blame? The programmer? The lawyer who relied on it? Or the machine itself?
Then there’s the question of bias. AI learns from data, and data carries the social, political, and casteist biases of its human creators. A biased algorithm making legal recommendations is a disaster waiting to happen. In the West, we’ve already seen AI-based sentencing tools accused of racial discrimination. India, with its layered caste hierarchies, could face something even more complex—how do you ensure an algorithm doesn’t reproduce centuries of systemic discrimination under the garb of “efficiency”?
But let’s come back to the personal. I use AI daily. I use it to frame arguments, to edit drafts, to find judgments I can’t recall. I also use it to write about pain, love, and law. The irony of that is not lost on me. When I tell ChatGPT to make something “sound human,” I’m really asking a machine to imitate emotion—to create a version of humanity that feels real. And maybe that’s what law is, too, in a way: an imitation of morality that tries to sound just.
The other day, my professor said something that stuck with me—“AI will not replace lawyers, but lawyers who use AI will replace those who don’t.” It sounded motivational at first, like one of those lines people post on LinkedIn. But later that night, it sank in as a warning. Because what if AI doesn’t just assist but alters the very nature of legal reasoning? What if, someday, your “critical thinking” is nothing but a polished algorithmic echo?
I also wonder what this means for accessibility. AI could democratize law in a country like India, where legal aid is often inaccessible. Imagine someone in a remote village using an AI chatbot to understand bail procedures or file an RTI. That could be revolutionary. But the same AI could also mislead, misinterpret, or hallucinate legal facts—and the person on the other side may not know enough to tell the difference. Technology widens access, yes, but it also widens the possibility of error.
There's also the economics of it all. AI tools cost money, and high-quality legal AI isn’t cheap. The divide between those who can afford intelligent assistance and those who can’t is only getting sharper. Which law student gets to learn faster? Which lawyer files faster? Which firm wins faster? The hierarchy of productivity is now algorithmic.
When I look back at my early days at law school, I remember writing everything by hand, arguing with classmates over interpretations, and searching judgments manually. Now, all of that feels ancient, almost romantic. I miss that slowness sometimes. The way your brain worked hard to piece things together, the little triumph when you found a rare case that proved your point. Now, it’s all instant. Efficient. Almost sterile. Maybe that’s the price we pay for speed—something deeply human slips away in exchange for precision.
Still, I continue to use AI. I refine this article through it. I ask it to rephrase, to fix punctuation, to suggest stronger verbs. I do it because I can, and because I must. It’s no longer about whether AI should exist in law; it’s about how we coexist with it. How do we ensure that as we automate the legal mind, we don’t erase the legal heart?
Maybe that’s the point of being human in the age of AI—to keep feeling even when machines start thinking.