India’s mental-health landscape is changing faster than many of us imagined. What was once quiet suffering, masked behind forced smiles and “I’m fine,” is slowly finding its voice, often in a place you least expect: a chat window. For many young adults, especially students and early-career professionals, typing out their stress at midnight feels safer, simpler, even sacred, compared to sitting across from a stranger in an office.
Text-based therapy has quietly ushered in a small revolution. Discreet. Accessible. Less intimidating than traditional clinics. And when AI chatbots entered the scene, they brought the hope of always-available, judgment-free support. On the face of it, with instant responses, zero waiting time, privacy, and anonymity, it feels like a perfect match.
Yet beneath this wave of convenience lies a difficult ethical question: can AI really grasp the cultural, linguistic, and emotional realities of Indian life? Because therapy isn’t just about words. It’s about meaning, meaning rooted in context, culture, family, and history. And India’s social fabric is richly layered, complicated, alive.
There are simply far too few trained therapists to meet India’s needs. In cities, therapy can be expensive or stigmatised; in smaller towns or rural areas, it can be nonexistent altogether. For many, seeking help means long commutes, social exposure, unfamiliar languages, and awkward pauses. Text therapy, by contrast, removes several of these barriers: no appointments, no travel, no fear of being judged for how you look or speak.
Bring AI into the mix, tools that don’t tire, don’t judge, don’t expect accent-free speech, and you have a system that can scale. Many people who otherwise might never have sought help suddenly have a low-threshold way to reach out.
In fact, Wysa, one of the better-known AI-driven mental-health apps, reportedly served about 528,000 people in India by 2022 via its English version. Its developers have acknowledged language as a barrier and started to roll out support in Hindi to reach broader demographics.
For people hesitant about stigma, distrusting clinics, or lacking access, AI chatbots can provide an emotionally safer first space.
But India is not a uniform demographic. We speak in languages like Hindi, Urdu, Bengali, Tamil, Marathi, Malayalam, and dozens more, and we don’t just translate meaning; we feel it in those languages. Emotional expression, in our context, carries freight: family expectations, social pressure, duty, unspoken understandings, and cultural weight.
Phrases like “is there a problem at home” or “what will people say” carry layers: social shame, economic stress, and family reputation. These aren’t just thoughts, they’re emotional worlds. When someone types them out, they are often navigating fear, guilt, duty, and shame.
Most AI chatbots, however, are trained on datasets rooted in Western emotional norms, often in English. To them, this is “text.” But for an Indian user, it might be the tip of a painful iceberg.
That gap between literal text and lived emotion lies at the heart of the problem. It’s not just translation; it’s translation of pain.
Western-style therapy often emphasises self-expression, personal boundaries, and individual autonomy. That works in contexts where personal freedom and direct communication are culturally acceptable.
But in many Indian households, relationships are communal. Decisions are shared. Expression isn’t always the goal; sometimes, restraint is. Saying “no” might feel disrespectful. Prioritising “self-care” may be seen as selfish when family obligations loom.
If a chatbot suggests “talk openly with your parents” or “take a break for yourself,” that advice, rooted in Western assumptions, can backfire spectacularly. It can lead to social friction, emotional danger, or even family conflict.
Similarly, cultural subtleties matter: indirect speech, sarcasm, understatement, social taboos, caste and class subtexts, and community pressures. AI often misses these or misreads them entirely. That makes neutral-sounding advice potentially tone-deaf, or worse, harmful.
Recent academic work highlights these challenges. A 2025 study of 362 Indian adolescents (with follow-up interviews) found that while many valued anonymity, privacy, and the idea of chat-based mental-health tools, most existing chatbots lacked cultural relevance and personalisation.
A similar study of 278 adolescents reached related conclusions: although smartphone access was high and many preferred text over voice, social stigma and cultural context made them wary of one-size-fits-all digital tools that failed to reflect their realities.
These findings suggest that many digital tools, designed with Western norms in mind, may not address, or may even misunderstand, the lived experiences of Indian users.
Part of the problem lies in a broader systemic issue: regulation (or lack thereof). A 2024 report from the National Institute of Mental Health & Neurosciences (NIMHANS) and collaborators found that most standalone mental-health apps in India are outside the domain of existing medical-device regulations.
Because of this, many apps are not subject to consistent standards for safety, efficacy, data privacy, or clinical validation. That matters deeply when emotional data, intimate and vulnerable as it is, is being shared. As critics have warned, some mental-health apps may operate like “data-sucking devices with a mental health veneer”, collecting sensitive user data under the guise of support.
Without stronger oversight, there’s little guarantee that the advice is safe, culturally appropriate, or delivered responsibly.
Does this mean AI chatbots are the enemy? Not necessarily. They can, and do, play a valuable role. For many, they are the first point of contact: a place to unburden when nothing else feels possible. They can help name the pain, hold space, offer grounding tools; sometimes that is enough to keep someone afloat.
But for therapy to work in India, meaning therapy that respects cultural context, AI cannot go it alone. The most ethical, effective path likely lies in a hybrid model: AI for early, low-threshold support; human therapists for deeper, culturally and emotionally sensitive work.
In other words, let AI provide the doorway, but let humans take the hand when the path becomes complicated.
We need mental-health tools that don’t just parse words but listen to stories. Stories shaped by family, caste, class, language, and duty. Stories that deserve more than generic responses.
References: