Image by Pixabay

For far too long, our approach to mental health care has been reactive. Care arrives only after the crisis—the meltdown, the overdose, the attempt—has come and gone. Not out of a lack of sympathy, but out of a lack of predictive tools. We have been putting out fires rather than installing smoke detectors. At the heart of a new model, fueled by the digital revolution, is hope: that big data and artificial intelligence can anticipate mental health crises before they happen.

This new strategy rests on tracking the thin digital trail we leave behind, in the hope of catching the earliest whispers of mental distress and potentially shifting mental health care from a reactive model to one of prevention and early recognition. The power of the concept lies in the sheer volume of data we produce every day.

That data forms a rich, minute-by-minute account of how we live, and potentially an effective proxy for our actual mental state. The sources are all around us. Our phones follow us everywhere, inferring sleep patterns from screen time and movement and capturing social behavior by counting our calls, messages, and social media activity. The words we write in emails, texts, and posts can be analyzed for changes in word choice, grammar, and tone.

Research on language use suggests that depressed patients tend to use more first-person pronouns, fewer words associated with positive emotion, and simpler sentences. Wearable technology, such as smartwatches, adds another layer, continuously tracking physiological signals like heart rate variability, resting heart rate, and activity level, all of which correlate with stress and anxiety. On its own, though, none of this raw data does anything.
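To give a concrete, deliberately simplified picture of what those linguistic markers look like as data, here is a minimal Python sketch; the word lists and the sample sentence are made up for illustration, and real systems rely on validated lexicons (such as LIWC) and trained language models rather than hand-written sets.

```python
import re

# Illustrative word lists only; production systems use validated lexicons.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
POSITIVE_WORDS = {"happy", "glad", "hopeful", "excited", "grateful", "love"}

def linguistic_features(text: str) -> dict:
    """Compute the simple linguistic markers described above."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    return {
        # Share of words that are first-person pronouns.
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / total,
        # Share of words carrying positive emotion.
        "positive_word_rate": sum(w in POSITIVE_WORDS for w in words) / total,
        # Average sentence length as a rough proxy for syntactic complexity.
        "avg_sentence_length": total / max(len(sentences), 1),
    }

print(linguistic_features("I can't sleep. I just feel tired all the time."))
```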

The magic happens when that data is run through machine learning algorithms: programs able to identify patterns that are invisible to the naked eye. This is where training begins. The models are shown historical data from real individuals, learning the digital markers that appeared in the weeks before a documented mental health crisis.

For example, the model might be shown a user's data from the weeks leading up to an inpatient hospitalization for acute depression: sleep grew restless, social contact dropped to zero, speech patterns changed, and heart rate data pointed to rising anxiety. The algorithm learns to recognize this dangerous constellation of signals. Once trained, the model can sift through a user's real-time data, constantly watching for the same tell-tale signs, not to declare the presence of an illness, but to estimate the likelihood of an impending crisis. It can then alert the person and their clinician: "Risk of a depressive episode has increased by 70% over the last week."
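As a rough sketch of that idea, and nothing like a clinical tool, one could train a simple classifier on person-weeks of features labeled by whether a crisis followed. Everything below is an assumption for illustration: the feature names, the tiny hand-made dataset, and the choice of scikit-learn's logistic regression as a stand-in for whatever model a real system would use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one person-week of hypothetical features:
# [avg_sleep_hours, messages_sent, positive_word_rate, heart_rate_variability]
X_train = np.array([
    [7.5, 40, 0.060, 55.0],   # stable weeks
    [8.0, 35, 0.050, 60.0],
    [7.0, 30, 0.040, 50.0],
    [4.5,  5, 0.010, 30.0],   # weeks that preceded a documented crisis
    [5.0,  3, 0.020, 28.0],
    [4.0,  1, 0.010, 25.0],
])
# Label: did a crisis follow within the next few weeks? (1 = yes)
y_train = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X_train, y_train)

# Score a new week of real-time data on the same features.
this_week = np.array([[4.8, 4, 0.015, 29.0]])
risk = model.predict_proba(this_week)[0, 1]
print(f"Estimated crisis risk this week: {risk:.0%}")
```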

The potential benefits of this kind of prediction are enormous. Most obviously, it could save lives, most directly through suicide prevention. If digital traces reveal who is at risk, help can be offered before someone slides into the depths of a crisis. On a larger scale, it enables proactive and targeted treatment.

Instead of a patient waiting a month for their next appointment to tell their therapist they are struggling, the therapist could know and reach out within days. The help arrives at the moment it is needed most, when it can do the most good.

This also moves care toward genuine personalization. A model trained on one person's own baseline—what "normal" looks like for them—can pick up tiny deviations that would mean nothing in someone else but are glaring red flags for this particular person. It extends mental health observation beyond the therapist's waiting room through a layer of ambient, passive monitoring woven into everyday life. A simple way to picture the baseline idea is sketched below.
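Here is a hedged, minimal sketch of that per-person baseline idea: compare each new reading against the individual's own recent history rather than a population norm. The metric (nightly sleep hours), the 30-day window, the simulated history, and the two-standard-deviation threshold are all illustrative assumptions, not clinically validated choices.

```python
import numpy as np

def baseline_deviation(history: np.ndarray, today: float) -> float:
    """Z-score of today's value against this person's own recent baseline."""
    mean, std = history.mean(), history.std()
    return 0.0 if std == 0 else (today - mean) / std

# Hypothetical example: nightly sleep hours over the past 30 days.
sleep_history = np.random.default_rng(0).normal(loc=7.2, scale=0.4, size=30)

last_night = 5.1  # latest reading from a wearable
z = baseline_deviation(sleep_history, last_night)

# A drop of more than ~2 standard deviations from *this person's* norm
# might be unremarkable for someone else but is a red flag here.
if z < -2.0:
    print(f"Sleep deviates sharply from personal baseline (z = {z:.1f})")
```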

But this powerful tool comes laden with ethical risks that must be navigated scrupulously. The most obvious is privacy. This approach amounts to continuous monitoring of an individual's most intimate routines: whom they talk to, what they say, whom they live with, and how their body handles stress. The possibility of misuse by employers, insurers, or government authorities is chilling. Strong legal protections and robust encryption are not a nicety but a necessity. There is also the risk of algorithmic bias. Unless the training data behind these algorithms is genuinely diverse, the models will fail for populations that are not represented in it. A model trained mostly on data from wealthier, tech-savvy students would be useless, or worse, at forecasting crises among poor, elderly, or rural communities, and would only widen health disparities. Beyond these concerns, there is the psychological effect on the patient.

The constant monitoring can itself become a source of distress, a kind of "quantified-self anxiety." And what if the algorithm gets it wrong? A false positive, warning someone that they are heading toward a crisis when they are not, would cause needless panic and worry. A false negative, failing to warn someone already in the middle of a crisis, would be a catastrophic and heartbreaking failure, and a devastating blow to public faith in the system. There is also the danger of dehumanization. Can an algorithm really grasp the intangibles of human despair? Over-reliance on numbers could lead clinicians to overlook the subtle, qualitative story that emerges only through empathetic conversation. The point is not for machines to supplant human therapists.

That would be both pointless and perilous. The intention is to build an instrument that supports human judgment. Picture a therapist who, at the start of a session, glances at a dashboard showing that her patient's sleep has worsened, social contact has dropped sharply, and language patterns have shifted over the last two weeks. Those objective signals ground the conversation and can help steer the session toward the here-and-now issues, making the hour more efficient and effective. The machine cannot replace the human clinician, because the work is about extending empathy, placing information within the patient's world, and building the trust in which healing actually happens. Such a future can only be built responsibly through cooperation.

Technologists need to build systems in which privacy and ethics are part of the design from the start, not an afterthought bolted on later. Regulators need to provide clear rules that prevent mental health data from being used to discriminate and that guard against other misuse. Clinicians need to be trained in how to handle this new kind of data and how to fold it into their existing practice without losing their humanity. And, fundamentally, people need to own their own data, with explicit opt-in consent and the ability to see and understand what the algorithms are saying about them. The potential of big data to predict mental health emergencies is vast.

It is a hopeful picture of a future in which we can reach out and help before the fall, in which suffering is noticed and eased as it happens. It is a complete reversal: instead of waiting for sickness, we actively seek health.

The ethical challenges are daunting, but they are not insurmountable. With care and respect, and with an unshakable commitment to people's humanity, we can use the power of data not to control and sort, but to understand, to help, and ultimately to heal more deeply than ever before.

.    .    .
