Antipattern: The hammer of ML/AI in mental health tech

I’ve already written about the tendency of established businesses to sequester (the machine learning version of) data science either by placing it with a business intelligence team or isolating it in a special innovation/R&D group. By “data science” here I do not mean one-off statistical analyses (e.g., aimed at some sort of evaluation task) but rather the design, development, and deployment of machine-learning models, which makes up the bulk of corporate data scientific efforts today. We could also call that artificial intelligence… but AI is more than ML.

I digress.

An equally common antipattern in using the latest data scientific techniques (=ML = AI very roughly and inaccurately speaking) is to take an AI-first approach to product development. I see this frequently in the mental health technology startup space, where some companies are so focused on using machine learning (which they will always call “artificial intelligence”) that they disregard more important product development efforts they could take on instead, efforts that might have a much bigger impact on people’s mental health.

One common example is the effort to build detection or early-warning systems for depression and other mental health problems. Take Mindstrong. Using smartphone interactions such as how fast a user types, Mindstrong seeks to predict upcoming moods and preemptively identify “individuals who may need immediate attention.”

To me, this seems like starting with your favorite hammer and looking for nails sticking out, rather than figuring out which handyman project is most important to do first. The hammer is ML, which is good at taking large volumes of data and predicting outcome variables based on patterns detected in the data. Where might we use that in mental health? Predicting depression, imminent suicidal acts, or other mental health problems seems a likely target.

There are a couple of problems with this:

  • There’s not a good reason at this point to think we can develop good training sets for such prediction models, and
  • It’s not clear what to do with the predictions (assuming they’re any good) in order to have an appreciable impact on outcomes.

Regarding the first problem: finding depressed individuals in the wild isn’t like finding cats in a set of digital images. Humans can tell which pictures have cats, so it’s not too hard to create a data set of images labeled as to whether they show cats or not. That’s the kind of training set you need to create an accurate machine learning model. Humans, on the other hand, aren’t good at diagnosing depression. Yes, there are a variety of psychometric tools for assessing it, both clinician- and self-rated. But it doesn’t follow that a data set labeled with one or another of these measures (e.g., the Hamilton Depression Rating Scale) is a good training set for a machine to learn from in order to predict depressive problems.
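
To make the training-set point concrete, here’s a minimal sketch, with entirely hypothetical features and labels, of what supervised learning actually consumes. The model can only learn to reproduce whatever labels we hand it, so if those labels come from an imperfect instrument, that ceiling is built in.

```python
# A minimal sketch of what a supervised training set is: feature vectors
# paired with labels. Everything here is a hypothetical stand-in, not data
# from any real system.
from sklearn.linear_model import LogisticRegression

# Hypothetical phone-usage features (say, typing speed and daily screen hours).
X = [
    [3.2, 4.1],
    [7.8, 1.5],
    [2.9, 4.4],
    [8.1, 1.6],
]
# 1 = scored above some cutoff on a depression rating scale, 0 = below.
y = [1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Whatever accuracy this model achieves is accuracy at reproducing the label
# (a noisy scale score), not at detecting depression itself.
print(model.predict([[3.0, 4.0]]))
```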

Diagnosing a mental health problem is fraught with challenges. Symptoms that might be indicative of depression–fatigue for example–can arise from other health issues. Clinician ratings and self ratings have their own unique problems. It’s not my purpose here to write a thesis on psychometric instruments for screening for or diagnosing depression. But just take it as a given that this is not an easy problem to solve, and without a good solution, we’re limited in our ability to build good machine learning models to identify depressed people.

The second problem is doing anything useful with the predictions even if you could make them accurately. So you’ve built a computerized system for predicting depression based on someone’s phone usage. You see that a user has a pattern characteristic of depression. What then? Do you suggest they go see a therapist? Which therapist? Do you send them to their primary care doctor? Do you offer some immediate CBT-style skills training? This problem of what to do when someone is depressed in order to help them feel and function better is the far bigger one. Plenty of people can recognize that they are feeling like crap and failing to behave in a way that works well. What we’re lacking is economical and effective ways to help them feel not like crap. That’s the problem mental health startups might focus on first for the highest impact. Yes, it may be very helpful in the future to be able to detect the onset of depression before a person is aware that they are suffering. But first let’s figure out how to address the problems of people who are already suffering so much that they are seeking help.

As another example of the problems of the AI-first approach to mental health technology development, take the proliferation of mental health chatbots. As a variety of startups have recognized, it’s possible to use ML/AI techniques to produce a chatbot that will engage in conversation with a user and recognize various intents and entities. But is this a good approach to improving someone’s mental health? Since we don’t know exactly how in-person one-on-one therapy works–is it because of empathy? Human connection? Insight? Skills development? Catharsis?–it’s hard to know whether a computer chatbot will help people feel better.
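
For concreteness, the intent-recognition piece looks roughly like the sketch below: a minimal example with hypothetical intents and utterances, not any particular product’s implementation.

```python
# A minimal sketch of chatbot intent recognition: map a user's utterance to
# one of a small set of intents. Intents and training utterances are
# hypothetical illustrations only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_utterances = [
    "I've been feeling really down all week",
    "I can't stop crying and nothing seems worth doing",
    "how do I challenge a negative thought?",
    "can you teach me a breathing exercise?",
]
intents = ["low_mood", "low_mood", "skill_request", "skill_request"]

intent_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
intent_model.fit(training_utterances, intents)

print(intent_model.predict(["I feel like nothing matters anymore"]))
```

Classifying the utterance is the comparatively easy part. Deciding which response will actually help the person feel better is the open question, and no amount of intent-recognition accuracy answers it.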

Cognitive-behavioral therapy, the most commonly cited evidence-based mental health talk therapy today, is a skills-based approach. It may be that better educational apps will actually have a more substantial impact on mental health than apps that can converse well. An educational app need not start with an AI-based approach or with a conversational interface. In fact, it probably shouldn’t. Instead, the app developers could think about how best to teach and reinforce a set of cognitive reframing skills, with or without artificial intelligence.

In many respects, this problem reflects a bigger problem in mental health care: we really don’t have great solutions for helping people to feel better! The two main modes of assistance–pharmaceuticals and talk therapy–are limited in their effectiveness in the real world, despite experimental evidence suggesting what look like moderate to large effects in controlled settings. These approaches are expensive too. Pharmaceuticals are expensive in terms of side effects, even if not in terms of monetary costs. Talk therapy is expensive in terms of time and money.

I feel optimistic about the possibilities for technology to have an important impact on people’s mental and emotional well-being. The ML/AI flavor of data science will certainly play a critical role in that.

Where we are today with digital mental health interventions makes me think that we need to start first with non-ML, non-AI data science: better outcome metrics, better evaluation techniques, better instrumentation of apps, and better analysis of how outcomes relate to treatments. Putting this foundation in place first seems critical to creating AI-enabled digital mental health products that actually help people feel and act better.
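
As one small illustration of that foundation, here’s a minimal sketch of the kind of plain, non-ML analysis an instrumented app makes possible. The column names and numbers are hypothetical; the point is only that measuring outcomes and relating them to usage requires careful instrumentation, not machine learning.

```python
# A minimal sketch of non-ML outcome analysis: relate a symptom measure to
# app engagement. All column names and values are hypothetical stand-ins.
import pandas as pd

df = pd.DataFrame({
    "user_id":       [1, 2, 3, 4, 5],
    "phq9_baseline": [16, 14, 18, 12, 20],   # self-reported symptom score at signup
    "phq9_week8":    [9, 13, 11, 12, 10],    # same score eight weeks later
    "lessons_done":  [12, 2, 15, 1, 14],     # skills lessons completed in the app
})

# Negative change = improvement.
df["phq9_change"] = df["phq9_week8"] - df["phq9_baseline"]

print(df["phq9_change"].mean())                    # average change across users
print(df[["phq9_change", "lessons_done"]].corr())  # does improvement track engagement?
```

Getting this sort of measurement and evaluation right seems like the prerequisite; only then is it clear what ML could usefully add on top.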
