Therapy bots: The new armchair psychologists trying to solve mental health problems

Tyagarajan S July 25, 2017

Story Highlights

  • Most therapy bots’ questions are based on a popular talk-therapy method called cognitive behavioural therapy, widely used to treat a variety of issues ranging from pain to depression
  • A vast majority of chatbots don’t even use sophisticated neural nets to learn, but instead rely on rather simplistic answer trees
  • When it comes to mental health, it would be dangerous to let these half-baked bots loose without human supervision

Recently, while chatting with a voice-based assistant, I was informed that it now supported “empathy”. Looking to test the feature, I told the assistant that I was feeling sad.

“I am sorry you feel that way,” it chirped, in a clipped, official tone. It then recommended I do something to make me happy, like talk to someone. Duh! If I’d really been sad, this cold and banal interaction would have only made things worse. Yet, more and more people are sharing their feelings with chatbots of various hues.

There are some advantages to this: chatbots are always available and will hear us out patiently. We also know that they won’t judge us (quite unlike people).

This is why, despite the clunky conversations, a nice little market has emerged for dispensing therapy using chatbots. We can imagine human-bot conversations being an inevitable part of our future. But does it really work well enough now to address complex issues like mental health?

Also read: On the couch? Your therapist may soon be replaced by a robopsychiatrist

The bot therapists

“Hi Tyagarajan, can we do a check-in now?”

That was my therapist, reaching out to me through Facebook Messenger. I click “Yes”.

“So tell me, what’s your energy like?”

I am given three options: High, medium and low.

This was Day 3 of the 14-day free session (after which I’d have to pay $39 a month for the service) with Woebot, a therapy chatbot created by a team of Stanford psychologists and AI experts.

Woebot: I’m ready to listen, 24/7. No couches, no meds, no childhood stuff. Just strategies to improve your mood. And the occasional dorky joke

So far, my back-and-forth has been frustratingly limited. But I must confess: I’m a sceptic who’s trying to break it. The bot promises to get better as it learns more about me, but I suspect that at best, it will be an automated thinking coach, prompting me with questions like these:

OK which of the following is an example of fortunetelling?

  1. I’m worried about that presentation
  2. What if I make a fool of myself?
  3. I don’t like public speaking

Woebot’s questions are based on a popular talk-therapy method called cognitive behavioural therapy (CBT). CBT is widely used to treat a variety of issues ranging from pain to depression. It works by reframing gloomy thoughts about the self into a more factual context by peeling away negative assumptions.

Woebot is one of many such mental health chatbots in the market. Joy, another chatbot supported on Facebook Messenger, uses a similar methodology to get people talking about what they feel. Both operate on the basis of daily check-ins and dole out tips, techniques and content at regular intervals.

Karim, the Arabic-speaking chatbot designed by the Silicon Valley startup X2AI, has been helping Syrian refugees deal with their mental health issues. The company has a series of specialised, multilingual bots targeted at various mental health needs and regions. For instance, there’s one targeted at paediatric diabetic care.

Wysa is an ‘emotionally intelligent’ chatbot that helps you track and manage your mood, and learn evidence-based techniques like CBT and mindfulness

Closer home, there’s Wysa, a therapy bot from Bangalore-based Touchkin, a predictive healthcare startup founded in 2015. It’s similar to Joy and Woebot in that it too relies heavily on CBT techniques. A cute little penguin chats with you and offers helpful suggestions and tips (the animated GIF that helped with deep breathing was pretty cool), apart from running little diagnostic tests to see if you need further help.

Why so many mental health bots?

Every hour, one student commits suicide in India. Just last week, an 18-year-old jumped from a 20-floor building. In 2016, more than 100 army men committed suicide. What used to be the occasional story of an IT engineer or a business owner committing suicide is becoming more common, especially as the market gets tough. And that’s just the suicides. A 2015 WHO report stated that nearly 5% of Indians suffer from depression and another 3% from anxiety disorders.

Yet, seeking support in India faces multiple barriers. There’s the obvious social taboo that favours avoidance over acceptance and treatment. There aren’t enough psychotherapists around, and finding them isn’t easy, especially if you aren’t near a big urban centre. Therapy may also be seen as a costly indulgence in a value-conscious environment where people seek tangible returns; spending hundreds of rupees per hour may be beyond what many can afford.

A free, easily available, private therapy chatbot like Wysa, therefore, is a massively disruptive solution which could bring access to mental health treatment to the forefront. It’s not surprising, then, that the app has more than 50,000 downloads on the Google Play store and is rated highly.

These bots promise to bring a sense of objectivity to the table. They are supported by deep learning neural nets that can sort through vast amounts of data and identify patterns in individuals. They promise to democratise access to India’s broken mental health care system.

But are they any good?

Chatbots are not great conversationalists. This is more evident when it comes to mental health where conversation is an important component

The problem with chatbots is that they don’t chat very well. Within a very narrow scope of algorithmised responses (where the bot provides suggestions on what to say), they barely wear the human mask. But stray just a little bit and the veneer falls away to reveal the crude idiot machine that lies beneath.

Deep learning neural nets have carried us rapidly into the realm of conversational artificial intelligence (AI), but they are still super specialists. The quest to build a multipurpose neural network is in its early days. A vast majority of chatbots don’t even use sophisticated neural nets to learn, but instead rely on rather simplistic answer trees, as the sketch below illustrates. As a result, what we have is a proliferation of crappy chatbots, not unlike the websites of the early days of the web.
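
To see why these exchanges feel so canned, here is a minimal sketch, in Python, of the kind of answer tree such bots run on. The node names and prompts are hypothetical, invented purely for illustration; the point is that every branch is written by hand in advance, and anything off the script falls through to a stock reply.

    # A minimal answer-tree chatbot; node names and prompts are hypothetical.
    ANSWER_TREE = {
        "start": {
            "prompt": "What's your energy like? (high / medium / low)",
            "options": {"high": "tip_activity", "medium": "start", "low": "tip_breathing"},
        },
        "tip_activity": {
            "prompt": "Nice! Want to plan one small thing to do today? (yes / no)",
            "options": {"yes": "start", "no": "start"},
        },
        "tip_breathing": {
            "prompt": "Low-energy days are hard. Try a two-minute breathing exercise? (yes / no)",
            "options": {"yes": "start", "no": "start"},
        },
        # ...every node is hand-written; nothing here is learned from the user.
    }

    def respond(node_key: str, user_input: str) -> str:
        node = ANSWER_TREE[node_key]
        next_key = node["options"].get(user_input.strip().lower())
        if next_key is None:
            # Anything outside the scripted options gets a stock reply.
            return "Sorry, I didn't get that. " + node["prompt"]
        return ANSWER_TREE[next_key]["prompt"]

Because the tree can only cover the inputs its designers thought to write down, the bot is exactly as good, or as limited, as its script.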

The capabilities of today’s bots may be enough to provide limited customer support or some basic automated commerce, but they’re unlikely to be sufficient to address mental health challenges.

Machine learning-based tools learn continuously from large volumes of data. This works well in domains with large volumes of well-structured, unambiguous data (like images or audio), but no such well-codified data exists for mental health. Plus, with a limited ability to understand what a user says, and an even more limited ability to respond to it, these bots hardly add to the base of data that would enable them to learn more.

Even when the bots are able to pick a suitable response for a conversation, they often lack empathy, which is a critical attribute when it comes to addressing mental fragility. A quick scan of the reviews for Wysa on the Play Store indicates that many want the app to “really understand”, “just talk” and be more “human-like”.

To be fair, a human psychotherapist has a lot more information to base her reactions on. She can see facial expressions, hear the tone of voice and read body language to detect subtle, unsaid cues. What the person receiving therapy thinks may be completely divergent from what they say. Humans are good at understanding intentions; chatbots, so far, are not.

These are areas that require long-term development and research, like Google’s PAIR (People + AI Research) initiative, which seeks to humanise AI and make it inclusive and empowering. But in the meantime, AI should ideally be seen as complementing human therapists.

Therachat does this by positioning itself as a support tool that therapists can customise for their patients. The additional data, and the chatbot’s ability to continue sessions beyond the hours when therapists actually meet their clients, add value to the therapy.

Therachat complements therapists by helping them engage with clients between sessions

Even with standalone bots, a real human supporting the interaction (as with Facebook’s AI-powered assistant M) can help steer the limited bot towards human-like conversation. But this would require clinically trained psychology experts interacting with users, and that can’t be scaled.

Koko addresses this in an innovative way by harnessing the combined power of a crowdsourced network and AI. You enter a negative thought into the bot, which then distributes it among a network of real users, whose responses on “rethinking” (reframing negative thoughts into more objective ones) are filtered back to you. With time, the bot has started offering its own “rethinks” using the historic data captured in the social database, roughly the flow sketched below.
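
A rough sketch of that flow, in Python, might look like the following. The data structures and function names are hypothetical, invented for illustration; they are not Koko’s actual code or API.

    # Hypothetical sketch of a crowdsourced "rethinking" flow.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Thought:
        author: str
        text: str
        rethinks: List[str] = field(default_factory=list)

    def passes_moderation(reply: str) -> bool:
        # Stand-in filter; a real service would screen for safety and quality.
        return len(reply.strip()) > 10

    def distribute(thought: Thought, peers: List[str]) -> None:
        # Fan the negative thought out to a pool of real users for reframing.
        for peer in peers:
            print(f"Asking {peer} to rethink: {thought.text}")

    def collect_rethink(thought: Thought, reply: str) -> None:
        # Only responses that pass the filter reach the original user.
        if passes_moderation(reply):
            thought.rethinks.append(reply)

    # Accumulated (thought, rethink) pairs form the historic data from which
    # the bot can later start offering automated "rethinks" of its own.

The crowd supplies the empathy and variety the bot lacks, while the filtering and the accumulated pairs give the machine something concrete to learn from.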

This is not to say that the simple one-on-one chatbots in the market today aren’t helping mental health patients. A peer-reviewed study based on self-reported outcomes, published by Alison Darcy, a Stanford clinical psychologist and the designer of Woebot, and her team, indicated that use of the bot resulted in a significant reduction in symptoms of depression among the students who used it.

But, all said and done, these chatbots are nothing more than mental training and coaching apps. They don’t even claim to treat mental health issues.

There is a danger here

Mental health chatbots don’t take the Hippocratic oath. Nor are they independently regulated right now. Unlike a licensed psychotherapist, algorithms don’t need certifications to prove their legitimacy. A majority of mental health-related apps and bots don’t really follow rigorous scientific method when it comes to designing their interactions.

To be fair, most of these bots call out their role right at the beginning. The founders of these companies are careful to call them life coaches or trainers rather than therapists. But is such a casual disclaimer enough to balance against the boatload of narrative on how robot healers are coming to solve your mental troubles?

The danger here is that those seeking help may feel that they are addressing their issues by chatting with a bot. In India, where there’s a taboo around seeking mental health treatment, a readily available, free alternative could end up blocking real treatment.

There are other issues too. How do you prevent platforms from selling or leveraging patient data (even if it is anonymised) to hungry advertisers? Even if the companies themselves maintain the highest level of privacy, they open up information to platforms like Facebook, on which they rely for reach. Facebook could choose to use our emotional states to further its advertising accuracy.

Even more worrying is the security of these new platforms. There is no guarantee that the most intimate information on our psychological states won’t get hacked and shared publicly.

We’ll get there, eventually

AI is beginning to play a huge role in mental health. It will help psychotherapists and other stakeholders analyse a lot more information and present patterns that form the basis for making decisions. It can help gather behavioural evidence that is objective and captured at the right time. But chatbots engaging in mental state discussions? We are nowhere close to it yet.

Chatbots and voice interfaces will get smarter and more “human” and we could have deeper conversations on philosophy or even the meaning of life. At some point, we may even have fully autonomous therapy bots that can converse as well as a human, read the inflections in our voice and tone, use computer vision to read our facial expressions, and develop a sophisticated profile of our emotional state.

Layer on top of that the kind of rich, complex information obtained by combining data, both real-time and historic, from disparate sources: wearables, medical history, shopping patterns, lab test results, eating patterns and so on. At that point, we’ll have an individual profile that is complex, unique and far richer than anything a human doctor could have pieced together.

We aren’t there yet, though. And when it comes to mental health, it would be dangerous to let these half-baked bots loose without human supervision.

Also read: Digital initiatives are helping improve rural India’s mental health


               

Lead visual: Angela Anthony Pereira