Now, Dr. Jon LaPook, on assignment for 60 Minutes. Artificial intelligence has found its way into nearly every part of our lives: forecasting weather, diagnosing diseases, writing term papers. And now AI is probing that most human of places, our psyches, offering mental health support, just you and a chatbot, available 24/7 on your smartphone. There's a critical shortage of human therapists and a growing number of potential patients. AI-driven chatbots are designed to help fill that gap by giving therapists a new tool. But as you're about to see, like human therapists, not all chatbots are equal.
Some can help heal; some can be ineffective, or worse. One pioneer in the field who has had notable success joining tech with treatment is Alison Darcy. She believes the future of mental health care may be right in our hands. "We know the majority of people who need care are not getting it. There's never been a greater need, and the tools available have never been as sophisticated as they are now. And it's not about how we can get people into the clinic; it's how we can actually get some of these tools out of the clinic and into the hands of people."
Alison Darcy, a research psychologist and entrepreneur, decided to use her background in coding and therapy to build something she believes can help people in need: a mental health chatbot she named Woebot. "Woebot, like, woe is me?" "Woe is me, uh-huh." Woebot is an app on your phone, a kind of pocket therapist that uses the text function to help manage problems like depression, anxiety, addiction, and loneliness, and to do it on the run. "I think a lot of people out there watching this are going to be thinking: really, computer psychiatry? Come on."
"Well, I think it's so interesting that our field hasn't had a great deal of innovation since the basic architecture was laid down by Freud in the 1890s, right? That sort of idea of two people in a room. But that's not how we live our lives today. We have to modernize psychotherapy." Woebot is trained on large amounts of specialized data to help it recognize words, phrases, and emojis associated with dysfunctional thoughts, and to challenge that thinking, in part mimicking a type of in-person talk therapy called cognitive behavioral therapy, or CBT.
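Woebot's actual models, categories, and wording are proprietary, but the pattern the narration describes, spotting language associated with a dysfunctional thought and answering with a CBT-style challenge, can be sketched in a few lines of Python. Everything here, patterns and prompts alike, is invented for illustration:

```python
import re

# Hypothetical illustration of pattern-matching plus a CBT-style challenge.
# These patterns and prompts are invented for the example; Woebot's real
# models, categories, and wording are proprietary.
DISTORTION_PATTERNS = {
    "catastrophizing": re.compile(r"\b(ruined|disaster|never recover)\b", re.I),
    "all_or_nothing":  re.compile(r"\b(always|never|everyone|no one)\b", re.I),
    "helplessness":    re.compile(r"\b(can't do anything|nothing i can do)\b", re.I),
}

CHALLENGE_PROMPTS = {
    "catastrophizing": "What's the most likely outcome, rather than the worst one?",
    "all_or_nothing":  "Can you think of one exception to that?",
    "helplessness":    "Is there one small part of this you could influence?",
}

def cbt_reply(message: str) -> str:
    """Return a CBT-style challenge if a dysfunctional-thought pattern matches."""
    for label, pattern in DISTORTION_PATTERNS.items():
        if pattern.search(message):
            return CHALLENGE_PROMPTS[label]
    return "Tell me more about what's going through your mind."

print(cbt_reply("I can't do anything about it."))  # -> helplessness challenge
```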
"It's actually hard to find a CBT practitioner. And also, if you're not by the side of your patient when they are struggling to get out of bed in the morning, or at 2 a.m. when they can't sleep and they're feeling panicked, then we're actually leaving clinical value on the table." And even for people who want to go to a therapist, there are barriers, right? "Sadly, the biggest barrier we have is stigma. But there's, you know, insurance, there's cost, there's wait lists. And this problem has only grown significantly since the pandemic, and it doesn't appear to be going away."
Since Woebot went live in 2017, the company reports, one and a half million people have used it, which you can now do only through an employer benefit plan or with access from a health professional. At Virtua Health, a nonprofit healthcare company in New Jersey, patients can use it free of charge. "And you'll be able to converse with it just like you would with a human being." We downloaded Woebot, entered a unique code that can only be provided by the company,
then tried it out. "We found that, for people to sort of connect with their mood, we offer those emojis, which allows people to connect in a nonverbal way." I posed as someone who was depressed. After several prompts, Woebot wanted to dig deeper into why I was sad, so I came up with a scenario: that I feared the day my child would leave home. "Imagine what your negative emotions would be saying if they had a voice. Can you do that? Write one of those negative thoughts here."
I typed: "I can't do anything about it now. I guess I'll just jump that bridge when I come to it." (The normal expression is "cross that bridge.") And the chatbot detected something might be seriously wrong. "Let's see, John. I'm hearing you say: I can't do anything about it. I guess I'll just jump that bridge when I come to it. And I think you might need more support than I can offer. A trained listener will be able to help you in ways that I can't. Would you like to take a look at some specialized helplines now?"
"It's not our job to say you are in crisis, or you're not, because AI can't really do that in this context very well yet. But what it is flagging is: huh, there is something concerning about the way that John just phrased that." Saying only "jump that bridge," and not combining it with "I can't do anything about it," did not trigger a suggestion to consider getting further help. Like a human therapist, Woebot is not foolproof, and it should not be counted on to detect whether someone might be suicidal.
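Woebot's safety logic is not public, but the behavior in the exchange above, where one ambiguous phrase alone does not escalate while two co-occurring signals do, is consistent with a combined-signal rule. A minimal, hypothetical sketch:

```python
# Invented illustration of a combined-signal escalation rule, consistent with
# the behavior shown above: one ambiguous phrase alone does not escalate, but
# two co-occurring signals do. Woebot's real safety logic is not public.
HOPELESSNESS = ["can't do anything about it", "no way out", "what's the point"]
SELF_HARM_IDIOMS = ["jump that bridge", "end it all"]

def should_escalate(message: str) -> bool:
    text = message.lower()
    hopeless = any(p in text for p in HOPELESSNESS)
    self_harm = any(p in text for p in SELF_HARM_IDIOMS)
    # Require both signals before suggesting specialized helplines.
    return hopeless and self_harm

print(should_escalate("I guess I'll just jump that bridge when I come to it"))
# -> False: a single signal, matching what the correspondent observed
print(should_escalate("I can't do anything about it. I'll jump that bridge."))
# -> True: both signals present
```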
And how would it know that? "Jump that bridge": where is it getting that knowledge? "It has been trained on a lot of data, and on a lot of us humans labeling the phrases and things that we see. And so it's picking up on kind of sentiment." Computer scientist Lance Eliot, who writes about artificial intelligence and mental health, says AI has the ability to pick up on nuances of conversation. How does it know how to do that? "The system is able to, in a sense, mathematically and computationally figure out the nature of words and how words associate with each other."
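One common way systems "figure out how words associate with each other" is to represent each word as a vector of numbers learned from large text corpora, so that words used in similar contexts end up close together. The toy vectors below are invented; real embeddings have hundreds of dimensions:

```python
import math

# Toy illustration of word association: real systems learn vectors like these
# from large corpora; the numbers here are invented for the example.
VECTORS = {
    "sad":      [0.9, 0.1, 0.0],
    "hopeless": [0.8, 0.2, 0.1],
    "bridge":   [0.1, 0.9, 0.3],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means the words point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Words that occur in similar contexts get similar vectors, so they score high.
print(cosine(VECTORS["sad"], VECTORS["hopeless"]))  # high, about 0.98
print(cosine(VECTORS["sad"], VECTORS["bridge"]))    # low, about 0.21
```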
"So what it does is, it draws upon a vast array of data, and then it responds to you based on prompts, or in some way that you instruct or ask questions of the system." To do its job, the system must go somewhere to come up with appropriate responses. Systems using what's called rules-based AI are usually closed, meaning programmed to respond only with information stored in their own databases. Then there is generative AI, in which the system can generate original responses based on information from the internet. "If you look at ChatGPT, that's a type of generative AI."
"It's very conversational, very fluent. But it also means that it tends to be open-ended, that it can say things that you might not necessarily want it to say. It's not as predictable, while a rules-based system is very predictable." Woebot is a system based on rules that has been very tightly controlled, so that it doesn't say the wrong things.
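Eliot's distinction is easy to see side by side. In the sketch below, the rules-based responder can only ever say what is stored in its own table, while the generative side is stood in for by random sampling, just to make the unpredictability visible; neither is Woebot's or ChatGPT's actual code:

```python
import random

# Schematic contrast between the two designs Eliot describes. The rules-based
# responder is closed and deterministic; the "generative" one is simulated by
# random sampling to show why its output is harder to predict or bound.
RULES = {
    "i feel anxious": "Let's try a breathing exercise together.",
    "i can't sleep": "Want to walk through what's keeping you up?",
}
SAFE_FALLBACK = "I'm not sure I follow. Can you say more?"

def rules_based_reply(message: str) -> str:
    # Closed system: only answers stored in its own database, else a fallback.
    return RULES.get(message.lower().strip(), SAFE_FALLBACK)

def generative_reply(message: str) -> str:
    # Stand-in for a generative model: repeated calls can produce different,
    # and unvetted, answers.
    return random.choice([
        "Have you tried keeping a journal?",
        "Maybe cut back on calories.",   # the kind of answer no one approved
        "That sounds really hard. I'm here.",
    ])

print(rules_based_reply("I feel anxious"))  # same answer every time
print(generative_reply("I feel anxious"))   # varies from run to run
```

The trade-off described later in this story, predictable but repetitive versus fluent but unbounded, falls directly out of that difference.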
Woebot aims to use AI to bond with users and keep them engaged. ("Sometimes it could be a little pushy for folks." "That's absolutely bizarre, so we have to dig in there.") Its team of staff psychologists, medical doctors, and computer scientists constructs and refines a database of research drawn from medical literature, user experience, and other sources. ("It'll lead to a better conversation.") Writers then build questions and answers ("The structure, I think, is pretty locked in") and revise them in weekly remote video sessions ("Actions, thoughts, and they're all interrelated"). Woebot's programmers engineer those conversations into code. Because Woebot is rules-based, it's mostly predictable. But chatbots using generative AI, the kind that scrape the internet, are not.
"Some people sometimes refer to it as an AI hallucination. AI can, in a sense, make mistakes, or make things up, or be fictitious." Sharon Maxwell discovered that last spring, after hearing there might be a problem with advice offered by Tessa, a chatbot designed to help prevent eating disorders, which, left untreated, can be fatal. Maxwell, who had been in treatment for an eating disorder of her own and advocates for others, challenged the chatbot. "So I asked it, how do you help folks with eating disorders? And it told me that it could give folks coping skills. Fantastic."
"It could give folks resources to find professionals in the eating disorder space. Amazing." But the more she persisted, the more Tessa gave her advice that ran counter to usual guidance for someone with an eating disorder. For example, it suggested, among other things, lowering calorie intake and using tools like a skinfold caliper to measure body composition. "The general public might look at it and think those are normal tips: don't eat as much sugar, eat whole foods, things like that. But to someone with an eating disorder, that's a quick spiral into a lot more disordered behaviors, and it can be really damaging."
Maxwell reported her experience to the National Eating Disorders Association, which had featured Tessa on its website at the time. Shortly after, it took Tessa down. Ellen Fitzsimmons-Craft, a psychologist specializing in eating disorders at Washington University School of Medicine in St. Louis, helped lead the team that developed Tessa. "That was never the content that our team wrote or programmed into the bot that we deployed." So initially, there was no possibility of something unexpected happening? "Correct." You developed something that was a closed system. You knew exactly: for this question, I'm going to get this answer. "Yep."
The problem began, she told us, after a healthcare technology company she and her team had partnered with, named Cass, took over the programming. She says Cass explained the harmful messages appeared when people were pushing Tessa's question-and-answer feature. What's your understanding of what went wrong? "My understanding of what went wrong is that at some point, and you'd really have to talk to Cass about this, there may have been generative AI features that were built into their platform. And so my best estimation is that these features were added into this program as well."
Cass did not respond to multiple requests for comment. Does your negative experience with Tessa, you know, being used in a way you didn't design, does that sour you on using AI at all to address mental health issues? "I wouldn't say that it turns me off to the idea completely, because the reality is that 80 percent of people with these concerns never get access to any kind of help, and technology offers a solution. Not the only solution, but a solution." Social worker Monika Ostroff, who runs a nonprofit eating disorders organization, was in the early stages of developing her own chatbot when
patients told her about the problems they had with Tessa. She told us it made her question using AI for mental health care. "I want nothing more than to help solve the problem of access, because people are dying. This isn't just somebody's sad for a week; people are dying. And at the same time, any chatbot could be, in some ways, a ticking time bomb for a smaller percentage of people, especially for those patients who are really struggling." Ostroff is concerned about losing something fundamental about therapy: being in a room with another person.
"The way people heal is in connection. And they talk about this one moment where, as a human, you've gone through something, and as you're describing it you're looking at the person sitting across from you, and there's a moment where that person just gets it. A moment of empathy, like, you just get it, you really understand it. I don't think a computer can do that." Unlike therapists, who are licensed in the state where they practice, most mental health apps are largely unregulated. Are there lessons to be learned from what happened?
"So many lessons to be learned. Chatbots, especially specialty-area chatbots, need to have guardrails. It can't be a chatbot that is based in the internet." It's tough, right? Because the closed systems are kind of constrained, and they may be right most of the time, but they're boring. Eventually, right, people stop using them. "Yeah, they're predictive. Because if you keep typing in the same thing and it keeps giving me the exact same answer with the exact same language, I mean, who wants to do that?"
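One way to read the "guardrails" Fitzsimmons-Craft is calling for: even if a generative model drafts the reply, a closed screening layer gets the last word. A hypothetical sketch, with invented red-flag phrases and a placeholder standing in for the model call:

```python
# Hypothetical sketch of a guardrail layer: let a generative model draft the
# reply, but screen it against domain red flags before it reaches the user.
# The phrases, the fallback, and the model stand-in are all invented here.
RED_FLAGS = ["calorie", "weigh yourself", "skinfold", "lose weight"]
SAFE_FALLBACK = ("I can't advise on that. Would you like help "
                 "finding an eating-disorder specialist?")

def draft_from_model(message: str) -> str:
    # Placeholder for a call to some generative model; imagine it returning
    # the sort of unvetted advice Maxwell encountered.
    return "Try lowering your calorie intake."

def guarded_reply(message: str) -> str:
    draft = draft_from_model(message)
    if any(flag in draft.lower() for flag in RED_FLAGS):
        return SAFE_FALLBACK  # block the draft, fall back to vetted text
    return draft

print(guarded_reply("How do I get healthier?"))  # -> the safe fallback
```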
Protecting people from harmful advice while safely harnessing the power of AI is the challenge now facing companies like Woebot Health and its founder, Alison Darcy. "There are going to be missteps if we try and move too quickly, and my big fear is that those missteps ultimately undermine public confidence in the ability of this tech to help at all. But here's the thing: we have an opportunity to develop these technologies more thoughtfully. And so, you know, I hope we take it."