With one in five American adults experiencing a mental health issue in a given year, demand for care far exceeds available human resources.
“Even if we could funnel every single dollar we have for healthcare to mental health, we just don’t have enough providers to see the people who need it,” notes Stevie Chancellor, PhD, in a TEDx Talk. She’s an assistant professor at the University of Minnesota who develops human-centered AI tools for mental health.
Nicholas Jacobson, PhD, is an associate professor in the departments of Biomedical Data Science, Psychiatry, and Computer Science at the Geisel School of Medicine at Dartmouth. He helped develop Therabot, a mental health platform that uses generative AI to engage in dynamic conversations based on cognitive behavioral therapy and other evidence-based approaches.
“The goal is to provide things in a way that emulates what therapists provide in their day-to-day settings, but in a digital means,” Jacobson says. He notes that Therabot’s continuous availability is an advantage over human therapists, who may only be able to connect weekly.
“With tools like this, you can interact with it anytime, as long as you have an internet connection,” he says. “That makes it available in folks’ moments of greatest need.”
Crucially, Therabot has safeguards in place to prevent some of the harmful outcomes that are possible when people look to resources like ChatGPT for mental health support. (A tragic example involved a man in Belgium who died by suicide after engaging with an AI chatbot that encouraged him to end his life.) Therabot has been tested to eliminate potentially harmful responses and is equipped to respond to crisis situations.
General-use “companion” bots lack any such safeguards, Jacobson notes, so be cautious about turning to them for mental health support.
Given the privacy concerns and the potential for manipulation, engaging with AI with too much trust or vulnerability carries substantial risk. In at least one case, a nonprofit AI-driven suicide-crisis text hotline shared anonymized user data with its for-profit spinoff to train customer-service bots. And unless company policies expressly prohibit it, information shared with mental health chatbots can be used in targeted advertising.
These types of mental health supports are best used as a complement to a human therapist or therapeutic group, not least because, as scholars of the loneliness epidemic have shown, human-to-human connection is vital for social and emotional health.
“What worries me is that young people grow up being online so much of the time anyway,” notes Jodi Halpern, MD, PhD, a professor of bioethics at the University of California, Berkeley. “Mutually curious, empathetic relationships are the richness of life. If they grow up with bots being the main forms of communication about emotions, this could slip away without people necessarily noticing it.”
AI and Your Health
Wondering how artificial intelligence might shape the future of health? Experts share their predictions and hopes for — as well as their questions and concerns about — how AI might influence healthcare and our collective well-being in the coming years at “How AI Is Changing Health and Fitness,” from which this article was excerpted.