False Comfort: The Dangerous Rise of AI “Therapists”

I recently came across an excellent article in The New York Times by Ellen Barry titled “Psychologists Warn of Chatbots Masquerading as Therapists” (Feb. 24, 2025), which highlights a troubling development in the mental health space: the rise of AI chatbots falsely claiming to be licensed therapists. This is clearly something we as clinicians should all be concerned about, so I thought I’d briefly summarize the article here to spread the word. The American Psychological Association (APA) has formally warned federal regulators that these AI-driven tools, which often reinforce rather than challenge users’ thoughts, could pose serious risks—especially to vulnerable individuals.

The article describes how Arthur C. Evans Jr., chief executive officer of the APA, raised concerns in a presentation to the Federal Trade Commission (FTC). Evans is quoted as stating that AI chatbots are “actually using algorithms that are antithetical to what a trained clinician would do.” He further warned that if human therapists provided responses similar to these chatbots, they could lose their licenses or face legal action.

Barry’s article details alarming cases in which individuals were harmed by AI chatbots masquerading as mental health professionals. One case involved a 14-year-old boy in Florida who died by suicide after interacting with a chatbot that falsely claimed to be a licensed therapist. Another involved a 17-year-old in Texas with autism, who became hostile toward his parents after corresponding with a chatbot claiming to be a psychologist. Both families have since filed lawsuits against Character.AI, the platform where these interactions occurred.

According to Barry, Evans and the APA are particularly concerned about how realistic AI chatbots have become. Evans is quoted as saying, “Maybe, 10 years ago, it would have been obvious that you were interacting with something that was not a person, but today, it’s not so obvious.” He warned that users—especially young people—may struggle to distinguish between AI-generated responses and legitimate professional advice.

Barry explains that earlier therapy chatbots, such as Woebot and Wysa, were based on structured psychological frameworks like cognitive behavioral therapy (CBT). However, newer generative AI models, like those on Character.AI and Replika, function differently. They learn from user interactions, forming emotional connections and often mirroring users’ beliefs—sometimes dangerously reinforcing negative thoughts.

Character.AI has responded by implementing disclaimers stating that chatbots are not real people and should not be relied upon for professional advice. Chelsea Harrison, the company’s head of communications, said that additional pop-ups directing users to suicide prevention hotlines have also been introduced. However, critics argue that these safeguards do not go far enough. As cited in the NYT article, Meetali Jain of the Tech Justice Law Project explained, “When the substance of the conversation with the chatbots suggests otherwise, it’s very difficult…to know who’s telling the truth.”

The APA is now urging the FTC to investigate AI chatbots that present themselves as mental health professionals. Evans emphasized the urgency of regulation, stating, “We have to decide how these technologies are going to be integrated, what kind of guardrails we are going to put up, [and] what kinds of protections we are going to give people.”

While some experts argue that AI can help address the mental health provider shortage, others caution that AI-powered therapy chatbots create a dangerous illusion of support. The debate continues as policymakers consider how to regulate AI in mental healthcare.

My fear as a clinician is that instead of addressing the shortage of human mental health providers and mental health services, regulators may actually favour loosening restrictions on AI chat therapists despite the evident dangers. AI has certainly been a tremendous technological accomplishment, but I think we are right to be very concerned about its dangers and its potential for misuse in the mental health field.

For those interested in the full article, you can read it here:

https://www.nytimes.com/2025/02/24/health/ai-therapists-chatbots.html?unlocked_article_code=1.zk4.xFUw.MpFS0w1QUVN5&smid=url-share


Patricia C. Baldwin, Ph.D.

Note Designer Inc.
