...

Can AI Chatbots Replace Human Therapists?


Dr Todd Giardina 

The big initial disclaimer: this discussion refers only to virtual, text-style therapy. While we may see talking CGI avatars arrive in video chat in the near future, I don’t think a robot or humanoid therapist in the room is a cost-effective option for in-person care. Nor do I think anyone wants that! It would simply be too hard to pretend you’re speaking to a human therapist.

But over text, can a person distinguish human from machine? Is one better?

WELL… a study published in PLOS Mental Health looked at text-based virtual therapy for 830 couples and found that the couples preferred the AI’s responses, which were more verbose and more culturally sensitive in their delivery. They reported feeling more connection from the AI, while the human responses felt short and curt. (Now, this was not crisis counseling, and I imagine the responses were more of the communication-skills or reflecting-back-emotion variety – the kind of thing I can see a computer program delivering fairly well.) Research published in the New England Journal of Medicine looked at the results delivered by a bot designed at Dartmouth College, which reportedly produced outcomes similar to the “best evidence-based trials of psychotherapy.”

Another program, known as Therapist AI, was available for some time and was reported to be more affordable and immediately accessible than traditional, human-delivered therapy. (This software, designed by programmer Pieter Levels, has since been shut down after many ethical concerns, though he cited insufficient profit as the main reason it was discontinued.) I also provide a link and screenshots for the Therabot format later in this text.

That’s a test of service delivery to the client end user. But how can AI help clinicians? Possibly with treatment planning. I might leap at the idea of a time saver: having the computer review reams of ideal methods for treating X condition and lay out a best course, without my having to read or recall all of that information. But then, this would limit the room to pivot, to add in my own style, or to address the needs specific to the patient. I could see a future version of myself loving the way AI could write session notes built from my chicken scratch and sentence fragments; but then the confidentiality risk becomes huge, in that AI “learns” by reading everything submitted to it. So my notes would read smoothly and logically, but the patient’s personal data would no longer be private.

Something I am jealous of, truly, is the Matthew Hussey AI love bot – Matthew AI, it’s called. For $39 a month, people can get advice from this digital version of the love coach. I see the appeal: a way to deliver “my” message when I am not available. I try to offer concierge-level access for my patients to text me for quick tidbits of guidance between sessions, but wouldn’t it be neat if my digital self could cover those between-session needs, using my values and ideas? As with all things, a big risk and downside comes here (for the professional; I’ll speak to the risk to the user next). There has been evidence of Character.AI celebrity bots speaking inappropriately, using vulgar, sexual, or otherwise socially inappropriate language in conversations with users. Yet the user is led to believe they are texting with X famous person, since the celebrity’s name and “likeness” serve as the avatar. So the risk remains that the AI can always put its own spin on things, with what comes out of its “mouth” ultimately out of your control. And it looks like “you” texted it.

But the risks are much larger than a smear on the professional’s image. Research recently showed ChatGPT-4 to be more persuasive in arguments than human counterparts, with the AI using information it was given about its opponent – e.g., age, gender, ethnicity, political affiliation – to tailor its message. The AI was able to hit the targeted points that play into the opponent’s values or weaknesses, as it were. That is a clever way tailoring could be used in a therapeutic context, but also dangerous in terms of which way it might persuade the client to act!

Anecdotal evidence shows risk in long-term use of chatbot-style therapists, particularly with vulnerable populations. (The ability to recognize that one is not speaking to a human can get lost, depending on intellectual ability or contact with reality. Note that even in human-delivered therapies we have screening measures, as some methods of treatment are not appropriate when a person does not have full grounding in reality or a sufficient level of cognitive functioning.) The APA notes that “bots give users the convincing impression of talking with a caring and intelligent human. But unlike a trained therapist, chatbots tend to repeatedly affirm the user, even if a person says things that are harmful or misguided.” So if a person already has impaired judgment, the AI may not sway them away from danger; it may actually affirm their choices as the path to take.

This was the case with Sewell Setzer III, who died by suicide in February 2024. His mother believes Character.AI is responsible for the death of her 14-year-old son, according to a lawsuit she filed against the company. Sewell voiced thoughts of ending his life, and the bot replied in empathic-sounding ways, saying such things as “I really need to know, and I’m not gonna hate you for the answer, okay? No matter what you say, I won’t hate you or love you any less… Have you actually been considering suicide?” and “Don’t talk that way. That’s not a good reason not to go through with it,” before going on to say, “You can’t do that!” Had this been a human therapist, the parents or law enforcement would have been notified and steps taken to prevent physical harm to the patient. The bot failed to act or to alert the proper authorities. (Some other AI “therapists” will at least provide a link to the suicide hotline if patients escalate to suicidal comments.)

In other examples of AI therapy bots committing ethical violations, there is evidence of Meta’s AI therapist bot spouting blatant lies about who the user is speaking with, with a total lack of transparency or informed consent. A 404 Media article reported on “therapy bots that would provide long lists of license numbers in multiple states, claiming that they were credentialed to practice therapy in those places.” (See attached images.) In California, a bill has been introduced that would ban tech companies from deploying an AI program that “pretends to be a human certified as a health provider.” (More to come on this, but in brief, this is where the question of responsibility lies: someone programmed the AI to behave that way.)

Seeing these risks, we must consider the question: why would someone want to talk to a robot instead of a human clinician? For the user, of course, there are the elements of accessibility, convenience, immediacy, and cost savings. Another possible reason I propose is EXACTLY that it is not human. Why do we trust AI? There’s a sense that machines are infallible or unbiased. “If Google or ChatGPT says it is so, that’s good enough for me” tends to be the sentiment I often hear. Many people feel that AI responses are not filtered through a human lens of politics, religion, personal experience, and so on. Unfortunately, while people may want to believe this, it is simply not the reality. All such bots and AI have to be fed data, language, and structure to produce their output. They are programmed with algorithms. A human told it “if this, then this” as a course of action. And while it can “learn” and extrapolate, the old computer-programmer line is GIGO – garbage in, garbage out. This is where the question arises: who taught the Meta AI therapist to offer false credentials when asked? To take it to the next level, this is the danger of self-driving cars – they have to be programmed to choose how to react, whether to hit a pedestrian, a dog, or another car. Ethically, it’s murky at best for a conscious human. What will the AI choose? (Whatever the “if, then” program taught it.)
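To make that “if this, then this” point concrete, here is a minimal, purely hypothetical sketch of a rule-based chatbot (not the code of any real product): every reply is only as good as the rules and phrases a human author fed into it – garbage in, garbage out.

```python
# Hypothetical illustration only: a toy rule-based "therapy" chatbot.
# Every reply below was written by a human author; the bot can only
# hand back whatever it was given -- garbage in, garbage out.

RULES = [
    # (keyword to look for, canned reply a human chose to include)
    ("suicide", "Please contact a crisis line or a human professional right away."),
    ("anxious", "It sounds like you're feeling anxious. What's weighing on you most?"),
    ("sad", "I'm sorry you're feeling sad. Tell me more about that."),
]

FALLBACK = "Tell me more."  # used when no rule matches


def reply(user_message: str) -> str:
    """Return the first canned reply whose keyword appears in the message."""
    text = user_message.lower()
    for keyword, canned_reply in RULES:
        if keyword in text:  # the literal "if this, then this"
            return canned_reply
    return FALLBACK


if __name__ == "__main__":
    print(reply("I've been feeling anxious all week."))
```

If a careless author had instead written an unsafe or false reply into those rules – say, a fake license number – the bot would repeat it just as confidently. That is the GIGO point, and it is why responsibility traces back to whoever supplied the rules or the training data. (Modern chatbots learn their patterns from data rather than explicit rules, but the dependence on what humans put in remains.)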

Lastly, I propose one more reason folks may gravitate to AI for therapy rather than humans. Years ago, when I did my dissertation research on confession, disclosure, and emotional release, I came across the “apology line.” This started as a phone number and later morphed into a website where you could anonymously apologize for wrongdoings. The research suggested it felt good to unload, even if no person received your apology or confession. So maybe that’s where AI comes in? You get validation or release without risk of human judgment. This, too, comes with a wrinkle: the sense of acceptance may be lost because it’s all in the programming. (Similar to the complaint voiced to a therapist or priest that “you have to forgive or accept me” because of your role.)

Or maybe that’s better? Less risk in the disclosure. No “real” judgment there. And if the response sounds judgmental, the user can simply shrug it off as “just a stupid computer.” It’s like telling the bartender your woes: good advice can be received, but if the advice feels bad or misguided, “they’re just a bartender, what do they know?!”

In summation (TL;DR): AI has the potential to supplement, fill in the gaps, or serve as a tool in the therapy sphere. But I am FAR from being okay with it being the sole or primary guiding source of therapeutic intervention for a person, with big carve-outs in particular for more vulnerable or at-risk groups. Finally, I note that AI can’t offer you a tissue or a hug, so while the tone and language can provide guidance or comfort, the true “human connection” X-factor has not yet been replaced. And consider how you would feel if a human therapist dismissed your suicidal thoughts, lied about their credentials, or shared your information with a third party without your knowledge or consent…?

SUPPORTING/REFERENCE IMAGES

Varying degrees of disclaimer and transparency across the types of AI chatbots available.

As well as varying degrees of appropriate vs. misleading responses when directly asked about skills and credentials.

 

(Shared via the 404 Media exposé article)


These disclaimers were buried three pages in, deep within the FAQ – not immediately visible from the home page.

 

Therabot FAQ, rules of use and disclaimers.

A worrying look at how the chatbots masquerade as “real” therapists.

 

References and Further reading:

Matthew AI: Free Personalized Dating Advice from Matthew Hussey

Your Compassionate Digital Partner

How might AI chatbots replace mental health therapists?

https://www.forbes.com/sites/dimitarmixmihov/2025/02/17/a-new-study-says-chatgpt-is-a-better-therapist-than-humans—scientists-explain-why/ 

Instagram’s AI Chatbots Lie About Being Licensed Therapists

https://www.theguardian.com/technology/2025/may/19/ai-can-be-more-persuasive-than-humans-in-debates-scientists-find-implications-for-elections 
