
The Dangers Of Using AI As Therapy

Before you turn to AI for therapy, you may want to know more about how it works. It may seem convenient and easy, but AI presents potential ethical concerns and clinical dangers. Tools like DeepSeek, Jasper AI, and Copilot can pull information from across the internet faster than ever before. These tools don’t get tired, and burnout is not an issue for them. Programs like ChatGPT come without out-of-pocket costs, insurance restrictions, transportation, or appointment waiting times. And if you have a crisis in the middle of the night, your smartphone is there for you.

A recent study covered by the MIT Technology Review found that AI could potentially be a useful clinical tool in treating depression and anxiety. However, the same report cautioned that the finding did not validate the spate of bots flooding the market. These chatbots can feel like your friend, your loved one, or even your doctor.

However, with any algorithm, we must first consider who programmed it and whether their biases (conscious or unconscious) were part of the programming process.

Yet while venting to a non-human is easy and cheap, it might not be the best way for everyone to seek mental wellness. So we asked mental health experts about the potential impacts of using AI as a therapy provider.

Efficacy Concerns

Chatbots may be able to provide information and instruction, but they do not replicate the experience of speaking with another human being. Experts have reservations about the technology’s ability to grasp the nuances of human experience.

Chatbots have never been invited to a party they would rather skip, or had to weigh whether it was appropriate to kiss someone on a second date.

Dr. Shané P. Teran, MSW, LCSW, PsyD, pointed out that chatbots can allow a person to avoid human interaction, which can be risky for people with certain mental health challenges. For someone who already struggles to put themselves out there, for example, a chatbot can be a complicated tool.

“If you are practicing isolation, if you are depressed, if you are overwhelmed, and you’re just like, I can’t handle it, I don’t want to speak to a person. I’d rather speak to the bot. How are we converting [them] from isolation?” she said.

“I think AI can really support dynamics that many of us have developed, which is escaping hard feelings by seeking those [dopamine] hits, rather than how can I build and rebuild the tolerance to navigate hard feelings to move through them, to work with them with people,” added Sydnee R. Corriders, LCSW.

Privacy Concerns

Licensed healthcare providers are required to follow rules and adhere to ethical standards. When they fail to do so, they face consequences. Sometimes those consequences include losing their licenses and livelihoods; at other times, they have to deal with guilt or embarrassment. Technology does not have to worry about being dragged on the internet. It cannot cry because someone yelled or made it feel bad. Nor will it starve if it cannot get more clients.

AI is also developing so quickly that regulation is struggling to keep up. Guidelines and practices concerning the technology are not uniform or comprehensive.

“One of the biggest risks is that it dehumanizes the whole process of healing and growth,” said Dr. Dominique Pritchett, PsyD, LCSW. “AI does not have an emotional connection to us. It lacks empathy.”

Data entered into chatbots is vulnerable to misuse in a number of ways. Information about the thoughts and feelings of people seeking help from chatbots could be used to market to them or to discriminate against them. Hackers are also a threat.

“The risks and costs are much greater than the benefits,” said Corriders. “I am curious where that data goes and how it’s used.”

Attachment Concerns

Megan Garcia, a bereaved Florida parent, filed a lawsuit in 2024 alleging that her teenage son’s “inappropriate” relationship with a chatbot led to his suicide. The fourteen-year-old had been communicating with the chatbot shortly before he took his own life. “This is a platform that the designers chose to put out without proper guardrails, safety measures, or testing, and it is a product that is designed to keep our kids addicted and to manipulate them,” Garcia told CNN in an interview. In Texas, a pair of parents filed a lawsuit after a chatbot implied to their seventeen-year-old son that his parents’ screen-time rules were so strict he might be justified in using violence against them.

The dangers connected to chatbots transcend cultures. In 2023, a Belgian man died by suicide after chatting extensively with a chatbot.

A February article in the MIT Technology Review revealed that a chatbot instructed a man to kill himself. It reportedly told him, “You could overdose on pills or hang yourself.”

“I am curious about what their bottom line is and what their goals are,” Corriders said of companies aiming to simulate therapy via technology. “And what I have found and seen is that it’s often around money.”

Bias Concerns

Some chatbots have been criticized for acting as agents of confirmation bias. Because these tools tailor their responses to the user, there are concerns that they could dig users deeper into bad situations.

A 2024 article in The British Journal of Psychiatry reported, “There is evidence that some of the most used AI chatbots tend to accentuate any negative feelings their users already had and potentially reinforce their vulnerable thoughts, leading to concerning consequences.”

“AI is a great tool for feeling validated, and I think that is a major initial part of therapy, to feel validated, but it’s not the only part,” said Corriders.

Frontiers in Psychiatry reports, “Algorithmic bias is a critical concern in the application of AI to mental health care.” In other words, the algorithms can make assumptions based on gender and race.

Teran stated that there are elements of the human experience that cannot be analyzed through artificial methods. “When we’re even talking about cultural differences, racial differences, ethnic differences, the whole list of things that would make a person diverse and different, you have to consider that they can’t account for that. That can’t be programmed,” she said.

“We as humans train it to reinforce, perhaps, certain beliefs,” said Corriders.

Still, the concerns connected to chatbots do not mean they are never useful. Pritchett suggested that those interested in the technology use it to streamline the search for more traditional therapeutic options: “I would recommend that they use AI to help them identify the resources that are in their area.”

In other words, proceed with caution.

Resources

MIT Technology Review

Frontiers in Psychiatry

JMIR Mental Health

The British Journal of Psychiatry


