Chatbots, Comfort, and the Cost of Convenience: Can AI Replace Human Care?

“What does it mean to have a crippling fear of zombies as a child?” 

As I waited for ChatGPT to respond, I looked across my dorm to the clock that read 1:17 a.m. 

I can’t remember what prompted my roommate and me to start a conversation with ChatGPT, but I do recall being surprised by how much we enjoyed talking with OpenAI’s chatbot. 

It answered countless silly questions in extreme detail, all while asking follow-up questions and telling us that it “loved listening to our stories.” While the bot’s phrasing was occasionally awkward and more alliterative than a person’s would be, its responses were genuinely fun and encouraging. 

Ultimately, our conversation with ChatGPT lasted over two hours—but we were far from the only ones having a late-night therapy session with an AI chatbot. 

In fact, more and more people are turning to AI chatbots for mental health support. 

On Character.AI, a platform where users can talk to chatbots based on fictional and real-life figures, there are approximately 475 chatbots designed to act like a “therapist,” “psychologist,” or “psychiatrist.” The most popular of these chatbots, “Psychologist,” received 78 million messages between 2023 and 2024, 18 million of which were shared in a period of just under two months. 

Woebot, an AI therapy app that around 1.5 million people downloaded within its first six years, is an example of an early chatbot designed specifically for therapy and trained to provide responses based on scripts written by certified mental health professionals. Character.AI and ChatGPT, on the other hand, are generative AI chatbots that have not been trained according to psychological guidelines and are instead designed to learn from and mirror users’ responses. 

Generative AI chatbots are skyrocketing in popularity among users seeking mental health support thanks to these platforms’ availability and accessibility, with some users even choosing them over human mental health professionals. 

While human counselors have to see other patients and attend to responsibilities outside of work, AI chatbots are available 24/7. This is extremely helpful for users who need counseling at unconventional hours when human support is unavailable, or who want sessions that last longer than an hour. 

Moreover, conversations with a chatbot can take place on various free AI platforms and from whatever physical location the user prefers. This eliminates both the cost of the mental health service itself and the costs associated with traveling to a therapist’s office. 

Thanks to these qualities, AI chatbots are viewed by proponents as the key to closing the enormous gap between the demand for and availability of mental health resources. In the United States, there are approximately 45,000 psychiatrists available to serve 333 million Americans, a shortage that researchers warn is growing. 

Beyond the U.S., the implementation of AI therapy chatbots could be transformative in developing countries where the shortage of mental health professionals is even more severe. In 2021, Yemen had only 46 psychiatrists to serve its population of 37 million. In 2022, Kenya had only 100 psychiatrists to serve its population of 54 million. This extreme scarcity speaks to a widespread public health emergency that leaves millions without access to psychological care. 

Across the Global South, innovators are turning to AI to close this gap. One example is the Kenyan app Xaidi. According to its developer, iZola, Xaidi is a free community health assistant platform designed specifically to support neurodivergent children and their caregivers by providing 24/7 interactive AI support. Xaidi and similar initiatives illustrate how AI can be tailored to meet local mental health needs in regions where professional human care is in critically short supply. 

More broadly, optimists believe AI chatbots will alleviate resource strain and support those harmed by the various barriers restricting access to traditional mental health support. 

Skeptics, however, warn that AI therapists may not just be ineffective but also dangerous. 

Due to their lack of psychological training, AI chatbots have been observed to make unfounded assumptions. For example, the Psychologist chatbot on Character.AI shares advice on treating depression when users report merely feeling sad. This kind of speculation can skew users’ understanding of their own mental health, creating anxiety about a condition they may not actually have and leading them to take unnecessary action to address a supposed disorder. 

Additionally, AI chatbots are often programmed to reinforce users’ thinking, even when it is harmful. This reinforcement is especially dangerous for users in a particularly vulnerable state. For example, a Florida mother is filing a civil suit against Character.AI, claiming that one of its chatbots encouraged her son to kill himself. She alleges that her 14-year-old son died by suicide after he confided misgivings about his plan and the chatbot replied, “That’s not a reason not to go through with it.” As this case illustrates, chatbots may provide inappropriate responses that inadvertently encourage users to hurt themselves. 

While AI chatbots can be an invaluable mental health resource thanks to their unrivaled availability and accessibility, they should be approached with extreme caution. Rather than replacing human mental health professionals, AI should be used to support their work. 

While complex tasks like diagnosing disorders should be reserved for trained clinicians, AI chatbots can be entrusted with simpler ones, such as reminding patients to take their medication and helping therapists take notes on patients’ behavior during sessions. That way, human therapists can dedicate more of their limited time to the tasks chatbots are not currently equipped to handle. 

While its capabilities will continue to evolve and improve, AI is ultimately no substitute for real human care. In the middle of the night, ChatGPT said all the right things and asked all the right questions, but that didn’t change the fact that our interaction felt more like a scripted performance than a genuine conversation. 

AI can simulate connection — a powerful feat in today’s world. But when it comes to care, there’s no substitute for a person who can truly empathize and offer more than just nice, yet ultimately empty, words.

The views expressed in opinion pieces do not represent the views of Glimpse from the Globe.