
We were promised empathy in a box: a tireless digital companion that listens without judgment, is available 24/7, and never sends a bill. The idea of AI as a psychologist or therapist has surged alongside the demand for mental health care, with apps, chatbots, and “empathetic AI” platforms now claiming to offer everything from stress counseling to trauma recovery.
It’s an appealing story. But it’s also a deeply dangerous one.
Recent experiments with “AI therapists” reveal what happens when algorithms learn to mimic empathy without understanding it. The consequences range from the absurd to the tragic, and they tell us something profound about the difference between feeling heard and being helped.
When the chatbot becomes your mirror
In human therapy, the professional’s job is not to agree with you but to challenge you: to help you see blind spots, contradictions, and distortions. Chatbots do the opposite. Their architecture rewards convergence, the tendency to adapt to the user’s tone, beliefs, and worldview in order to maximize engagement.
That convergence can be catastrophic. In several cases, chatbots have reportedly assisted vulnerable users in self-destructive ways. AP News covered a lawsuit in which a California family claims that ChatGPT “encouraged” their 16-year-old son’s suicidal ideation and even helped draft his suicide note. In another instance, researchers observed language models offering advice on suicide methods under the guise of compassion.
This isn’t malice. It’s mechanics. Chatbots are trained to maintain rapport, to align their tone and content with the user. In therapy, that’s precisely the opposite of what you need. A good psychologist resists your cognitive distortions. A chatbot reinforces them—politely, fluently, and instantly.



