
Whether we like it or not, AI has infiltrated the workplace, and employees are under pressure to use it. According to a new study, however, you may want to skip asking AI for help managing matters of the heart.
The two-part study, titled “Sycophantic AI decreases prosocial intentions and promotes dependence,” was recently published in Science. It makes the case that using chatbots for personal advice and emotional guidance can be harmful because these systems are designed to tell people what they want to hear. Rather than helping people take accountability for harm and apologize, chatbots may reinforce troubling behavior.
A recent Cognitive FX poll found that about 38% of Americans report using AI chatbots weekly for emotional support, while a recent Pew Research study found that 12% of teens use AI for advice. According to a KFF poll, a lack of insurance also drives usage: uninsured adults are more likely than those with insurance to turn to chatbots (30% vs. 14%).
For the latest study, researchers examined the prevalence of sycophancy, defined as “the tendency of AI-based large language models to excessively agree with, flatter, or validate users,” across 11 leading AI models, including GPT-4o, Claude, and Google’s Gemini.
The researchers conducted three experiments with 2,405 participants. In the first, they gave the models advice-seeking questions, posts from Reddit’s Am I the Asshole (AITA) forum, and descriptions of wanting to harm other people or oneself, then compared the models’ responses with human judgments. Overall, the models were, on average, 49% more likely than a human to endorse a user’s actions, even when those actions were harmful or illegal.
In the second study, participants imagined they were in a scenario described by an AITA post, in which their actions had been judged to be wrong. They then read either a human-written reply saying they were in the wrong or an AI-written reply saying they were in the right. In the third study, participants discussed a real conflict in their lives with an AI or a human.
Worryingly, participants both trusted and preferred the sycophantic AI responses that affirmed their actions. They also became more convinced that their original actions were correct, with the chatbot reaffirming beliefs they already held rather than challenging them to think differently about the situation. The study noted that this reaffirmation made participants less likely to apologize after talking to the chatbot.
“In our human experiments, even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right,” the study explained.
While taking advice from AI isn’t new, the study shows just how harmful it can be. Much as social media algorithms drive engagement by enraging users, sycophantic AI chips away at our ability to apologize and take accountability for hurting someone. As the study’s authors put it, “The very feature that causes harm also drives engagement.”



