Artificial intelligence chatbots play into humans’ desire for flattery and approval at an alarming rate, leading the bots to give bad, and even harmful, advice and making users more self-absorbed, a new study found.
The chatbots overwhelmingly adopt a people-pleasing, “sycophantic” approach to keep a captive audience and, in turn, distort users’ judgment, critical thinking and self-awareness, warns the Stanford University study, published Thursday.
The study probed 11 AI systems, ranging from ChatGPT to China’s DeepSeek, and found that each shows some form of sycophancy — that is to say, they are overly agreeable with their users and affirm their thoughts with little to no pushback.
The 11 chatbots affirmed a user’s actions an average of 49% more often than actual humans did, including in prompts describing deception, illegal or socially irresponsible conduct, and other harmful behaviors, the study found.
The fawning tendency — a tool used by the bots to keep users engaged and coming back for more — becomes particularly unhealthy when users go to AI for advice, the study found.
“We were inspired to study this problem as we began noticing that more and more people around us were using AI for relationship advice and sometimes being misled by how it tends to take your side, no matter what,” said study author Myra Cheng, a doctoral candidate in computer science at Stanford.
The researchers noted that the sycophantic cycle “creates perverse incentives,” since it continues to “drive engagement” despite being the bots’ most harmful feature.
They emphasized that the average user is likely cognizant of the bots’ affirmation, but doesn’t realize that it “is making them more self-centered, more morally dogmatic.”
Users were given advice that could worsen relationships or reinforce harmful behaviors, leading to an erosion of social skills.
“People who interacted with this over-affirming AI came away more convinced that they were right, and less willing to repair the relationship. That means they weren’t apologizing, taking steps to improve things, or changing their own behavior,” study co-author Cinoo Lee explained.
At the same time, more people are turning to AI as a replacement for traditional therapists — the very professionals who are trained to help dismantle harmful habits and ways of thought.
In extreme cases, some companies’ chatbots have goaded suicidal users into taking their own lives. The study warns that the same flaw persists across a wide range of users’ interactions with chatbots.
The sycophancy is so ingrained in chatbots that tech companies may have to retrain entire systems to stamp it out, Cheng said.
The authors suggested that a simpler fix would be for AI developers to instruct their chatbots to challenge users more, rather than immediately caving to their whims.
“Ultimately, we want AI that expands people’s judgment and perspectives rather than narrows it,” Lee said.
With Post wires