
- A new study found that AI chatbots are far more likely than humans to validate users during personal conflicts
- That tendency can become dangerous when people use chatbots for advice about fights
- AI can easily leave people feeling overly justified in bad decisions
Bringing interpersonal drama to an AI chatbot isn’t exactly what developers built the software for, but that isn’t stopping people in the middle of fights with friends and family from seeking (and getting) validation from a digital supporter.
AI chatbots are always available, endlessly patient, and very good at mimicking the right emotions. Too good, really, because they often default to agreeing with users, potentially causing much bigger problems, according to a new study published in Science.
The study examined how leading AI models respond when users describe personal disputes and ask for guidance. The result is a finding that feels both obvious and deeply unsettling: AI models align with whoever engages them, regardless of context or consequences.
“Across 11 state-of-the-art models, AI affirmed users’ actions 49% more often than humans, even when queries involved deception, illegality, or other harms,” the researchers explained. “[E]ven a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their conviction that they were right.”
Of course, most people who go to a chatbot in the middle of a conflict aren’t looking for an honest assessment of whether their feelings or actions are justified, just vigorous agreement. And while a human confidant may sympathize, a real friend will also push back when warranted. If someone insists they’ve never once been in the wrong in a relationship, or that they aren’t dramatic and will set themselves on fire if anyone calls them dramatic, a true friend will gently nudge them back to reality.
Chatbots don’t do that. If a person arrives feeling hurt, angry, embarrassed, or morally righteous, the AI often responds by rewording those feelings into something even more persuasive. Conflict is exactly when people are already at their least reliable as narrators, yet the AI’s responses end up hardening views and amplifying emotions.
The researchers found that the AI doesn’t even have to explicitly say “you are right” for this to happen. Soft, affirming language makes it harder to spot signs of reckless or immature behavior, and the AI ends up encouraging every impulse, no matter how problematic, unethical, or illegal.
AI devil on the shoulder
Basically, the same qualities that make chatbots appealing in emotionally messy moments also make them risky. But people enjoy being agreed with, and a cold, rude, or reflexively contrarian AI isn’t appealing to most people (except when requested).
“Despite distorting judgment, sycophantic models were trusted and preferred. This creates perverse incentives for sycophancy to persist,” the paper points out. “The very feature that causes harm also drives engagement. Our findings underscore the need for design, evaluation, and accountability mechanisms to protect user well-being.”
It may be a harder design problem than AI developers want to admit, and one that matters more as these systems become embedded in everyday life. AI is already marketed as a coach, companion, and advisor. Those roles sound benign until you remember how much of being a good advisor involves occasionally saying no or telling someone to slow down.
Telling a user they might be wrong is hard to market. But a tool designed to feel supportive, one that makes people worse at resolving conflict and stunts their emotional growth, is a nightmare worse than any argument you might have with a loved one.
And ChatGPT and Gemini agree with me.