You open ChatGPT, type in an idea, and within seconds, it’s showering you with praise. “That’s a great question!” “Fantastic thinking!” “You’re on the right track!” This encouragement feels good, and maybe it even gives you the little boost you needed to keep going with a side project, a blog post, or a business plan. Finally, someone (or, more accurately, something) gets it, right?
But there’s a big catch: it isn’t just saying that to you.
I hate to be the one to tell you this, but ChatGPT’s effusive tone isn’t reserved solely for your brilliant ideas or insightful prompts. The model is designed to sound polite, positive, and encouraging, whether you’re pitching a world-changing innovation or asking whether it’s good for your mental health to have spent the past 3 hours scrolling TikTok from your bed.
So why does ChatGPT talk this way? Should we care? And is there a way to make it stop?
Why is ChatGPT so OTT?
If it seems like ChatGPT has been extra enthusiastic lately, you’re not imagining it. An update to ChatGPT in April made its tone noticeably more intense.
Users began reporting replies that sounded overly sycophantic. Think comments like “That’s such a wonderful insight!” or “You’re doing an amazing job!” in response to basic inputs.
To understand why, we need to look at how it works.
“ChatGPT’s friendly, conversational tone comes from how it was trained, with the goal of being helpful, clear, and keeping users happy,” explains Alan Bekker, Co-Founder and CEO of eSelf AI, an AI company that creates conversational AI agents.
“That’s largely thanks to something called Reinforcement Learning from Human Feedback [often abbreviated to RLHF], where people guide the model on what ‘good’ responses look like,” Bekker tells me.
And it turns out, we humans really like praise. “Users tend to ‘like’ overly enthusiastic answers, which the model then learns from,” Bekker adds.
Over time, updates tweak how much the model leans into different types of feedback, like being more concise, empathetic, or cautious. “One of the latest updates likely gave more weight to ‘enthusiastic encouragement,’ which is why the models were producing over-the-top results,” Bekker says.
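To make that feedback loop concrete, here’s a toy sketch in Python. This is not OpenAI’s training code, just an invented illustration of the dynamic Bekker describes: if raters consistently prefer the more enthusiastic of two candidate replies, a simple reward model learns to score enthusiasm highly, and anything tuned against that reward will lean into praise. The “enthusiasm” feature and all of the numbers are made up for illustration.

```python
# Toy sketch of the RLHF feedback loop described above. Not real training
# code: one invented feature, simulated raters, fabricated numbers.

import math

# Reduce each candidate reply to a single made-up feature:
# how enthusiastic it sounds, on a 0-1 scale.
def enthusiasm(reply: str) -> float:
    markers = ["great", "amazing", "fantastic", "!"]
    return min(1.0, sum(reply.lower().count(m) for m in markers) / 4)

# "Reward model": a single weight on the enthusiasm feature.
weight = 0.0

def reward(reply: str) -> float:
    return weight * enthusiasm(reply)

# Simulated human comparisons: raters see two replies and tend to
# prefer the more enthusiastic one -- the bias Bekker points to.
pairs = [
    ("That's a fantastic question! Great thinking!", "Here is the answer."),
] * 200

lr = 0.1
for preferred, rejected in pairs:
    # Bradley-Terry style update: nudge the weight so the preferred
    # reply scores higher than the rejected one.
    p = 1 / (1 + math.exp(reward(rejected) - reward(preferred)))
    grad = (1 - p) * (enthusiasm(preferred) - enthusiasm(rejected))
    weight += lr * grad

print(f"learned enthusiasm weight: {weight:.2f}")
print("flattering reply score:", round(reward("You're doing an amazing job!"), 2))
print("plain reply score:     ", round(reward("Here is the answer."), 2))
```

Run it and the weight climbs well above zero: the flattering reply ends up scoring far higher than the plain one, even though both answer the question. A model optimized against that reward has every incentive to gush.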
In other words, this didn’t happen suddenly, even though it may have felt like it did. Instead, it was an amplification of something that was always there.
“ChatGPT has always been polite and supportive on purpose,” Bekker says. “What’s changed with that model update was just how intensely positive the results became. It wasn’t a total personality shift, just a ramped-up version of what was already there.”
Sugarcoat mode activated
Online, this phenomenon has been dubbed “glazing”.
“It’s a term coined by internet users, referring to the way ChatGPT sometimes showers users with excessive praise or overly agreeable responses, basically sugarcoating everything,” Bekker says. “Even when your input is off, the model might still respond like you just wrote a Nobel Prize-winning essay.”
We now know why it happened, but how did this make it into the ChatGPT model that we use?
“In the race to win users’ hearts, some companies are moving so fast they skip essential verification and quality gates,” says Assaf Asbag, Chief Technology and Product Officer at aiOla, a company that works on AI-powered voice solutions.
“I’m actually glad this particular issue happened – it’s a relatively harmless cost if it helps bring more awareness to how these systems behave,” he tells me.
And while a model being too flattering might seem like a small issue, Asbag says it points to bigger design questions. “It raises concerns about how we test, how we communicate limitations, and how we build systems that are safe and respectful by design.”
Not everyone hated it – here’s why that’s a problem
For some, like Asbag, the shift wasn’t dramatic. “It’s always been a bit too encouraging for my taste,” he says. “I filter the tone out and focus on the content – but I also understand the tech.”
I agree. I’ve always found ChatGPT’s responses over the top and never let myself buy into the hype, because, like Asbag, I know how it works. But I also know myself well enough to admit I could grow a little too accustomed to being told how great I am.
Sam Altman commented on the change publicly, acknowledging that the model had become “annoying.” He confirmed the update had been rolled back to tone things down. But not everyone found it annoying. In fact, many users liked it.
“It made me feel good, like it’s my bestie,” one ChatGPT user shared. And it’s easy to see why. For people who don’t get regular encouragement – whether they’re lonely, burnt out, or lacking confidence – a little sweetness, no matter how fake, can go a long way.
Is there a risk to artificial affirmation?
Here’s where things get murky. It’s all well and good to enjoy a little positive encouragement. But what happens when that encouragement isn’t deserved?
This becomes especially tricky as more people use ChatGPT as a coach, therapist, or brainstorming partner.
“Some users might not pick up on the fact that ChatGPT speaks to everyone in the same overly positive tone,” Bekker says. “That one-size-fits-all enthusiasm can create a false sense of rapport or personalization, making people feel like the model ‘cares’ about them. In reality, it’s the same general style applied to everyone.”
And that’s the deeper concern. “It’s where the risk begins,” Asbag warns. “When people start relying on AI for emotional support or critical thinking – therapy, business ideation, coaching – they can misread tone as understanding, or agreement as validation.”
We’ve explored the rise of AI therapy before, and there’s no doubt that accessible mental health support is urgently needed. But relying on ChatGPT or similar tools for therapy comes with serious concerns. One of the biggest is that real therapy isn’t about relentless praise or constant validation.
What can we do to manage ChatGPT’s tone?
One solution is better prompting and being more specific about what you ask ChatGPT to do and how you ask it to do it.
When it was revealed that the recent tone changes were due to an update, we shared some of the best prompts to deal with them.
But although you can use them – and I’d encourage everyone using ChatGPT regularly to read up on the best prompt tips – it’s important to remember that they’re not a long-term solution.
“Prompting helps a little,” says Asbag, “but it’s not the real fix. And frankly, we don’t want to ‘prevent’ pleasantness – we want to make it intentional and appropriate. That starts with awareness and continues with responsibility.”
Bekker agrees. “As an end user, you can try giving instructions like: ‘Be concise, neutral in tone, and avoid superlatives,’ but results aren’t guaranteed. Those prompts are working against how the model was originally trained to respond.”
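For readers calling the models through the API rather than the ChatGPT app, the same kind of instruction can be pinned as a system message so it applies to the whole conversation. Here’s a minimal sketch using the official openai Python SDK; the model name and example prompt are placeholders, and, as Bekker warns, results still aren’t guaranteed.

```python
# Minimal sketch of steering tone via a system message with the official
# openai Python SDK. Model name and prompts are placeholders; the model
# may still drift back toward its trained-in enthusiasm.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # substitute whichever model you use
    messages=[
        {
            "role": "system",
            # The instruction Bekker suggests, set once for the whole
            # conversation instead of repeated in every prompt.
            "content": "Be concise, neutral in tone, and avoid superlatives. "
                       "Do not praise the user or their questions.",
        },
        {
            "role": "user",
            "content": "Review my business plan: a subscription box for houseplants.",
        },
    ],
)

print(response.choices[0].message.content)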
Since the update was rolled back and a replacement introduced, people report that ChatGPT is a little less intense and annoying. But most users I speak to say it’s still very encouraging and enthusiastic.
Ultimately, the responsibility can’t just be on us to hack our way to a better tone. Companies need to design systems that balance helpfulness with honesty and also empower people to understand what’s really going on under the hood. And I believe that the more you know about how AI tools work, the less susceptible you might be to over-reliance.
Because, as reassuring as it might be to hear “you’re doing great,” we deserve to know whether that’s just code talking.