
ChatGPT’s new GPT-5.3 Instant model will stop telling you to calm down

05-03-2026 02:10 PM


Ammon News - Take a breath, stop spiraling. You’re not crazy, you’re just stressed. And honestly, that’s okay.

If you felt immediately triggered reading these words, you’re probably also sick of ChatGPT constantly talking to you as if you’re in some sort of crisis and need delicate handling. Now, things may be improving. OpenAI says its new model, GPT-5.3 Instant, will reduce the “cringe” and other “preachy disclaimers.”

According to the model’s release notes, the GPT-5.3 update will focus on the user experience, including things like tone, relevance, and conversational flow — areas that may not show up in benchmarks, but can make ChatGPT feel frustrating, the company said.

Or, as OpenAI put it on X, “We heard your feedback loud and clear, and 5.3 Instant reduces the cringe.”

The company’s example showed the same query answered by the GPT-5.2 Instant model and the GPT-5.3 Instant model. In the former, the chatbot’s response starts, “First of all — you’re not broken,” a common phrase that’s been getting under everyone’s skin lately.

In the updated model, the chatbot instead acknowledges the difficulty of the situation, without trying to directly reassure the user.

The insufferable tone of ChatGPT’s 5.2 model has been annoying users to the point that some have even canceled their subscriptions, according to numerous posts on social media. (It was a major point of discussion on the ChatGPT subreddit, for instance, before the Pentagon deal stole the focus.)

People complained that this type of language, where the bot talks to you as if it assumes you’re panicking or stressed when you were just seeking information, comes across as condescending.

Often, ChatGPT replied to users with reminders to breathe and other attempts at reassurance, even when the situation didn’t warrant it. In some cases, this made users feel infantilized, or as if the bot was making assumptions about their mental state that simply weren’t true.

As one Reddit user recently pointed out, “no one has ever calmed down in all the history of telling someone to calm down.”

It’s understandable that OpenAI would attempt to implement guardrails of some kind, especially as it faces multiple lawsuits accusing the chatbot of contributing to negative mental health effects, in some cases including suicide.

But there’s a delicate balance to strike between responding with empathy and simply providing quick, factual answers. After all, Google never asks about your feelings when you’re searching for information.
