OpenAI Reverses Course After ChatGPT Gets Weird

DeepSeek AI, Gemini, and ChatGPT apps seen on a smartphone. Stafford, United Kingdom, January 26, 2025. Credit: Ascannio / Adobe Stock

Although AI chatbots can seem almost human sometimes, it’s always fun to get a reminder that they’re still powered by algorithms that aren’t always great at picking up social cues or taking a hint.

Such is the case with OpenAI, which proved how one update can throw off a chatbot’s entire day. Over the weekend, ChatGPT got downright weird, becoming obsequious and agreeable to an almost syrupy level. Or, as Engadget put it without mincing words, ChatGPT became “an ass-kissing weirdo.”

After several folks reported that ChatGPT was making them feel awkward and uncomfortable, and some even suggested it was validating potentially harmful behavior and praising users for antisocial behavior, OpenAI boss Sam Altman chimed in and admitted that recent updates had “made the personality too sycophant-y and annoying,” and promised to roll back the update.


Yesterday, OpenAI officially announced it had completed the rollback, and people should once again be seeing more measured responses rather than the “overly flattering or agreeable” version the latest model had introduced.

We have rolled back last week’s GPT-4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable—often described as sycophantic.

The post continued to explain how the “ass-kissing weirdo” version came about.

In last week’s GPT-4o update, we made adjustments aimed at improving the model’s default personality to make it feel more intuitive and effective across a variety of tasks. […] However, in this update, we focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT-4o skewed towards responses that were overly supportive but disingenuous.

While that’s a bit vague, OpenAI goes on to share specific ways in which it plans to address the problem, including refining its training techniques “to explicitly steer the model away from sycophancy,” building more guardrails into the system, providing more ways for users to test and give direct feedback before a new model is deployed to the whole world, and continuing to expand “its evaluations and research to help identify issues beyond sycophancy in the future.”

This is an example of how building and training AI models can be a surprisingly unpredictable process. OpenAI set out to build a kinder and gentler personality for ChatGPT with the goal of making it more supportive. However, the AI took that ball and ran with it to the point where it supported anything and everything that was fed into it. One person even reported being told they were a “prophet sent by God.”

It’s in moments like this that Siri’s lack of advancement shows a silver lining. While Apple’s voice assistant has come up with some pretty offbeat stuff over the years, it has yet to tell me I’m a god or praise me for killing a bunch of animals to save a toaster.
