OpenAI CEO admits ChatGPT’s personality is ‘too annoying’

    By Monica J. White
Published April 28, 2025

Have you noticed that ChatGPT has gotten a little personal lately? It’s not just you. OpenAI’s CEO, Sam Altman, admitted last night that the last couple of updates to GPT-4o have affected the chatbot’s personality, and not in a good way.

If you use ChatGPT often enough, you might have noticed a shift in its behavior lately. Part of it may come down to its memory feature: in my experience, the chatbot addresses you differently when it can't draw on past chats to guide how you'd (potentially) want it to respond. But part of it is that somewhere along the way, OpenAI turned ChatGPT into a so-called "yes man" — a tool that agrees with you instead of challenging you — and sometimes the outcome can be a touch obnoxious.

Sam Altman, OpenAI’s CEO, seems aware of the change. He referred to the chatbot’s personality as “too sycophant-y” and “annoying,” all the while pointing out that “there are some very good parts of it.” Altman also said that OpenAI is working on fixes as soon as possible, and some might roll out as soon as today, with others to follow later this week.

This prompted one user to respond, asking whether it’d be possible to go back to the old ChatGPT personality — the one that was polite but not a full-on cheerleader. As an alternative, the user asked whether it’d be possible to distinguish between the “old” and “new” personalities. Altman responded: “Yeah, eventually we clearly need to be able to offer multiple options.” That’d be an interesting and useful addition to ChatGPT.

Fully getting rid of ChatGPT's friendly, encouraging traits would backfire, too — no matter how annoying they can be. While many use ChatGPT for work and research, chatbots have now permeated everyday life, and plenty of people turn to them to talk through their problems or fears. A single personality setting is very limiting in those situations.

With that said, it’s true that ChatGPT is getting a tad too personal. In two recent conversations, it referred to me as “sweetheart,” and I’m not going to lie, that made me feel really uncomfortable. Let’s hope that OpenAI finds a way to dial it back and make it more of a useful tool than something that tries too hard to be our friend.
