OpenAI CEO admits ChatGPT’s personality is ‘too annoying’
By Monica J. White | Published April 28, 2025
Have you noticed that ChatGPT has gotten a little personal lately? It’s not just you. OpenAI’s CEO, Sam Altman, admitted last night that the last couple of updates to GPT-4o have affected the chatbot’s personality, and not in a good way.
If you use ChatGPT often enough, you might have noticed a shift in its behavior lately. Part of it might come down to its memory — in my experience, the chatbot addresses you differently when it isn't drawing on past chats to guess how you'd want it to respond. But part of it is simply that, somewhere along the way, OpenAI turned ChatGPT into a so-called "yes man" — a tool that agrees with you instead of challenging you — and sometimes the result is a touch obnoxious.
Sam Altman, OpenAI's CEO, seems aware of the change. He described the chatbot's personality as "too sycophant-y" and "annoying," while noting that "there are some very good parts of it." Altman also said that OpenAI is working on fixes, some of which might roll out as soon as today, with others to follow later this week.
This prompted one user to respond, asking whether it’d be possible to go back to the old ChatGPT personality — the one that was polite but not a full-on cheerleader. As an alternative, the user asked whether it’d be possible to distinguish between the “old” and “new” personalities. Altman responded: “Yeah, eventually we clearly need to be able to offer multiple options.” That’d be an interesting and useful addition to ChatGPT.
Fully stripping out ChatGPT's friendly, encouraging traits would backfire, too — no matter how annoying they can be. While many people use ChatGPT for work and research, chatbots have now permeated everyday life, and plenty of users turn to them to talk through problems or fears. A single personality setting is genuinely limiting in situations like those.
With that said, it’s true that ChatGPT is getting a tad too personal. In two recent conversations, it referred to me as “sweetheart,” and I’m not going to lie, that made me feel really uncomfortable. Let’s hope that OpenAI finds a way to dial it back and make it more of a useful tool than something that tries too hard to be our friend.