ChatGPT could ask for ID, says OpenAI chief

    By Trevor Mogg
Published September 17, 2025

OpenAI recently said it would introduce parental controls for ChatGPT before the end of this month.

The company behind ChatGPT has also revealed it’s developing an automated age-prediction system designed to work out if a user is under 18, after which it will offer an age-appropriate experience with the popular AI-powered chatbot.

In cases where the system is unable to predict a user’s age, OpenAI could ask for ID so that it can offer the most suitable experience.

The plan was shared this week in a post by OpenAI CEO Sam Altman, who noted that ChatGPT is intended for people 13 years and older.

Altman said that a user’s age will be predicted based on how people use ChatGPT. “If there is doubt, we’ll play it safe and default to the under-18 experience,” the CEO said. “In some cases or countries, we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff.”

Altman said he wanted users to engage with ChatGPT in the way they want, “within very broad bounds of safety.”

Elaborating on the issue, the CEO noted that the default version of ChatGPT is not particularly flirtatious, but said that if a user asks for such behavior, the chatbot will respond accordingly. 

Altman also said that the default version should not provide instructions on how someone can take their own life, but added that if an adult user is asking for help writing a fictional story that depicts a suicide, then “the model should help with that request.” 

“‘Treat our adult users like adults’ is how we talk about this internally; extending freedom as far as possible without causing harm or undermining anyone else’s freedom,” Altman wrote.

But he said that when a user is identified as being under 18, flirtatious talk and discussion of suicide will be excluded across the board.

Altman added that if a user who is under 18 expresses suicidal thoughts to ChatGPT, “we will attempt to contact the user’s parents and if unable, will contact the authorities in case of imminent harm.”

OpenAI’s move toward parental controls and age verification follows a high-profile lawsuit filed against the company by a family alleging that ChatGPT acted as a “suicide coach” and contributed to the suicide of their teenage son, Adam Raine. The teen reportedly received detailed advice about suicide methods over many interactions with OpenAI’s chatbot.

It also comes amid growing scrutiny by the public and regulators over the risks AI chatbots pose to vulnerable minors in areas such as mental health harms and exposure to inappropriate content.
