Paige Henley
Published on: September 23, 2025
OpenAI is preparing stricter safety features for ChatGPT as it faces mounting lawsuits and scrutiny over teen protection. CEO Sam Altman confirmed that the company will soon require users to verify their age when its systems suspect they are under 18, saying the changes are meant to “prioritize safety ahead of privacy and freedom for teens.”
“When you log in to ChatGPT, a banner will appear asking you to verify your age,” the company explained. “You will have 60 days to complete this process, after which your access to ChatGPT will be blocked until you successfully complete the age verification process.”
OpenAI will rely on the third-party identity verification service Yoti to perform the checks. “You will be asked to enter the necessary details to confirm your age,” the post continued. “Depending on the method you choose, you may be asked to take a selfie, upload a valid ID, or use the Yoti app. Once your age is verified, you will be redirected to ChatGPT and can continue using the service as usual.”
The system will automatically place users identified as under 18 into a restricted version of ChatGPT that blocks sexual content and applies additional safeguards. Parents will soon be able to link accounts to monitor chats, disable history, enforce blackout hours, and receive alerts if the AI detects signs of acute distress. OpenAI noted that in some cases, “we may involve law enforcement as a next step.”
The rollout comes as lawmakers question whether AI can reliably predict a user's age. Researchers warn that language-based cues are easily manipulated, while recent lawsuits accuse OpenAI of failing to prevent harm during long ChatGPT sessions with vulnerable teens.
Despite concerns about privacy trade-offs, Altman stood by the decision. “Not everyone will agree with how we are resolving that conflict,” he said, “but we believe it is a worthy tradeoff.”