OpenAI adds age prediction to ChatGPT to strengthen teen safety
OpenAI is rolling out age prediction on ChatGPT consumer plans to help determine whether an account likely belongs to someone under 18. Age prediction builds on protections already in place.
ChatGPT relies on an age prediction model that evaluates behavioral and account-level signals. These include how long an account has existed, typical times of activity, usage patterns over time, and a stated age when one is provided. According to the company, deploying the model during live use allows teams to observe which signals improve accuracy and adjust the system over time.
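OpenAI has not published how its model combines these signals. Purely as a hypothetical illustration, a signal-based estimator of this kind might score weak indicators and treat missing information as a reason to err toward the safer outcome; every signal name, weight, and threshold below is invented.

```python
# Hypothetical sketch of signal-based age estimation.
# The signals, weights, and threshold are invented for illustration;
# they do not reflect OpenAI's actual model.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AccountSignals:
    account_age_days: int         # how long the account has existed
    late_night_ratio: float       # share of activity at atypical hours
    stated_age: Optional[int]     # self-reported age, if provided


def likely_minor(s: AccountSignals, threshold: float = 0.5) -> bool:
    """Combine weak signals into an under-18 likelihood estimate."""
    score = 0.0
    if s.stated_age is None:
        score += 0.3              # incomplete info: lean toward the safer default
    elif s.stated_age < 18:
        score += 0.6              # a stated age is a strong signal
    if s.account_age_days < 90:
        score += 0.2              # very new accounts carry less history
    if s.late_night_ratio > 0.4:
        score += 0.2
    return score >= threshold


# New account, no stated age: uncertain, so treated as a minor.
print(likely_minor(AccountSignals(30, 0.5, None)))    # True
# Established account with a stated adult age.
print(likely_minor(AccountSignals(800, 0.1, 25)))     # False
```

The deliberate asymmetry, where uncertainty adds to the minor score rather than subtracting from it, mirrors the article's point that incomplete age information defaults to the safer experience.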
When the model estimates an account may belong to a minor, ChatGPT automatically applies a set of safeguards. These protections are intended to reduce exposure to categories such as graphic violence, gory material, sexual or violent role play, depictions of self-harm, viral challenges linked to risky behavior, and content that promotes extreme beauty standards or unhealthy dieting.
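A safeguard layer like this can be pictured as a category gate keyed on the age estimate. The category names below follow the article's list, but the gating logic and identifiers are hypothetical; OpenAI has not described its implementation.

```python
# Hypothetical sketch of category-based teen safeguards.
# Category labels paraphrase the article; the gate itself is invented.
RESTRICTED_FOR_MINORS = {
    "graphic_violence",
    "gory_material",
    "sexual_or_violent_roleplay",
    "self_harm_depictions",
    "risky_viral_challenges",
    "extreme_beauty_or_dieting",
}


def allowed(category: str, is_minor_estimate: bool) -> bool:
    """Block restricted categories for accounts estimated to belong to minors."""
    return not (is_minor_estimate and category in RESTRICTED_FOR_MINORS)


print(allowed("gory_material", True))    # False: safeguard applies
print(allowed("gory_material", False))   # True: adult account
```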
The approach draws on expert input and academic research on child development, including known differences among teens in risk perception, impulse control, peer influence, and emotional regulation. OpenAI also noted ongoing work to improve these protections and address attempts to bypass safeguards. When age information is uncertain or incomplete, the system defaults to a safer experience.
Alongside these measures, parents can further customize a teen’s experience through parental controls. Available options include setting quiet hours, managing features such as memory or model training, and receiving notifications when signs of acute distress are detected.
In the EU, age prediction is scheduled to roll out in the coming weeks to account for regional requirements.