ChatGPT Adds New Age Prediction Feature to Protect Underage Users

OpenAI has begun rolling out a new age prediction system on its ChatGPT platform to identify users who may be under 18 and automatically apply additional safety protections. The move is part of the company’s broader efforts to ensure that younger users have a safer experience on the artificial intelligence chatbot as concerns about online safety and AI interactions among teens continue to grow.

The age prediction feature is being introduced on ChatGPT consumer plans and uses a combination of behavioural and account-level signals to estimate whether a user is likely under the age of 18. These signals include how long an account has existed, typical usage patterns over time, when the user is active, and any declared age provided at sign-up. When the system predicts that an account may belong to someone under 18, ChatGPT automatically applies a set of safeguards designed to limit exposure to sensitive content.
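As a purely illustrative sketch of how such signal-based estimation might work (the signal names, weights, and threshold below are invented for this example and are not OpenAI's actual model), account-level signals could be combined into a coarse under-18 decision like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountSignals:
    declared_age: Optional[int]   # age provided at sign-up, if any
    account_age_days: int         # how long the account has existed
    school_hours_ratio: float     # share of activity on weekday daytimes (0.0-1.0)

def apply_teen_safeguards(signals: AccountSignals) -> bool:
    """Return True if teen protections should be applied (illustrative only)."""
    # In this sketch, a declared age is treated as a direct signal.
    if signals.declared_age is not None:
        return signals.declared_age < 18
    # With no declared age, combine behavioural heuristics into a score.
    score = 0.0
    if signals.account_age_days < 180:
        score += 0.5                          # newer accounts: assumed younger skew
    score += signals.school_hours_ratio * 0.5  # invented weighting
    # Borderline estimates default to the safer experience.
    return score >= 0.5

# Example: no declared age on a new account -> safeguards applied
print(apply_teen_safeguards(AccountSignals(None, 30, 0.2)))  # True
```

A real system would be trained on far richer signals; the point here is only the shape of the decision, not the specific heuristics.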

OpenAI said that the decision to introduce age prediction reflects its ongoing commitment to protecting young users while allowing adults to access the full capabilities of the platform. According to the company, these protections are intended to offer a more restricted experience tailored to the needs and safety requirements of teens. Adult users who are mistakenly identified as minors can restore full access by verifying their age through a selfie-based process using a third-party identity verification service called Persona. This age verification can be initiated in the ChatGPT settings.

The safeguards that apply when an account is flagged as likely belonging to a minor focus on restricting access to content that could be harmful or inappropriate. This includes material involving graphic violence, sexual or violent role play, depictions of self-harm, viral challenges that could encourage risky behaviour, and content that promotes extreme beauty standards or unhealthy dieting. The aim is to create an environment that reduces the risk of negative experiences while still allowing younger users to benefit from the educational and creative aspects of ChatGPT.

OpenAI’s help centre documentation notes that age prediction is designed to default to a safer experience if signals are unclear or incomplete. In such cases, the model will err on the side of caution and activate teen-focused protections rather than risk exposing a potentially younger user to sensitive content. If an adult user prefers not to have age prediction applied, they can complete the age verification process at any time to prevent further prediction checks.
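The default-to-safe behaviour described above can be sketched as a simple guard (the confidence cut-off and threshold are invented for this illustration, not published values):

```python
MIN_CONFIDENCE = 0.6   # hypothetical cut-off for "signals are clear enough"
SCORE_THRESHOLD = 0.5  # hypothetical decision boundary for "likely minor"

def teen_protections_active(confidence: float, minor_score: float) -> bool:
    """Default to the safer teen experience when signals are unclear."""
    if confidence < MIN_CONFIDENCE:
        # Incomplete or ambiguous signals: err on the side of caution.
        return True
    return minor_score >= SCORE_THRESHOLD

# Ambiguous signals trigger protections even when the minor score is low.
print(teen_protections_active(0.3, 0.1))  # True
```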

In addition to the automated age prediction system, OpenAI offers parental controls that allow families to further customise the experience for teen users. These controls include setting quiet hours for usage, managing features such as memory and model training, and receiving notifications if the system detects signs of acute distress. Parents can link their account to a teen’s account to help manage how ChatGPT responds and interacts.
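For illustration only, a linked teen account's parental-control settings might take a shape like the following; the keys and values are invented to mirror the controls described above, not OpenAI's actual configuration format:

```python
# Hypothetical settings object for a parent-linked teen account.
parental_controls = {
    "quiet_hours": {"start": "21:00", "end": "07:00"},  # no usage overnight
    "memory_enabled": False,           # disable chat memory for the teen
    "model_training_opt_out": True,    # exclude conversations from training
    "distress_notifications": True,    # alert parents on signs of acute distress
}

print(sorted(parental_controls))
```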

The age prediction rollout is planned to expand to the European Union in the coming weeks as part of efforts to comply with regional regulatory requirements. OpenAI said it will monitor the implementation closely and refine the model over time to improve accuracy and reduce the likelihood of incorrect age categorisation.

Industry reaction to the introduction of age prediction and teen safeguards has been mixed. Some observers have welcomed the move as a proactive step toward addressing safety concerns associated with AI chatbots. Others have noted that automated age detection systems can raise privacy questions and may not always classify users accurately. Critics argue that any system that infers age from usage patterns must strike a careful balance between protecting minors and preserving the rights of adult users.

OpenAI’s move to bolster teen protections through age prediction comes amid wider industry discussion about how technology companies address youth safety. Various platforms, including major social media and content providers, have introduced age-related controls and content restrictions to respond to concerns about inappropriate content exposure among minors. OpenAI’s age prediction work builds on earlier efforts to refine and expand safety features in its AI systems, including content moderation measures developed in consultation with experts and advocacy groups.

To address potential inaccuracies, users identified as likely minors can opt to confirm their age and regain full access to ChatGPT’s capabilities if they are 18 or older. Persona handles the verification process securely, with requirements that may include a live selfie or government-issued identification depending on country regulations. OpenAI does not receive or store the underlying biometric data itself, only the verified age information, which is stored securely in accordance with privacy policies.

OpenAI said that the age prediction system is not perfect and will continue to improve over time as the company learns which signals are most effective in predicting age. It has emphasised that it prioritises safety for younger users and that the system will be regularly updated as part of its ongoing commitment to responsible AI deployment.

The introduction of age prediction on ChatGPT underscores the growing focus on ensuring that artificial intelligence systems are accessible and safe for users across different age groups. As AI tools become increasingly integrated into educational, professional, and personal settings, companies are being encouraged by regulators and advocacy organisations to build features that support the well-being of younger users without limiting the utility of the technology for adult users.