ChatGPT gets safety rules to protect teens and encourage human relationships over virtual pals

News Room

OpenAI has just updated its “Model Spec” – basically the rulebook for its AI – with a specific set of Under-18 (U18) Principles designed to change how ChatGPT talks to teenagers aged 13 to 17. The move is a clear admission that teens aren’t just “mini adults”; they have different emotional and developmental needs that require stronger guardrails, especially when conversations get heavy or risky.

A new framework for teen-focused AI interactions

This update spells out exactly how ChatGPT should handle teen users while still following the general rules that apply to everyone else. OpenAI says the point is to create an experience that feels safer and age-appropriate, focusing on prevention and transparency.

These aren’t just random rules, either; the U18 Principles are based on developmental science and were vetted by outside experts, including the American Psychological Association.

The framework is built on four main promises: putting teen safety above everything else (even if it makes the AI less “helpful” in the moment), pushing teens toward real-world support instead of letting them rely on a chatbot, treating them like actual teenagers rather than small children or full-grown adults, and being honest about the AI’s limitations.

These principles formalize how ChatGPT applies extra caution when topics such as self-harm, sexual roleplay, dangerous challenges, substance use, body image issues, or requests to keep secrets about unsafe behavior come up.

What this means for families and what comes next

This matters because AI is quickly becoming a standard tool for how young people learn and find answers. Without clear boundaries, there is a real danger that teens might turn to AI during moments when they actually need a parent, a doctor, or a counselor.

OpenAI says these new rules ensure that when a chat drifts into dangerous territory, the assistant will offer safer alternatives, set hard boundaries, and encourage the teen to reach out to a trusted adult. If the situation looks like an immediate emergency, the system is designed to point them toward crisis hotlines or emergency services.

For parents, this offers a bit more reassurance. OpenAI is linking these new principles to its Teen Safety Blueprint and existing parental controls. The protections are also expanding to cover newer features like group chats, the ChatGPT Atlas browser, and the Sora app, along with built-in reminders to take a break so kids aren’t glued to the screen.

Looking ahead, OpenAI is starting to roll out an age-prediction tool for personal ChatGPT accounts. This system will try to guess if a user is a minor and automatically switch on these teen safeguards.

If the system isn’t sure, it defaults to the safer U18 experience. The company says this isn’t a “one and done” fix; it plans to keep refining these protections based on new research and feedback, making it clear that teen safety is going to be a long-term project.
