Thursday, April 16, 2026

ChatGPT receives security rules to protect teenagers and promote human relationships with virtual friends

OpenAI has just updated its “Model Spec” – essentially the rulebook for its AI – with a specific set of principles for under-18s (U18s), designed to change the way ChatGPT speaks to teenagers aged 13 to 17. The move is a clear admission that teenagers are not just “mini-adults”; they have different emotional and developmental needs that require stronger guardrails, especially when conversations become intense or risky.

A new framework for adolescent AI interactions

This update lays out exactly how ChatGPT will handle teen users while adhering to the general rules that apply to everyone else. According to OpenAI, it’s about creating an experience that feels safer and more age-appropriate, with a focus on prevention and transparency.

These are not arbitrary rules; the U18 Principles are based on developmental science and have been reviewed by external experts, including the American Psychological Association.

The framework is based on four key promises: putting teens’ safety above all else (even if that makes AI less “helpful” at the moment), pushing teens toward real-world support instead of relying on a chatbot, treating them like real teens rather than little children or full-grown adults, and being honest about the limitations of AI.

These principles outline how ChatGPT will proceed with particular caution when discussing topics such as self-harm, sexual role play, dangerous challenges, substance use, body image issues, or requests to keep secrets about unsafe behavior.

What this means for families and what comes next

This is important because AI is quickly becoming a standard tool for how young people learn and find answers. Without clear boundaries, there is a real risk that teenagers will turn to AI at moments when they actually need a parent, a doctor or a counselor.

OpenAI claims that these new rules ensure the assistant offers safer alternatives, sets strict boundaries, and prompts the teen to find a trusted adult if a chat veers into dangerous territory. When things look like an immediate emergency, the system is designed to direct them to crisis hotlines or emergency services.

This offers a little more security for parents. OpenAI links these new principles to its Teen Safety Blueprint and existing parental controls. Protection will also be extended to newer features like group chats, the ChatGPT Atlas browser and the Sora app, along with built-in reminders to take a break so kids aren’t stuck in front of the screen.

Looking ahead, OpenAI is starting to roll out an age prediction tool for ChatGPT personal accounts. This system attempts to guess whether a user is underage and automatically turns on these parental controls.

If the system cannot confidently confirm a user’s age, it will default to the safer U18 experience just in case. The company says this is not a “one and done” solution; it plans to further refine these protections based on new research and feedback, making it clear that keeping teens safe will be a long-term project.
