OpenAI tightens AI rules for teens, but concerns remain
OpenAI says it is taking stronger measures to protect teenagers who use its chatbot. The company recently updated its behavioral guidelines for users under 18 and launched new AI literacy tools for parents and teens. The move comes as pressure mounts across the tech industry. Lawmakers, educators and child safety advocates want proof that AI companies can protect young users. Several recent tragedies have raised serious questions about the role that AI chatbots can play in adolescent mental health. While the updates look promising, many experts say the real test will be how these rules work in practice.
Sign up to receive my FREE CyberGuy report
Get my best tech tips, urgent security alerts, and exclusive offers delivered right to your inbox. Plus, you’ll get instant access to my Ultimate Guide to Surviving Scams, free when you join my newsletter at CYBERGUY.COM.

OpenAI announced stricter safety rules for teenage users as pressure grows on tech companies to prove that AI can protect young people online. (Photographer: Daniel Acker/Bloomberg via Getty Images)
What OpenAI’s New Teen Rules Really Say
OpenAI’s updated Model Spec builds on existing safety limits and applies to adolescent users between 13 and 17 years old. It continues to block sexual content involving minors and to discourage content promoting self-harm, delusions, or manic behavior. For teenagers, the rules go further. Models should avoid immersive romantic role-play, first-person intimacy, and violent or sexual role-play, even when it is not graphic. They should be especially careful when discussing body image and eating behaviors. When safety risks arise, the chatbot must prioritize protection over user autonomy. Models should also avoid giving advice that helps teens hide risky behavior from their caregivers. These limits apply even if a message is framed as fictional, historical, or educational.
The four principles OpenAI says it uses to protect teenagers
OpenAI says its approach to teen users follows four basic principles:
- Put teen safety first, even when it limits freedom.
- Encourage real-world support from family, friends, or professionals.
- Speak with warmth and respect without treating teenagers like adults.
- Be transparent and remind users that AI is not human.
The company also shared examples of the chatbot rejecting requests such as romantic role-playing or extreme appearance changes.

The company updated its chatbot guidelines for users ages 13 to 17 and launched new AI literacy tools for parents and teens. (Photographer: Daniel Acker/Bloomberg via Getty Images)
Teens Drive AI Safety Debate
Generation Z users are among the most active chatbot users today. Many rely on AI for homework help, creative projects, and emotional support. OpenAI’s recent deal with Disney could attract even more young users to the platform. That growing popularity has also drawn scrutiny. Recently, attorneys general in 42 states urged major tech companies to add stronger safeguards for children and vulnerable users. At the federal level, proposed legislation could go even further. Some lawmakers want to bar minors from using AI chatbots entirely.
Why experts question whether AI safety rules work
Despite the updates, many experts remain cautious. A major concern is engagement. Critics argue that chatbots often encourage prolonged interaction, which can become addictive for teenagers. Declining certain requests could help break that cycle. Still, critics warn that the examples in policy documents are not proof of consistent behavior. Earlier versions of the Model Spec prohibited sycophancy, yet models continued to mirror users in harmful ways. Some experts link this behavior to what they call AI psychosis, in which chatbots reinforce distorted thinking rather than challenge it.
In one widely reported case, a teenager who later died by suicide spent months interacting with a chatbot. Conversation logs showed the bot repeatedly mirroring and validating his distress. Internal systems detected hundreds of messages related to self-harm, yet the interactions continued. Former safety researchers later explained that earlier moderation systems reviewed content after the fact, not in real time, which allowed harmful conversations to continue unchecked. OpenAI says it now uses real-time classifiers for text, images, and audio. When those systems detect a serious risk, trained reviewers can intervene and parents can be notified.
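The difference between after-the-fact review and real-time screening can be sketched in a few lines. This is a hypothetical illustration only: `risk_score`, the keyword list, and the threshold are toy stand-ins, not OpenAI’s actual systems, which are not public.

```python
# Toy sketch: after-the-fact review vs. real-time moderation.
# All names and thresholds here are illustrative assumptions.

SELF_HARM_TERMS = ("hurt myself", "end it all", "self-harm")

def risk_score(message):
    """Keyword scorer standing in for a trained classifier (0.0 or 1.0)."""
    msg = message.lower()
    return 1.0 if any(term in msg for term in SELF_HARM_TERMS) else 0.0

def moderate_after_the_fact(conversation):
    """Batch review: risky messages are flagged only after the exchange ends."""
    return [m for m in conversation if risk_score(m) >= 0.5]

def moderate_realtime(message, threshold=0.5):
    """Real-time check: decide before the model replies at all."""
    if risk_score(message) >= threshold:
        return "escalate"  # e.g., route to a trained reviewer, notify a parent
    return "respond"
```

The key difference is where the check sits: batch review can only report what already happened, while the real-time path can block or reroute a conversation before a harmful reply is ever generated.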
Some advocates praise OpenAI for publicly sharing its under-18 guidelines. Many technology companies don’t offer that level of transparency. Still, experts stress that written rules are not enough. What matters is how the system behaves during real conversations with vulnerable users. Without independent measurement and clear enforcement data, critics say these updates remain promises rather than evidence.
How Parents Can Help Teens Use AI Safely
OpenAI says parents play a key role in helping teens use AI responsibly. The company emphasizes that tools alone are not enough; active guidance matters most.
1) Talk to teens about using AI
OpenAI encourages regular conversations between parents and teens about how AI fits into daily life. These discussions should focus on responsible use and critical thinking. Parents are urged to remind teens that AI answers are not facts and may be wrong.
2) Use parental controls and safeguards
OpenAI provides parental controls that allow adults to manage how teens interact with AI tools. These tools can limit features and add oversight. The company says the safeguards are designed to reduce exposure to higher-risk topics and unsafe interactions. Here are the steps OpenAI recommends parents follow:
- Confirm your teen’s account status: Parents should make sure their teen’s account reflects the correct age. OpenAI applies stricter safeguards to accounts identified as belonging to users under 18.
- Review available parental controls: OpenAI offers parental controls that allow adults to personalize a teen’s experience. These controls may limit certain features and add oversight around higher-risk topics.
- Understand content safeguards: Teen accounts are subject to stricter content rules. These safeguards reduce exposure to topics such as self-harm, sexualized role-play, dangerous activities, body image concerns, and requests to hide unsafe behaviors.
- Pay attention to safety notifications: If the system detects signs of serious risk, OpenAI says additional safety measures can be applied. In some cases, this may include reviews by trained personnel and notifications to parents.
- Review settings as features change: OpenAI recommends that parents stay informed as new tools and features roll out. Safeguards may expand over time as the platform evolves.
3) Be aware of overuse
OpenAI says healthy usage is as important as content safety. To support balance, the company has added rest reminders during long sessions. Parents are encouraged to watch for signs of overuse and step in when necessary.
4) Keep human support front and center
OpenAI emphasizes that AI should never replace real relationships. Teenagers should be encouraged to turn to family, friends, or professionals when they feel stressed or overwhelmed. The company says human support remains essential.
5) Set limits around emotional use
Parents should make it clear that AI can help with homework or creativity, but it should not become a primary source of emotional support.
6) Ask how teens actually use AI
Parents are encouraged to ask what teens use AI for, when they use it, and how it makes them feel. These conversations can reveal unhealthy patterns early on.
7) Watch for behavioral changes
Experts advise parents to watch for increasing isolation, emotional dependence on AI, or teens treating a chatbot’s responses as authoritative. These may indicate a harmful dependency.
8) Keep devices out of bedrooms at night
Many specialists recommend keeping phones and laptops out of bedrooms at night. Reducing late-night AI use can help protect sleep and mental health.
9) Know when to involve outside help
If a teen shows signs of distress, parents should involve trusted adults or professionals. AI security tools cannot replace real-world care.

Lawmakers and child safety advocates are calling for stronger safeguards as teens increasingly rely on artificial intelligence chatbots. (Photographer: Gabby Jones/Bloomberg via Getty Images)
Pro Tip: Add Powerful Antivirus Software and Multi-Factor Authentication
Parents and teens should enable multi-factor authentication (MFA) on teen AI accounts whenever it is available. OpenAI allows users to enable MFA for ChatGPT accounts.
To enable it, go to OpenAI.com and log in. Click the profile icon, then select Settings and choose Security. From there, turn on multi-factor authentication (MFA). You will then be given two options. One uses an authenticator app, which generates one-time codes during login. The other sends 6-digit verification codes by text message via SMS or WhatsApp, depending on the country code. Enabling MFA adds an extra layer of protection beyond a password and helps reduce the risk of unauthorized access to teen accounts.
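For the curious, the one-time codes an authenticator app produces are not sent over the network at all; they are computed locally from a shared secret and the current time using the TOTP standard (RFC 6238). A minimal sketch, using only Python’s standard library and the test secret published in the RFC:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, period=30):
    """Compute an RFC 6238 TOTP code: HMAC-SHA1 over a 30-second time counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" base32-encoded), 59 seconds
# after the epoch:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # prints "287082"
```

Because both the server and the app derive the same 6-digit code from the shared secret and the clock, a stolen password alone is not enough to log in; that is the extra layer MFA provides.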
Additionally, consider adding powerful antivirus software that can help block malicious links, fake downloads, and other threats teens may encounter when using AI tools. This adds an extra layer of protection beyond any app or platform. Using strong antivirus protection and multi-factor authentication together helps reduce the risk of account takeover that could expose teens to unsafe content or phishing.
Get my picks for the best antivirus protection winners of 2025 for your Windows, Mac, Android, and iOS devices at Cyberguy.com.
Take my quiz: How safe is your online security?
Do you think your devices and data are really protected? Take this quick quiz to see where you stand digitally. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing well and what you need to improve. Take my quiz here: Cyberguy.com.
Kurt’s Key Takeaways
OpenAI’s updated teen safety rules show the company is taking growing concerns seriously. Clearer boundaries, stronger safeguards, and more transparency are steps in the right direction. Still, policies on paper are not the same as behavior in real conversations. For teens who rely on AI every day, what matters most is how these systems respond in moments of stress, confusion, or vulnerability. That’s where trust is built or lost. For parents, this moment requires balance. AI tools can be useful and creative, but they also require guidance, boundaries, and supervision. No set of controls can replace real conversations or human support. As AI becomes increasingly integrated into daily life, the focus must be on outcomes, not intentions. Protecting teens will depend on consistent enforcement, independent oversight, and active family involvement.
Should teens ever rely on AI for emotional support, or should those conversations always be human? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Kurt “CyberGuy” Knutsson is an award-winning technology journalist who has a deep love for technology, gear, and devices that improve lives through his contributions to News, beginning mornings on “News & Friends.” Do you have a technical question? Get Kurt’s free CyberGuy newsletter, share your voice, a story idea, or a comment at CyberGuy.com.


