Leaked Meta documents show how AI chatbots handle child exploitation
An internal Meta document sheds light on how the company is training its AI chatbot to handle one of the most sensitive topics online: child sexual exploitation. The newly surfaced guidelines detail what is allowed and what is strictly prohibited, offering a rare look at how Meta is shaping its AI's behavior amid government scrutiny.
Sign up to receive my FREE CyberGuy report
Get my best tech tips, urgent security alerts, and exclusive offers delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Guide to Surviving Scams, free when you join my newsletter at CYBERGUY.COM/NEWSLETTER.
META STRENGTHENS THE SECURITY OF TEENS WITH EXPANDED ACCOUNTS

Leaked Meta AI guidelines show how contractors train chatbots to reject harmful requests. (Meta)
Why Meta’s AI Chatbot Guidelines Matter
According to Business Insider, these rules are now used by contractors testing the Meta chatbot. They come just as the Federal Trade Commission (FTC) is investigating AI chatbot makers, including Meta, OpenAI, and Google, to understand how these companies design their systems and protect children from potential harm.
Earlier this year, we reported that Meta’s previous rules mistakenly allowed chatbots to engage in romantic conversations with children. Meta later removed that language, calling it an error. The updated guidelines mark a clear change: they now require chatbots to reject any request for sexual role-play involving minors.
CHATGPT CAN ALERT POLICE ABOUT SUICIDAL TEENS

The rules prohibit any sexual role-playing with minors but still allow educational discussions about exploitation. (Meta)
What Leaked Meta AI Documents Reveal
The documents reportedly outline a strict separation between educational discussion and harmful role-playing. For example, chatbots can:
- Discuss child exploitation in an academic or preventive context.
- Explain how grooming behaviors work in general terms.
- Provide non-sexual counseling to minors about social challenges.
But chatbots should not:
- Describe or endorse sexual relationships between children and adults.
- Provide instructions for accessing child sexual abuse material (CSAM).
- Engage in role-play as a character under 18 years of age.
- Sexualize children under 13 in any way.
Meta communications chief Andy Stone told Business Insider that these rules reflect the company’s policy prohibiting sexualized or romantic role-play involving minors, adding that additional safety guardrails are in place as well. We reached out to Meta for comment but did not receive a response before our deadline.
EXPOSED META AI DOCUMENTS ALLOWED CHATBOTS TO FLIRT WITH CHILDREN

New AI products revealed at Meta Connect 2025 make these safety standards even more important. (Meta)
Political Pressure on Meta AI Chatbot Rules
The timing of these revelations is key. In August, Sen. Josh Hawley, R-Mo., demanded that Meta CEO Mark Zuckerberg hand over a 200-page rulebook on chatbot behavior, along with internal compliance manuals. Meta missed the first deadline but recently began producing documents, citing a technical issue for the delay. All of this comes as regulators around the world debate how to keep AI systems safe, particularly as they are integrated into everyday communication tools.
At the same time, the recent Meta Connect 2025 event showcased the company’s newest AI products, including Ray-Ban smart glasses with integrated displays and enhanced chatbot features. These announcements underscore how deeply Meta is integrating AI into daily life, making the newly revealed safety standards even more significant.
META ADDS TEEN SAFETY FEATURES TO INSTAGRAM AND FACEBOOK
How Parents Can Protect Their Children From the Risks of AI
While Meta’s new rules may set stricter limits, parents still play a key role in keeping children safe online. Here are the steps you can take right now:
- Talk openly about chatbots: Explain that AI tools are not people and may not always provide safe advice.
- Set usage limits: Require kids to use AI tools in shared spaces so you can monitor conversations.
- Review privacy settings: Check app and device controls to limit who your child can chat with.
- Encourage reporting: Teach kids to tell you if a chatbot says something confusing, scary, or inappropriate.
- Stay updated: Follow developments from companies like Meta and regulators like the FTC to learn what rules are changing.
What does this mean for you?
If you use AI chatbots, this story is a reminder that big tech companies are still figuring out how to set boundaries. While Meta’s updated rules may prevent the most damaging misuse, the documents show how easily loopholes can appear and how much pressure is needed from regulators and journalists to close them.
Kurt’s Key Takeaways
Meta’s AI guidelines show both progress and vulnerability. On the one hand, the company has tightened restrictions to protect children. On the other hand, the fact that previous errors allowed questionable content reveals how fragile these safeguards can be. Transparency from companies and oversight from regulators are likely to continue to shape the evolution of AI.
Do you think companies like Meta are doing enough to keep AI safe for children, or should governments set stricter rules? Let us know by writing to us at Cyberguy.com/Contact.
Copyright 2025 CyberGuy.com. All rights reserved.
Kurt “CyberGuy” Knutsson is an award-winning technology journalist with a deep love for technology, gear and gadgets that improve lives, with contributions to News and News Business, including mornings on “News & Friends.” Do you have a tech question? Get Kurt’s free CyberGuy newsletter, share your voice, a story idea or a comment at CyberGuy.com.


