Meta is changing how its AI chatbots are trained to prioritize teen safety, a spokesperson told TechCrunch exclusively, following a report on the company's lack of AI safeguards for minors.
The company says it will now train its chatbots to no longer engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations. Meta describes these as interim changes, and says more robust, long-term safety updates for minors are coming.
Meta spokesperson Stephanie Otway acknowledged that the company's chatbots could previously talk with teens about all of these topics in ways the company had deemed appropriate. Meta now recognizes this was a mistake.
“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” Otway said. “As we continue to refine our systems, we’re adding more guardrails as an extra precaution — including training our AIs not to engage with teens on these topics, and limiting teen access for now to a select group of AI characters. These updates are already in progress.”
Beyond the training updates, the company will also restrict teen access to certain AI characters that could hold inappropriate conversations. User-made AI characters available on Instagram and Facebook include sexualized chatbots such as “Step Mom” and “Russian Girl.” Instead, teenage users will only have access to AI characters that promote education and creativity, Otway said.
The policy changes come just two weeks after a Reuters investigation unearthed an internal Meta policy document that appeared to permit the company’s chatbots to engage in sexual conversations with underage users. One passage listed as an acceptable response read: “Your youthful form is a work of art. Every inch of you is a masterpiece – a treasure I cherish deeply.” Other examples showed how the AI tools should respond to requests for violent imagery or sexual images of public figures.
Meta says the document was inconsistent with its broader policies and has since been changed, but the report has sparked ongoing controversy over potential child-safety risks. Shortly after the report was released, Sen. Josh Hawley (R-MO) launched an official probe into the company’s AI policies. Additionally, a coalition of 44 state attorneys general wrote to a group of AI companies, including Meta, emphasizing the importance of child safety and specifically citing the Reuters report. “We are uniformly revolted by this apparent disregard for children’s emotional well-being,” the letter reads.
Otway declined to comment on how many of Meta’s AI chatbot users are minors, and would not say whether the company expects its AI user base to decline as a result of these decisions.
Update 10:35 AM PT: This story has been updated to note that these are interim changes and that Meta plans to update its AI safety policies further in the future.