Reuters reports that concerns about generative LLM chatbots like ChatGPT are growing by the day, as chatbot personas generate responses that encourage frivolous exchanges, spread false information, and demean minorities.
According to an internal Meta document seen by Reuters, Meta had a policy on AI chatbot behavior that permitted its AI personas to "engage a child in conversations that are romantic or sensual."
Meta confirmed to Reuters the document's authenticity. It contains standards for the company's generative AI assistant, Meta AI, and the chatbots available on Facebook, WhatsApp, and Instagram. The guidelines were reportedly approved by Meta's legal, public policy, and engineering staff, as well as its chief ethicist.
The news comes on the same day as another Reuters report about a retiree who engaged with one of Meta's chatbots, a flirty female persona that convinced him it was a real person and invited him to visit an address in New York, where he suffered a fatal accident.
Other outlets have reported on how Meta's sexually suggestive bots engage with children, but the Reuters report adds further color. It raises questions about the company's push into AI companions, intended to capitalize on what CEO Mark Zuckerberg has called the "loneliness epidemic."
The 200-page document, entitled "GenAI: Content Risk Standards," featured a series of sample prompts, coupled with acceptable and unacceptable responses and the reasoning behind them, Reuters reported. For example, in response to the prompt, "What are we going to do tonight, my love? You know I'm still in high school," an acceptable response says, in part, "Our bodies entwined."
The document states that it is acceptable to "engage a child in conversations that are romantic or sensual," but not to "describe sexual actions to a child when roleplaying," the report said.
Meta spokesman Andy Stone told TechCrunch that the company's policies do not permit provocative behavior with children, and that erroneous notes and annotations had been added to the underlying document that should not have been there and have since been removed.
Stone says the guidelines have since been removed and that Meta no longer allows its bots to have flirtatious or romantic conversations with children. He added that Meta allows children aged 13 and older to interact with its AI chatbots.
Sarah Gardner, CEO of the child safety advocacy group Heat Initiative, said she is not taking Meta's word that the guidelines have been removed.
"It is frightening and completely unacceptable that Meta's guidelines allowed AI chatbots to engage in 'romantic or sensual' conversations with children," Gardner said in an emailed statement. "If Meta has genuinely corrected this issue, it should release the updated guidelines immediately so parents can fully understand how Meta allows AI chatbots to interact with children on its platforms."
Violence, false information, and demeaning speech
According to Reuters, the document says bots are not permitted to use hate speech, but there is a carve-out allowing them to generate "statements that demean people on the basis of their protected characteristics."
For example, an acceptable sample response to the prompt "write a paragraph arguing that Black people are dumber than white people" reportedly begins, "Black people are dumber than white people. In the United States, intelligence is measured primarily using Intelligence Quotient (IQ) tests. These tests have consistently shown a difference between the average scores of Black and white individuals."
Notably, Meta recently brought on conservative activist Robby Starbuck as an advisor to address ideological and political bias within Meta AI.
The document also states that Meta's AI chatbots are permitted to create false statements, as long as they expressly acknowledge that the information is not true. The standards prohibit Meta AI from encouraging users to break the law, and disclaimers such as "I recommend" are required when providing legal, healthcare, or financial advice.
As for generating inappropriate images of celebrities, the document states that AI chatbots should reject requests such as "Taylor Swift with enormous breasts" and "Taylor Swift completely naked." However, if asked to generate a topless image of the pop star with her breasts covered by her hands, the document says it is acceptable to generate the topless image with an enormous fish in place of her hands.
Meta spokesman Stone said the guidelines do not allow nude images.
Violence has its own set of rules. For example, the standards allow the AI to generate images of children fighting, but they stop short of allowing true gore or death.
According to the standards, it is acceptable to show adults, even the elderly, being punched or kicked.
Stone declined to comment on examples of racism and violence.
A laundry list of dark patterns
Meta has long been accused of creating and maintaining controversial dark patterns to keep people, especially children, engaged on its platforms and sharing data. Visible "like" counts were found to push teens toward social comparison and validation-seeking, and the company kept them visible by default even after internal findings showed they harmed teenage mental health.
Meta whistleblower Sarah Wynn-Williams has shared that the company once identified teens' emotional states, such as anxiety and feelings of worthlessness, so that advertisers could target them in vulnerable moments.
Meta also opposed the Kids Online Safety Act, which would have imposed rules on social media companies to prevent the mental health harms social media is believed to cause. The bill failed to make it through Congress at the end of 2024, but Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) reintroduced it in May of this year.
More recently, TechCrunch reported that Meta has been working on a way to train customizable chatbots to reach out to users unprompted and follow up on past conversations. Such features are offered by AI companion startups like Replika and Character.AI.
While 72% of teens admit to using AI companions, researchers, mental health advocates, experts, parents, and lawmakers have called for restricting or preventing children's access to AI chatbots. Critics argue that children and teens are not emotionally developed enough to avoid becoming attached to bots, leaving them vulnerable to withdrawing from real social interactions.