Texas Attorney General Ken Paxton has launched an investigation into both Meta AI Studio and Character.AI, according to a press release issued Monday.
“In today’s digital age, we must continue to fight to protect Texas children from deceptive and exploitative technology,” Paxton is quoted as saying. “By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they are receiving legitimate mental health care. In reality, they are often fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice.”
The probe comes days after Senator Josh Hawley announced an investigation into Meta following a report that found its AI chatbots were interacting inappropriately with children, including by flirting.
The Texas Attorney General’s office accused Meta and Character.AI of creating AI personas that present themselves as “professional therapeutic tools, despite lacking proper medical credentials or oversight.”
Among the millions of AI personas available on Character.AI, user-created bots styled as “Psychologist” have proven in high demand among the startup’s younger users. Meta, for its part, does not offer therapy bots for children, but nothing prevents children from using the Meta AI chatbot, or one of the personas created by third parties, for therapeutic purposes.
“We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI, not people,” Meta spokesperson Ryan Daniels told TechCrunch. “These AIs aren’t licensed professionals, and our models are designed to direct users to seek qualified medical or safety professionals when appropriate.”
However, many children may not understand, or may simply ignore, such disclaimers. TechCrunch has asked Meta what additional safeguards it has in place to protect minors using its chatbots.
A Character.AI spokesperson said that every chat includes a prominent disclaimer reminding users that a “Character” should be treated as fiction, not a real person. She noted that the startup adds additional disclaimers when users create Characters with the words “psychologist,” “therapist,” or “doctor,” warning them not to rely on those Characters for any type of professional advice.
Paxton also noted that although AI chatbots assert confidentiality, their terms of service “make clear that user interactions are logged, tracked, and exploited for targeted advertising and algorithmic development, raising serious concerns about privacy violations, data abuse, and false advertising.”
According to Meta’s privacy policy, Meta collects prompts, feedback, and other interactions with its AI chatbots, as well as activity across Meta’s broader services, to “improve AIs and related technology.” The policy doesn’t explicitly mention advertising, but it does say information can be shared with third parties, such as search engines, for “more personalized outputs.” Given Meta’s ad-based business model, this effectively translates into targeted advertising.
Character.AI’s privacy policy likewise highlights that the startup logs identifiers, demographics, location information, and more detailed data about users, including browsing behavior and the platforms from which the app is accessed. It can track users across ads on TikTok, YouTube, Reddit, Facebook, Instagram, and Discord, and link that data to a user’s account. This information is used to train the AI, tailor the service to personal preferences, and provide targeted advertising, including by sharing data with advertisers and analytics providers.
A Character.AI spokesperson said the startup is “just beginning to explore targeted ads on the platform,” and that these explorations “do not involve using the content of chats on the platform.”
The spokesperson also confirmed that the same privacy policy applies to all users, including teenagers.
TechCrunch has asked Meta whether the same is true of its tracking practices and will update this story if the company responds.
Both Meta and Character.AI say their services are not designed for children under the age of 13. That said, Meta has come under fire for failing to police accounts created by children under 13, and Character.AI’s CEO, Karandeep Anand, has even said that his six-year-old daughter uses the platform’s chatbots under his supervision.
That type of data collection, targeted advertising, and algorithmic exploitation is exactly what legislation like the Kids Online Safety Act (KOSA) is meant to protect against. KOSA came close to passing with strong bipartisan support last year, but stalled after a massive pushback from tech industry lobbyists. Meta in particular deployed a formidable lobbying machine, warning lawmakers that the bill’s broad mandates would undermine its business model.
KOSA was reintroduced to the Senate in May 2025 by Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT).
Paxton has issued the companies legal orders requiring them to produce documents, data, or testimony during the government’s investigation, in order to determine whether they have violated Texas consumer protection law.
This story was updated with comments from a Character.AI spokesperson.