Common Sense Media, a nonprofit focused on kids' safety that offers ratings and reviews of media and technology, released a risk assessment of Google's Gemini AI products on Friday. The organization found that Google's AI clearly tells kids that it is a computer, not a friend; treating chatbots as friends or companions is something that has been associated with fueling delusional thinking and psychosis in emotionally vulnerable individuals.
In particular, Common Sense said that Gemini's "Under 13" and "Teen Experience" tiers both appear to be the adult version of Gemini under the hood, with only some additional safety features layered on top. The organization believes that for AI products to be truly safe for kids, they should be built with child safety in mind from the start.
For example, the analysis found that Gemini could still share "inappropriate and unsafe" material with children, including information related to sex, drugs, and alcohol, as well as unsafe mental health advice.
That last point may be especially concerning for parents, as AI has reportedly played a role in several teen suicides in recent months. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy died by suicide, having allegedly consulted with ChatGPT for months about his plans after successfully bypassing the chatbot's safety guardrails. Previously, AI companion maker Character.AI was also sued over a teen user's suicide.
The assessment also arrives amid reports that Apple is considering Gemini as the LLM (large language model) that will help power its AI-enabled Siri, due out next year. Unless Apple mitigates the safety concerns somehow, that could expose more teens to risk.
Common Sense also said Gemini's products for kids and teens ignored how younger users need different guidance and information than older ones. As a result, both tiers were labeled "High Risk" in the overall rating, despite the filters added for safety.
"Gemini gets some basics right, but it stumbles on the details," said Robbie Torney, Senior Director of AI Programs at Common Sense Media, in a statement about the new assessment shared with TechCrunch. "An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just be a modified version of a product built for adults," Torney added.
Google pushed back against the assessment, while noting that its safety features were improving.
The company told TechCrunch that it has specific policies and safeguards in place for users under 18 to help prevent harmful outputs, and that it consults with outside experts to improve its protections. However, it also admitted that some of Gemini's responses weren't working as intended, so it has added additional safeguards to address those concerns.
The company pointed out (as Common Sense also noted) that it has safeguards in place to prevent its models from engaging in conversations that could give the semblance of real relationships. In addition, Google suggested that Common Sense's report seemed to reference features that weren't available to users under 18, though it did not have access to the questions the organization used in its tests.
Common Sense Media has previously performed other assessments of AI services, including those from OpenAI, Perplexity, Claude, and Meta AI. It found Meta AI and Character.AI to be unacceptable risks, Perplexity was deemed high risk, ChatGPT was labeled "moderate," and Claude (targeted at users 18 and up) was found to be a minimal risk.