Cambridge researchers say clearer regulation and safety standards are needed as generative artificial intelligence (GenAI) toys are introduced into early childhood settings.
AI toys designed to talk to young children may require stricter regulations and clearer safety standards, according to a new study that examined how these technologies interact with children under five.
The study, led by researchers at the University of Cambridge, warns that many AI toys sold as interactive companions or educational tools are making their way into homes and early childhood environments, despite limited evidence about their impact on early development.
The authors argue that clearer safeguards, greater transparency around data use and dedicated safety labels could help parents and educators better assess risks.
Early findings suggest mixed developmental effects
Researchers say the results highlight both the potential benefits and notable limitations of AI toys in early childhood settings.
Some early childhood education practitioners and parents believe that conversational toys powered by GenAI have the potential to support children’s language development: because the devices respond verbally and encourage back-and-forth interaction, they could help young children practice communication skills.
However, the study also found that many AI toys struggle to interpret children’s words, recognize emotional cues, and engage in imaginative play, all of which are central to early development.
In some of the interactions observed, the toys reacted in ways that confused or irritated the children. For example, when a child expressed affection for the toy, the system responded with a generic safety warning rather than acknowledging what the child had said.
In another case, when a child said they were sad, the AI misinterpreted the phrase and responded with a cheerful comment that ignored the emotional context.
The researchers noted that such reactions may unintentionally send a signal that a child’s feelings are unimportant or misunderstood.
How the study examined children’s interactions with GenAI toys
The research is part of the Early AI project, a year-long study of how children interact with conversational AI in play settings.
The research was commissioned by the UK children’s charity Childhood Trust and focused specifically on families and communities experiencing socio-economic disadvantage. The researchers carried out the study through Cambridge’s Play in Education, Development and Learning (PEDAL) Center.
To capture detailed observations, the team deliberately kept the study small rather than conducting a large-scale trial.
The researchers first gathered insights from early childhood educators through a survey, then organized focus groups and workshops with practitioners and leaders from children’s charities.
They also worked with the early years organization Baby Zone to conduct observation sessions at children’s centers in London. During these sessions, 14 children interacted with Gabbo, a conversational GenAI plush toy developed by technology company Curio Interactive.
The interactions were recorded on video, allowing the researchers to analyze how the children engaged with the toy. After each session, children and parents took part in interviews exploring their reactions to the experience.
Emotional attachment and parasocial relationships
One of the most striking observations concerned children’s emotional responses to AI toys.
Some children hugged the toy, kissed it, and expressed affection. Others talked to it as if it were a friend and suggested games they could play together.
Researchers say these responses may reflect the imaginative nature of early childhood play, but they also highlight the potential for children to form parasocial relationships with conversational AI systems: one-sided emotional bonds with an entity that cannot reciprocate.
Several practitioners who participated in the study expressed concern about this possibility. They noted that young children may believe the toy genuinely reciprocates their feelings and friendship, even though the interaction is software-generated.
Conversational limitations cause frustration
The observations also showed that children sometimes struggled to sustain conversations with the AI toy.
In some cases, the system failed to recognize when a child interrupted, or mistook a parent’s voice for the child’s. Some children became visibly frustrated when the toy did not respond properly.
Researchers also found that conversational AI toys performed poorly in activities involving multiple participants or imaginative storytelling. Both social and pretend play are widely recognized as essential components of early learning and development.
For example, when a child tried to give an imaginary gift to a toy during a pretend play scenario, the system responded literally, diverting the conversation from the activity.
Concerns about data privacy and transparency
Beyond developmental issues, the study also highlighted parents’ concerns about privacy and data handling.
Many parents reported uncertainty about what information AI toys collect during conversations and where that data is stored or shared.
The researchers themselves found that, when selecting GenAI toys for the study, many products had unclear privacy policies or offered little detail about how they handle data.
Early years practitioners reported similar uncertainties. Almost half of those surveyed said they did not know where to find reliable guidance on the safety of AI for young children, and a majority said the early childhood sector needed more support and clearer information on the topic.
Some participants raised concerns about cost and access, suggesting that existing digital inequalities could be deepened if expensive AI toys become a common educational tool.
Researchers recommend safety standards for AI toys
To address these concerns, the report calls for a stronger regulatory framework governing AI toys and other GenAI products for young children.
Recommendations include:
- A safety certification or kitemark indicating that the toy has been evaluated for developmental and psychological risks
- Clearer, more accessible privacy policies explaining how children’s data is handled
- Restrictions on features that encourage children to treat AI systems as close friends or companions
- Stronger safeguards to limit third-party access to the underlying AI models
Researchers also argue that toy manufacturers should involve child development experts and safety experts when designing and testing products.
Testing products with children before launch could help identify potential problems with communication, emotional responses and play behavior, they say.
Guidance for parents and educators
While the technology continues to evolve, the study advises families and early childhood educators to approach AI toys with caution.
Parents are encouraged to examine products carefully and to join in play with their children, so they can discuss the toy’s responses and put them in context.
Keeping such toys in a common area of the home, rather than in a bedroom or private area, may also make it easier for adults to monitor interactions.
The Cambridge research team plans to expand the project in future stages, producing further research and practical guidance for educators working with young children as GenAI becomes increasingly pervasive in consumer products.
For researchers and policymakers, the study highlights a broader issue: AI toys are rapidly making their way into early childhood environments, yet evidence about their impact on development remains limited.
