A new report from the US Public Interest Research Group (PIRG) Education Fund has raised concerns about the increasing use of artificial intelligence chatbots in children’s toys, warning that some of these systems may not be suitable for young users. According to the report, several AI-powered toys integrate chatbot technology capable of generating responses similar to those of adult-oriented AI services, potentially exposing children to inappropriate or misleading content.
The study examined a range of toys that incorporate conversational AI features, including interactive dolls, robots and learning devices. Many of these products allow children to talk to a toy that responds in natural language, powered by large language models similar to those behind popular AI chatbots.
While technology can make toys more interactive and educational, PIRG researchers argue that the safeguards built into some products may not be strong enough to protect younger audiences. In particular, the report highlights that the underlying AI systems often come from platforms designed primarily for general users rather than children.
Because of this, the AI responses generated by these toys could potentially contain information or conversation topics more suitable for adults than children. The report also warns that AI may provide inaccurate or unpredictable answers, which could confuse young users who tend to trust toys as a reliable source of information.
Researchers who reviewed the toys’ documentation and privacy policies also found that some products rely heavily on cloud-based AI systems.
This means that children’s voice interactions can be transmitted to external servers, where the data is processed and used to generate responses. Privacy advocates say this raises additional concerns about the storage and use of children’s data. Some toys may collect audio recordings, user prompts or other personal information during conversations. If these systems are not carefully designed to protect children’s privacy, the data could potentially be misused or stored without clear safeguards.
The report also notes that many AI-powered toys contain disclaimers in their terms of use or product documentation. These disclaimers sometimes state that AI responses may not always be accurate or appropriate, effectively shifting responsibility to parents while marketing the toy itself directly to children.
These findings matter because AI technology is increasingly embedded in everyday consumer goods, including items designed specifically for young audiences. Toys that simulate conversations can have a powerful influence on children, who often view them as companions or learning tools.
Experts say children may have difficulty distinguishing between reliable information and AI-generated answers that are speculative, biased or incorrect. As AI systems continue to evolve, it becomes increasingly important to ensure that these technologies are adapted to keep children safe.
The results also highlight a broader regulatory challenge.
While many countries have laws protecting children’s online privacy, such as the Children’s Online Privacy Protection Act (COPPA) in the United States, these regulations were developed before the advent of generative AI.
Advocacy groups argue that regulators may need to update safety standards and guidelines to clarify how AI systems interact with children through connected devices.
The PIRG report calls on toy manufacturers to adopt stronger safeguards, including stricter content filtering, clearer disclosure of AI usage and more transparent data practices. It also recommends that companies develop AI systems specifically for children rather than repurposing models originally designed for an adult audience.
Looking forward, researchers say collaboration between tech companies, regulators and child safety experts will be necessary to ensure AI-powered toys remain both innovative and safe.
As artificial intelligence becomes more integrated into everyday products, the challenge is balancing the benefits of interactive technology with the responsibility of protecting younger users from potential risks.