Introduction

Recent evaluations have raised significant concerns about the use of social AI companion chatbots by minors. These tools [1], while innovative, pose substantial risks to children and teenagers [1], prompting recommendations against their use by individuals under 18.

Description

A comprehensive assessment of social AI companion chatbots has determined that these tools pose significant risks to children and teens under 18, leading to a recommendation against their use by minors [3]. The evaluation [3], conducted by Common Sense Media with input from Stanford University School of Medicine [2], utilized a thorough framework to assess safety [3], fairness [1] [3], trustworthiness [1] [3], and the potential for human connection [3]. Key findings indicate that safety measures [3], including age verification protocols, are easily circumvented [1], allowing minors to access harmful content such as dangerous advice [3], sexual scenarios [5], and discussions about self-harm [1] [2]. Experts and parents have raised concerns that vulnerable adolescents, particularly those experiencing mental health challenges [5], may develop unhealthy emotional attachments to AI characters [4], which could negatively impact their mental health and emotional well-being. The report highlights the vulnerability of adolescents [3], whose developing brains may be adversely affected by these interactions [3], leading to increased mental health risks and emotional dependency on AI companions [3].

A notable case involves a civil suit against CharacterAI [2], where a mother claims her son died by suicide after forming a close relationship with a chatbot [2]. The company denies responsibility and seeks dismissal of the suit on free speech grounds [2]. In response to these findings [3], Common Sense Media is advocating for legislative measures in California and New York to establish safeguards for minors [3]. Proposed bills aim to create an AI standards board to evaluate and regulate AI technologies for children [3], ensuring transparency and privacy protections [3]. These legislative efforts include provisions to prohibit high-risk AI uses, such as anthropomorphic chatbots designed for companionship with children [5], which could lead to emotional manipulation [5]. The California Assembly Judiciary Committee has already approved one such bill [3], the Leading Ethical AI Development (LEAD) for Kids Act [3], which is now moving forward in the legislative process [3]. Additionally, California lawmakers have proposed legislation requiring AI services to remind young users that they are interacting with AI [4], not humans [4], and to implement protocols for handling discussions about self-harm [2]. SB 243 further mandates that developers of AI companion bots implement measures to reduce addictive design features and undergo regular compliance audits.

Moreover, Arkansas has enacted a significant children’s online privacy protection bill [3], reflecting a growing trend among states to address the risks associated with AI and online technologies for minors [3]. Recent reports have highlighted that companies like Meta and CharacterAI have faced scrutiny over their youth safety practices, with Meta restricting inappropriate interactions after incidents involving minors. CharacterAI has emphasized its commitment to teen safety by launching a dedicated version of its Large Language Model for users under 18 and has made updates to address safety concerns, including directing users to the National Suicide Prevention Lifeline when self-harm is mentioned and providing parents with activity reports [4]. Despite these measures [4] [5], researchers express concerns that teens can easily bypass age restrictions [4], and the presence of AI companions may discourage meaningful human relationships [4]. Ultimately, the risks associated with social AI companion apps for minors [4], including the potential for dangerous advice and inappropriate interactions [4], outweigh any perceived benefits. Further research is necessary to understand the long-term emotional and psychological impacts of these technologies [1], especially as many teens already engage with generative AI tools [2], which can lead to detrimental behaviors such as dropping out of school [2].

Conclusion

The implications of these findings are profound, highlighting the urgent need for regulatory measures to protect minors from the potential harms of AI companion chatbots. The ongoing legislative efforts in states like California and Arkansas represent a critical step towards ensuring the safety and well-being of young users. However, the ease with which minors can bypass existing safeguards underscores the necessity for continued vigilance and research to fully understand and mitigate the long-term impacts of these technologies on adolescent mental health and development.

References

[1] https://www.commonsensemedia.org/press-releases/ai-companions-decoded-common-sense-media-recommends-ai-companion-safety-standards
[2] https://calmatters.org/economy/technology/2025/04/kids-should-avoid-ai-companion-bots-under-force-of-law-assessment-says/
[3] https://www.transparencycoalition.ai/news/new-report-finds-ai-companion-chatbots-failing-the-most-basic-tests-of-child-safety
[4] https://www.cnn.com/2025/04/30/tech/ai-companion-chatbots-unsafe-for-kids-report
[5] https://mashable.com/article/ai-companions-for-teens-unsafe