Introduction

The concept of ‘duty of care’ is increasingly significant in AI technology: it defines the responsibilities of AI developers and deployers to prevent harm to consumers, mirroring standards in other industries and a growing policymaking trend, especially around children’s online safety.

Description

A ‘duty of care’ in AI technology establishes the responsibility that AI developers and deployers have toward consumers to prevent harm from their products [1]. This expectation aligns with standards in other industries and reflects a broader policymaking trend, particularly regarding children’s online safety [2]. A key step toward integrating this duty into the AI sector is legally recognizing AI systems as products; an early ruling in the Character.AI case, for example, treated a chatbot as a product rather than speech [1].

As of mid-March, legislators in 10 states had introduced a total of 11 AI ‘duty of care’ liability bills [1]. Notably, North Carolina’s legislation would impose a “duty of loyalty” on platforms toward their users [2]; because that duty is only vaguely defined in existing case law, it could drive overcorrection in compliance efforts, hindering access to AI companions for minors and potentially for all users [2].

The Transparency Coalition has made establishing robust product liability laws for artificial intelligence systems a priority, aiming for significant legislative action in 2025 [1]. As states explore AI companion legislation, the debate spans protecting children from online harms, the responsibilities of platforms, age verification, and concerns about addictive technology [2]. States should avoid enacting laws that restrict children’s access to beneficial technologies like chatbots, since vague obligations could produce unintended consequences for users of all ages [2]. A more nuanced understanding of the advantages of AI companions and chatbots is needed before addressing their potential drawbacks [2].

Conclusion

Integrating a ‘duty of care’ into AI technology has significant implications for consumer safety and industry standards. Legislative efforts across various states underscore the importance of clear and effective product liability laws. However, these regulations must be balanced to avoid unintended restrictions on beneficial technologies, particularly for younger users. A comprehensive understanding of both the benefits and the potential risks of AI companions is essential to inform future policymaking and to ensure the responsible development and deployment of AI systems.

References

[1] https://www.transparencycoalition.ai/news/important-early-ruling-in-characterai-case-this-chatbot-is-a-product-not-speech
[2] https://itif.org/publications/2025/05/21/ai-companions-risk-over-regulation-with-state-legislation/