Introduction
The Oregon Attorney General’s Office [1] [3], under the leadership of Attorney General Ellen Rosenblum, has issued comprehensive guidance on how existing state laws apply to businesses that use artificial intelligence (AI) technologies. The guidance clarifies the legal landscape for AI use in Oregon, particularly for healthcare organizations and small businesses, and addresses both the transformative potential of AI and its associated risks, such as privacy concerns and discrimination [4].
Description
The guidance emphasizes that while Oregon has not enacted AI-specific legislation, several existing laws significantly influence AI use. It acknowledges AI’s potential to strengthen the state’s economy by streamlining tasks, offering personalized services [4], and enabling data-driven decision-making [4]. However, it also highlights risks, including consumer privacy issues, discrimination [1] [2] [3] [4], and accountability challenges [4], especially where AI systems ingest large amounts of personal data, raising the risk of data breaches and unauthorized use [4].
Key regulatory frameworks relevant to AI development and use in Oregon are outlined. Existing law prohibits misrepresentations in consumer transactions involving AI products or services [4], holding companies accountable for providing accurate information [4]. Businesses must disclose their use of personal data in AI training, obtain explicit consent before processing sensitive information [1] [3], and allow consumers to opt out of AI-driven profiling used for significant decisions [4]. Data Protection Assessments are required for high-risk AI activities [4], and retroactive changes to privacy notices are prohibited [4].
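To make these obligations concrete, the sketch below shows one hypothetical way a business might gate AI processing behind consent and opt-out checks. The record fields, the purpose labels, and the gating logic are illustrative assumptions for this example, not requirements drawn from the guidance itself.

```python
from dataclasses import dataclass

@dataclass
class ConsumerRecord:
    """Hypothetical consumer record; field names are illustrative."""
    consumer_id: str
    is_sensitive: bool          # e.g., health or biometric data
    has_explicit_consent: bool  # consent captured for sensitive processing
    opted_out_of_profiling: bool

def may_use_for_ai(record: ConsumerRecord, purpose: str) -> bool:
    """Return True only if the record may be used for the given AI purpose.

    Encodes, in simplified form, two obligations noted in the guidance:
    sensitive data requires explicit consent, and consumers may opt out
    of AI-driven profiling used for significant decisions.
    """
    if record.is_sensitive and not record.has_explicit_consent:
        return False
    if purpose == "profiling" and record.opted_out_of_profiling:
        return False
    return True

# Example: a consumer who opted out of profiling is excluded from that use
# but may still appear in other permitted processing.
r = ConsumerRecord("c-123", is_sensitive=True,
                   has_explicit_consent=True, opted_out_of_profiling=True)
print(may_use_for_ai(r, "training"))   # True
print(may_use_for_ai(r, "profiling"))  # False
```

In practice such checks would sit in front of any pipeline that feeds personal data into model training or automated decision-making, so that consent and opt-out status are enforced at the point of use rather than assumed upstream.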
Companies are also required to implement appropriate safeguards for personal data used in AI systems and to notify consumers and the Attorney General in the event of a security breach [4]; violations are enforceable under the Unlawful Trade Practices Act (UTPA) [4]. The guidance warns against AI systems producing biased outcomes based on protected characteristics [4], particularly in automated decision-making contexts such as housing and lending [4], where historical biases in training data may lead to discrimination [4].
In addition to existing regulations, Oregon lawmakers have enacted Senate Bill 1571 [1] [2], which requires political campaigns to disclose the use of AI to manipulate audio or video [1] [2], including deepfakes [1] [2], intended to influence voters [1] [2]. This legislation underscores the necessity of transparency in AI applications within political contexts [2].
Businesses may violate Oregon’s comprehensive privacy law if they fail to disclose their use of personal information in conjunction with AI tools [3]. They must provide clear privacy notices about how personal data is used and obtain explicit consent before processing sensitive information [1], including data input into AI systems [3]. Consumers must also be given the option to withdraw consent and to opt out of AI profiling for significant decisions [3]. A lack of transparency about material defects in AI tools [3], such as inaccuracies from third-party virtual assistants [3], may be interpreted by the Attorney General as a breach of the Unlawful Trade Practices Act.
The guidance identifies specific AI applications that could contravene the Unlawful Trade Practices Act [3], including failing to inform users that they are interacting with an AI tool [3], misrepresenting the capabilities of the AI [3], or disseminating false information [1]. Additionally, using AI-generated voices for robocalls without proper disclosure of the caller’s identity is highlighted as a potential violation [3]. Companies that engage in unethical practices, such as price gouging or creating false urgency, may also face liability under multiple laws [1].
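As a simple illustration of the failure-to-inform point, the sketch below shows a hypothetical chat handler that discloses its AI nature on the first turn of a session. The function names and the generate_reply stub are assumptions made for this example; the guidance does not prescribe any particular implementation.

```python
AI_DISCLOSURE = ("You are chatting with an automated AI assistant, "
                 "not a human representative.")

def generate_reply(message: str) -> str:
    """Stand-in for a real AI model call; illustrative only."""
    return f"(AI response to: {message})"

def handle_chat(message: str, session_state: dict) -> str:
    """Prepend a one-time AI disclosure so users know they are
    interacting with an AI tool, addressing the failure-to-inform
    risk described above."""
    reply = generate_reply(message)
    if not session_state.get("disclosed", False):
        session_state["disclosed"] = True
        return f"{AI_DISCLOSURE}\n{reply}"
    return reply

# Example: the disclosure appears on the first turn only.
state: dict = {}
print(handle_chat("What are your store hours?", state))
print(handle_chat("Do you ship to Oregon?", state))
```

The same idea extends to other channels flagged in the guidance, such as identifying the caller when AI-generated voices are used in outbound calls.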
Discriminatory use of AI based on race [3], gender [3], or other protected characteristics would violate the Oregon Equality Act [3], which prohibits discrimination based on identity characteristics; the guidance raises related concerns about potential biases in AI outputs [1]. Companies are reminded of their responsibilities under the state’s data security law [3], which mandates reasonable safeguards to protect personal information when using AI technologies [3]. Furthermore, under the Oregon Consumer Information Protection Act [1], businesses must notify consumers of data breaches and keep pace with evolving legal standards related to AI and emerging technologies. The guidance also discusses strategies for mitigating the legal risks of AI use [2], emphasizing that adherence to these obligations helps ensure the security of data processed through AI systems.
Conclusion
The guidance from the Oregon Attorney General’s Office serves as a crucial resource for businesses navigating the complexities of AI technology within the state. By clarifying how existing laws apply and highlighting new legislative measures such as Senate Bill 1571, Oregon aims to balance the benefits of AI with the protection of consumer rights and privacy. This approach fosters innovation while ensuring accountability and ethical standards in AI deployment, contributing to a more secure and equitable technological landscape.
References
[1] https://www.constangy.com/constangy-cyber-advisor/oregon-attorney-general-issues-ai-guidance-for-businesses
[2] https://www.jdsupra.com/legalnews/oregon-attorney-general-issues-ai-4724632/
[3] https://www.jdsupra.com/legalnews/oregon-s-ai-guidance-old-laws-in-scope-9266463/
[4] https://gdprlocal.com/ai-and-oregon-businesses-understanding-the-legal-landscape/