The US Department of Justice (DOJ) is intensifying its oversight of artificial intelligence (AI) usage by companies, emphasizing the importance of robust compliance programs [2]. This initiative aims to ensure that AI technologies are used responsibly and in accordance with legal standards, while mitigating associated risks [1].

Description

Prosecutors are prioritizing oversight of corporate AI usage [2], emphasizing the need for compliance programs that effectively monitor this technology [2]. In September 2024 [1] [3], the DOJ updated its Evaluation of Corporate Compliance Programs guidance [4], highlighting the risks associated with AI and other emerging technologies. The update builds on the DOJ's Justice AI initiative [1] [4], announced in February 2024, which engages experts to assess the impact of AI on the DOJ's mission and to promote its beneficial use while mitigating associated risks [1]. Companies are urged to reassess their compliance policies and controls in light of these updates [3], ensuring that those controls meet legal standards and support trustworthy use of AI [4]. The DOJ is increasingly scrutinizing corporate compliance programs during criminal investigations and is committed to using existing legal frameworks to address AI-related misconduct.

These compliance programs must be well-resourced, with access to relevant data and to the same technological tools the business itself uses [2]. Compliance functions should not be isolated from the data the company generates, and should employ advanced tools [2], including data analytics, to evaluate their own effectiveness [2]. The updated guidance directs companies to evaluate how they manage AI-related risks [3], including the potential for misuse, such as AI-generated false approvals or documentation [3]. Companies are also advised to disclose material risks associated with their AI technology in filings and reports [1], and to monitor SEC guidance on such disclosures [1].

State Attorneys General in Texas, Massachusetts [1], and California [1], among other states, have cautioned companies using AI to comply with existing laws [1]. US authorities are scrutinizing corporate claims about AI technology [1], urging companies to avoid misleading representations and to ensure that their statements accurately reflect the technology's actual capabilities [1]. The Federal Trade Commission (FTC) has issued guidance requiring that AI-related claims be substantiated and has reiterated that existing laws apply to AI. Companies should approach AI implementation with the same diligence as other business areas [1], including testing [1], oversight [1] [2] [4], and disclosures [1].

When conducting risk assessments and developing best practices [2], companies should comprehensively examine the issues pertinent to their operational areas [2]. They are encouraged to draw lessons both from their own past misconduct and from the compliance challenges encountered by others in their industry or region [2]. Establishing internal policies for AI use [1], educating boards on disclosure requirements [1], and staying informed on agency guidance are also recommended practices [1]. Employee training should be interactive [4], tailored [3] [4], and measurable [4], and the DOJ highlights the significance of whistleblower protections [4], advocating mechanisms that track employees' comfort in reporting issues and safeguard against retaliation [4]. Companies are further urged to maintain ongoing oversight of third-party relationships beyond initial onboarding [4], implement robust compliance audit processes [4], and strengthen integration efforts following mergers [4].

The DOJ’s initiatives reflect a broader trend toward higher standards for corporate compliance programs [3], advocating a proactive approach tailored to each company’s unique risk profile [3]. As these changes take effect, companies should ensure that their compliance frameworks are robust enough to mitigate the risks posed by AI and emerging technologies.

Conclusion

The DOJ’s enhanced focus on AI compliance underscores the growing importance of responsible AI usage in the corporate sector. By aligning compliance programs with updated legal standards and leveraging advanced tools, companies can better manage AI-related risks. This proactive approach not only safeguards against potential misconduct but also fosters trust and transparency in AI applications, ultimately contributing to a more secure and ethical technological landscape.

References

[1] https://wp.nyu.edu/compliance_enforcement/2024/10/11/ftc-announces-new-enforcement-initiative-targeting-deceptive-ai-practices/
[2] https://www.jdsupra.com/legalnews/doj-updates-criteria-for-review-of-9596992/
[3] https://www.jdsupra.com/legalnews/doj-updates-corporate-compliance-8002719/
[4] https://www.jdsupra.com/legalnews/episode-340-doj-updates-evaluation-of-c-14329/