
Algorithmic Transparency in the Public Sector: Updated Report Released
The Global Partnership on Artificial Intelligence (GPAI) has published an updated report on algorithmic transparency instruments in the public sector, emphasizing the importance of public algorithm repositories for enhancing accountability and trust in AI governance.

Navigating the Complex Regulatory Landscape of AI: Key Considerations for Legal Counsel
As jurisdictions worldwide, including the EU and various US states, implement new AI regulations and scrutinize M&A activities, companies must navigate compliance challenges and potential antitrust concerns while engaging in strategic partnerships and cross-border transactions.

Bridging the AI Governance Talent Gap in Legal Departments
The rapid advancement of generative AI has exposed a significant talent gap in legal departments, requiring proactive engagement in AI governance to mitigate regulatory, ethical, and operational risks, as the Moffatt v. Air Canada case illustrates, and to align with frameworks such as the EU AI Act.

Navigating AI Regulations in Healthcare: Challenges and Opportunities
The rapid integration of AI technologies in healthcare is creating a complex regulatory landscape in the U.S., where state and federal laws must adapt to ensure transparency, accountability, and fairness while addressing liability concerns and algorithmic bias.

Matthew F. Ferraro Joins Crowell & Moring as Partner in Privacy and Cybersecurity Group
Matthew F. Ferraro has joined Crowell & Moring, bringing more than a decade of experience in cybersecurity and emerging technology. His practice focuses on regulatory matters involving artificial intelligence, data privacy, and national security, and he emphasizes the need for legal professionals to stay informed about evolving regulations and technological risks.

Governance Challenges of AI Integration in Microsoft 365 Environments
Organizations must establish robust governance models and data lifecycle management policies to mitigate risks associated with AI technologies like Copilot, ensuring compliance with international data protection laws while addressing the regulatory and operational challenges faced by legal departments.

Meta Wins Copyright Lawsuit Over AI Training, Affirming Fair Use Doctrine
A US District Court ruled in favor of Meta in a copyright lawsuit brought by authors including Ta-Nehisi Coates and Sarah Silverman, finding that the company’s use of their works for AI training fell under the fair use doctrine while highlighting ongoing legal uncertainties at the intersection of AI and copyright law.

Getty Images vs. Stability AI: Landmark Copyright Trial on Generative AI
Getty Images is pursuing legal action against Stability AI in the British High Court, alleging unauthorized use of copyrighted images for AI training, while Stability AI defends its practices under fair use, raising significant questions about copyright law, trademark rights, and the ethical implications of AI development.

Bartz v. Anthropic Ruling: Fair Use in AI Training Addressed
US District Court Judge William Alsup ruled that while Anthropic PBC’s downloading of copyrighted works constituted infringement, its use of those works to train large language models was transformative under the fair use doctrine, raising significant legal questions about data acquisition methods and the implications for AI developers and copyright holders.

Proposed 10-Year Moratorium on State AI Regulations Sparks Controversy
House Republicans are pushing for a 10-year federal moratorium on state-level AI regulations, facing bipartisan opposition from state leaders and legal experts concerned about potential overreach and the implications for consumer protection.

AI Integration in Legal Profession Raises Accuracy and Ethical Concerns
Lawyers must navigate the complexities of AI use, particularly regarding accuracy and the doctrine of precedent, as high courts in jurisdictions like the UK, South Africa, and the US caution against reliance on AI-generated content due to risks of misinformation and hallucinations.

Texas Enacts Responsible AI Governance Act to Regulate AI Systems
The Texas Responsible Artificial Intelligence Governance Act (TRAIGA), effective January 1, 2026, establishes regulations for AI systems in both the public and private sectors, focusing on safety, transparency, and ethical considerations while promoting innovation; enforcement falls to the Texas Attorney General, and potential federal preemption remains a possibility.