Introduction

The following text outlines the significant strides made in the ethical and secure deployment of artificial intelligence (AI) across various sectors, particularly healthcare, following President Biden's signing of Executive Order 14110. This initiative aims to promote competition, safeguard civil liberties, and protect national security and US competitiveness in AI [1] [2].

Description

On October 30, 2023, President Biden signed Executive Order 14110, establishing a framework for the ethical and secure deployment of AI across various industries, including healthcare [1] [2]. This directive outlines policy goals aimed at promoting competition in the AI sector, safeguarding civil liberties and national security from AI-enabled threats, and ensuring US competitiveness in AI [1] [2]. Federal agencies are required to appoint chief AI officers (CAIOs) and to develop guidelines for the safe and trustworthy use of AI systems, particularly in healthcare [2].

In early 2024, the integration of AI into clinical settings became increasingly prevalent, with scribe and clinical-decision-support tools gaining traction [1] [2]. The Department of Health and Human Services (HHS) appointed a CAIO to draft AI policies and oversee their implementation in public health systems [1] [2]. The FDA and the Centers for Medicare & Medicaid Services (CMS) formed specialized AI task forces led by these officers [1] [2], while the National Institute of Standards and Technology (NIST) updated its AI Risk Management Framework to provide guidelines for identifying and mitigating risks associated with AI technologies [2].

In response to the Executive Order, HHS released a comprehensive AI Strategy focused on advancing AI in healthcare, fostering partnerships within the health ecosystem, and ensuring responsible AI use [1] [2]. This included a plan for promoting ethical AI use in public benefits administration by state, local, tribal, and territorial governments [1] [2]. The Agency for Healthcare Research and Quality (AHRQ) launched the AI in Healthcare Safety Program to enhance patient safety, particularly by reducing medication errors and improving clinical decision support systems [1] [2]. AHRQ also implemented AI safety audits for healthcare facilities that use AI tools, to ensure compliance with federal standards [2].

On December 4, 2024, the FDA issued Final Guidance on Predetermined Change Control Plans (PCCPs) for AI-Enabled Device Software Functions, allowing manufacturers to include pre-authorized modifications in marketing submissions for AI-enabled devices [1] [2]. The FDA’s Center for Drug Evaluation and Research (CDER) plans to issue draft guidance on using AI-generated data to support regulatory decisions for drugs and biological products, reflecting a commitment to a regulatory framework that promotes safe AI integration in medical products [1] [2].

HHS finalized the HTI-1 Final Rule, enhancing interoperability in health information technology and requiring developers to disclose information about algorithms used in certified Health IT products, including their intended use and limitations [1] [2]. This transparency aims to build trust and support informed decision-making in clinical settings [2]. In September 2024, the Federal Trade Commission (FTC) launched “Operation AI Comply,” targeting deceptive or unfair AI practices across sectors, including healthcare [1] [2], highlighting the federal government’s increasing scrutiny of AI applications to ensure ethical and transparent deployment [2].

Legislative efforts included the introduction of the Healthcare Enhancement And Learning Through Harnessing Artificial Intelligence Act (Health AI Act) in February 2024, proposing a grant program, overseen by the National Institutes of Health (NIH), for research on generative AI in healthcare to improve healthcare practices and reduce administrative burdens [1] [2]. The House Task Force on AI, established in February 2024, released a report in December 2024 outlining principles and recommendations for AI regulation [1] [2]. The report emphasizes AI’s potential benefits in healthcare, such as enhancing drug development and clinical decision-making, while addressing challenges like regulatory uncertainty and data privacy concerns [2].

The global landscape also saw significant developments, with the World Health Organization releasing guidance on the ethics of large language models and the European Parliament passing the EU AI Act, which establishes a risk-based regulatory framework for AI [1]. The year concluded with a focus on responsible AI use in healthcare, driven by both federal and state-level actions aimed at ensuring ethical deployment and addressing privacy, security, and bias concerns [1] [2].

Conclusion

The initiatives and regulatory measures introduced in 2023 and 2024 underscore a concerted effort to integrate AI responsibly and securely into healthcare and other sectors. These actions are expected to enhance patient safety, improve clinical decision-making, and foster innovation while addressing ethical, privacy, and security concerns [1] [2]. The global collaboration and legislative efforts further highlight the importance of a unified approach to AI governance, ensuring that AI technologies are deployed in a manner that benefits society as a whole.

References

[1] https://www.jdsupra.com/legalnews/healthy-ai-2024-year-in-review-2184474/
[2] https://www.lexology.com/library/detail.aspx?g=78a7c4af-3299-4095-82e2-c9440b8cb6ed