Introduction
On January 6, 2025 [1] [7], the US Food and Drug Administration (FDA) released draft guidance on the use of artificial intelligence (AI) in regulatory decision-making for drugs [7], biological products [1] [2] [7], and medical devices [3] [4] [5] [6]. This guidance introduces a risk-based credibility assessment framework applicable throughout the total product lifecycle, emphasizing AI’s role in generating data for regulatory determinations on safety [2], effectiveness [1] [2] [4] [6], and quality [1] [2] [6] [7].
Description
The draft guidance outlines a risk-based credibility assessment framework that incorporates feedback from more than 800 comments received on two discussion papers published in 2023. It applies throughout the total product lifecycle (TPLC) and emphasizes AI’s role in generating data that informs regulatory determinations regarding safety [2], effectiveness [1] [2] [4] [6], and quality [1] [2] [6] [7].
The guidance explicitly excludes AI applications related to drug discovery, as well as operational streamlining that does not impact patient safety or drug quality [2]. It underscores the need for comprehensive AI policies covering risk evaluation [6], data management [3] [4] [5] [6], transparency [4] [6] [7] [8], validation [3] [4] [5] [6] [7], and cybersecurity [3] [6]. The guidance provides a detailed seven-step process for assessing AI models [5]: defining the question of interest [3] [5], defining the context of use (COU) [2] [7], assessing model risk [2] [5] [6] [8], developing a credibility assessment plan [2] [6] [8], executing that plan [2] [3] [8], documenting results [2] [3] [5] [8], and determining model adequacy for the intended COU [5]. This structured approach aims to help manufacturers organize and document information that supports regulatory decision-making regarding AI model outputs [5].
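The seven-step process above is essentially a sequenced checklist that a sponsor works through and documents. As a purely illustrative sketch (not part of the guidance, and all class, field, and step names here are this sketch's own assumptions rather than FDA terminology), it could be tracked with a minimal data structure like this:

```python
from dataclasses import dataclass, field

# Illustrative only: a minimal checklist modeling the seven-step credibility
# assessment process described in the draft guidance. Names are assumptions.
CREDIBILITY_STEPS = [
    "Define the question of interest",
    "Define the context of use (COU)",
    "Assess model risk",
    "Develop a credibility assessment plan",
    "Execute the credibility assessment plan",
    "Document results and deviations",
    "Determine model adequacy for the COU",
]

@dataclass
class CredibilityAssessment:
    """Tracks completion of each step for one AI model submission."""
    model_name: str
    completed: set = field(default_factory=set)

    def complete(self, step: str) -> None:
        if step not in CREDIBILITY_STEPS:
            raise ValueError(f"Unknown step: {step}")
        self.completed.add(step)

    def remaining(self) -> list:
        # Preserve the guidance's step order when reporting what is left.
        return [s for s in CREDIBILITY_STEPS if s not in self.completed]

    def ready_for_adequacy_determination(self) -> bool:
        return not self.remaining()
```

The point of the sketch is the ordering: risk assessment (step 3) precedes and scales the credibility assessment plan (step 4), which is why the guidance ties the depth of documentation to model risk and COU.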
Establishing credibility for AI model outputs is crucial [2], requiring detailed descriptions of model architecture, data sources [7], training processes [7], evaluation metrics [7], user characteristics [3] [5], intended use environments [3] [5], and the degree of automation compared to current standards of care [3]. Recognizing that AI models may evolve [2], the guidance mandates a lifecycle maintenance plan to monitor model performance and manage changes to ensure ongoing suitability. Manufacturers must report any changes that impact model performance and maintain detailed plans as part of their quality systems [2]. A comprehensive risk assessment and management plan [3] [5] [8], along with data management information and a cybersecurity assessment focusing on AI-specific risks [3], are also required. Transparency is emphasized, with sponsors required to disclose information about the AI model [8], its limitations [8], and how it will be maintained [8].
The level of detail required for the credibility assessment may vary based on the model’s risk and context of use [2]. High-risk models necessitate extensive disclosures, while lower-risk models may require less detailed information [7]. Early engagement with the FDA is encouraged to clarify expectations for credibility assessments and identify potential challenges [2]. The guidance suggests discussing the timing and submission of the credibility assessment report [2], which may be included in regulatory submissions or provided upon request [2].
Documentation and information required for marketing submissions of devices with AI-enabled software functions include strategies to enhance transparency and mitigate bias [4]. Sponsors must clearly describe AI components in marketing submissions and disclose essential information in device labels [4]. Clear explanations of data management are necessary to demonstrate the development and validation of AI-enabled devices [4], including data collection [4], independence of development and test data [4], reference standards [3] [4] [5], and representativeness [4]. Validation testing is encouraged to assess model performance for its intended use [4], along with proactive performance monitoring and a performance monitoring plan in certain premarket submissions [4].
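As a purely illustrative sketch of what the proactive performance monitoring described above might look like in practice (the function, metric, and tolerance value are assumptions for illustration, not from the guidance), a monitoring plan often reduces to comparing live performance against the value established during validation:

```python
# Illustrative sketch: flag degradation of a deployed model's performance
# relative to its validated baseline, beyond a pre-specified tolerance.
# The metric choice and tolerance are assumptions, not FDA requirements.

def performance_drift_detected(validated_score: float,
                               live_score: float,
                               tolerance: float = 0.05) -> bool:
    """Return True if live performance fell more than `tolerance`
    below the score established during validation testing."""
    return (validated_score - live_score) > tolerance
```

For example, a device validated at 0.92 sensitivity but observing 0.85 in the field would trip a 0.05 tolerance, triggering whatever corrective action the sponsor's performance monitoring plan pre-specifies.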
Overall, the draft guidance addresses a broad range of AI applications in drug [2], biological product [1] [2] [7], and medical device development while highlighting the need for a structured approach to credibility assessment [2]. It reflects the FDA’s current thinking on AI in regulated products and suggests a potential shift towards increased patent protection and governance innovations in response to regulatory expectations [7]. The FDA is inviting public comments on the draft guidance until April 7, 2025 [4], and will host a webinar on February 18, 2025, to discuss the guidance related to medical devices [4]. The draft guidance marks a significant step in formalizing the integration of AI technologies into regulatory processes, establishing a comprehensive framework for risk assessment [4], model credibility [2] [3] [4] [6] [7] [8], and compliance with ethical and privacy standards [4].
Conclusion
The FDA’s draft guidance on AI in regulatory decision-making represents a pivotal development in the integration of AI technologies within the healthcare sector. By establishing a comprehensive framework for risk assessment and model credibility [4], the guidance aims to ensure the safe and effective use of AI in drug [4], biological product [1] [2] [7], and medical device development. This initiative underscores the importance of transparency, data management [3] [4] [5] [6], and cybersecurity in AI applications, while also encouraging early engagement with the FDA to navigate potential challenges. As the FDA continues to refine these guidelines, ongoing collaboration with industry stakeholders will be crucial in addressing future challenges and fostering innovation in AI-driven drug development.
References
[1] https://www.nsf.org/life-science-news/fda-draft-guidance-on-use-of-ai-to-support-regulatory-decision-making-for-drug-and-biological-products
[2] https://www.jdsupra.com/legalnews/key-takeaways-from-fda-s-draft-guidance-8731549/
[3] https://natlawreview.com/article/fda-issues-new-recommendations-use-ai-medical-devices-drugs-and-biologics
[4] https://nquiringminds.com/ai-legal-news/fda-releases-draft-guidance-on-ai-integration-in-drug-development/
[5] https://www.afslaw.com/perspectives/ai-law-blog/fda-issues-new-recommendations-use-ai-medical-devices-drugs-and-biologics
[6] https://www.jdsupra.com/legalnews/fda-issues-draft-guidances-on-ai-in-6745299/
[7] https://natlawreview.com/article/ai-drug-development-fda-releases-draft-guidance
[8] https://www.jdsupra.com/legalnews/fda-issues-draft-guidance-documents-on-8643682/