Introduction
The US Food and Drug Administration (FDA) has been actively evaluating artificial intelligence (AI) technologies in regulatory submissions since 2016, with a particular focus on drug development. On January 6, 2025, the FDA released two draft guidance documents providing comprehensive recommendations for the integration of AI in the development and marketing of drugs, biologics, and medical devices [1] [2] [5] [9]. The documents are designed to ensure the safety, effectiveness, and reliability of AI applications in the medical field [3] [4] [6] [7] [8].
Description
The first document, titled “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations,” outlines the necessary components of marketing submissions involving AI-enabled device software [2] [3] [9]. It adopts a Total Product Life Cycle (TPLC) approach to ensure the safety and performance of AI systems over time, with recommendations for the design, development, testing, and post-market monitoring of AI-enabled devices [5] [7] [8]. The guidance emphasizes transparency and bias mitigation, including comprehensive data collection across diverse demographic groups [7] [9], and it underscores the importance of ongoing performance monitoring while delineating the responsibilities of device sponsors in this context [2].
The second document, “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products,” introduces a risk-based credibility assessment framework aimed at maintaining the reliability of AI model outputs throughout their lifecycle. The framework helps manufacturers and other stakeholders plan, gather, organize, and document the information needed to establish the credibility of AI-generated data [5] [6] [8]. For AI-enabled devices, key submission components include detailing device inputs and outputs, clarifying the role of AI in achieving the device’s intended purpose, and describing the characteristics and training of intended users [4]. The FDA highlights AI’s potential to expedite the development of safe and effective drugs and to improve patient care, noting a growing reliance on AI in drug regulatory submissions [9].
Transparency is a critical aspect of this framework: sponsors are expected to provide public submission summaries detailing the use of AI, including the model class, its limitations, the development and validation datasets, statistical confidence levels, and plans for model updates and maintenance [3] [4] [5] [9]. Manufacturers must also consider the intended use environment and the degree of automation relative to the current standard of care, and must develop a comprehensive risk assessment and management plan [4]. In addition, data management information must be provided, including data collection methods, dataset limitations, and a cybersecurity assessment focused on AI-specific risks [4] [5].
The FDA outlines a seven-step process for establishing the credibility of AI model outputs, which includes defining the question of interest, assessing AI model risk, and documenting the results of the credibility assessment [4]. The draft guidance incorporates feedback from more than 800 comments received on two discussion papers published in 2023, and it highlights the importance of managing model changes to ensure ongoing suitability throughout the drug product’s life cycle. Sponsors planning to use AI to support safety, effectiveness, or quality assessments for drug or biological products should consider the FDA’s credibility assessment framework early in the design and training of their AI models [3].
The device guidance also details the documentation and information required for marketing submissions involving AI-enabled device software functions, including strategies to enhance transparency and mitigate bias [5]. Sponsors must clearly describe AI components in marketing submissions and disclose essential information in device labeling. Validation testing is encouraged to characterize model performance, along with proactive performance monitoring plans in certain premarket submissions [5].
These guidances reflect the FDA’s rigorous scrutiny of AI applications in regulated medical products and underscore the importance of aligning regulatory submissions with the new guidance documents to avoid potential delays in the approval process [4] [6]. Early collaboration between AI experts and in-house legal counsel is recommended to ensure compliance with FDA regulations [3]. Ultimately, the framework is intended to support regulatory decision-making by ensuring that data produced by AI models are reliable and valid, particularly with respect to the safety, effectiveness, or quality of drugs and biologics [1] [8].
Public comments on the drafts are invited until April 7, 2025, and a webinar is scheduled for February 18, 2025, to discuss the guidance [3] [5]. The FDA’s ongoing evaluation of AI technologies aims to ensure their safe and effective use in drug development while encouraging innovation and maintaining rigorous standards for patient safety and product integrity.
Conclusion
The FDA’s release of these draft guidance documents marks a significant step in the integration of AI technologies within the regulatory framework for medical products. By establishing clear guidelines and emphasizing transparency, bias mitigation, and ongoing performance monitoring [9], the FDA aims to foster innovation while ensuring the safety and effectiveness of AI applications. These efforts are expected to streamline regulatory submissions, enhance patient care, and maintain high standards of product integrity, ultimately benefiting both manufacturers and consumers in the healthcare industry.
References
[1] https://www.nsf.org/life-science-news/fda-draft-guidance-on-use-of-ai-to-support-regulatory-decision-making-for-drug-and-biological-products
[2] https://aimedalliance.org/fda-issues-draft-guidance-on-ai-in-medical-devices-and-drug-development/
[3] https://www.sternekessler.com/news-insights/client-alerts/fda-issues-draft-guidance-documents-on-artificial-intelligence-for-medical-devices-drugs-and-biological-products/
[4] https://natlawreview.com/article/fda-issues-new-recommendations-use-ai-medical-devices-drugs-and-biologics
[5] https://nquiringminds.com/ai-legal-news/fda-releases-draft-guidance-on-ai-integration-in-drug-development/
[6] https://www.afslaw.com/perspectives/ai-law-blog/fda-issues-new-recommendations-use-ai-medical-devices-drugs-and-biologics
[7] https://www.jdsupra.com/legalnews/fda-issues-draft-guidance-for-the-use-2791276/
[8] https://www.jdsupra.com/legalnews/fda-issues-new-recommendations-on-use-7109524/
[9] https://www.manatt.com/insights/newsletters/health-highlights/f-da-releases-new-draft-guidance-on-use-of-ai-in-m