Introduction
The U.S. Food and Drug Administration (FDA) has been actively evaluating regulatory submissions that incorporate artificial intelligence (AI) technologies, particularly since 2016 [2]. With the increasing use of AI in drug development, the FDA has released draft guidance addressing the integration of AI in the development of human and animal drugs and biological products. This guidance outlines a framework for assessing the risks associated with AI models in regulatory decision-making, emphasizing the importance of model credibility, risk assessment [1] [2] [6] [7], and compliance with privacy and ethical standards.
Description
The FDA has extensive experience in evaluating regulatory submissions that incorporate AI technologies [2], particularly since 2016 [2], when the use of AI in drug development surged [2]. On January 6, 2025 [7], the FDA released draft guidance addressing the development of human and animal drugs and biological products when AI models generate data that inform regulatory decisions regarding safety [1], effectiveness [1] [2] [4], or quality [1] [2] [4] [7]. This guidance outlines a seven-step framework for assessing the risks associated with AI models in regulatory decision-making and represents the FDA’s first formal recommendations on integrating AI in drug development. It emphasizes the importance of explaining the relative risk of AI models [3], particularly regarding potential biases in datasets that could affect the reliability of results [3]. The guidance was shaped through multi-stakeholder engagement [1], including input from sponsors [1], technology developers [1] [2] [4], and academia [1] [2] [4], and is based on the FDA’s experiences with over 500 drug and biologic submissions that incorporate AI elements [1].
The regulation of AI used in clinical trials will depend on a multi-tier analysis [5], focusing on the software’s composition [5], capabilities [5], and specific use within the study [5]. AI applications that assist in patient matching to studies or facilitate routine trial tasks [5], such as data cleaning [5], are generally not expected to be closely regulated [5]. However, AI models deemed high risk [5], which could affect patient safety or drug quality [7], may face stringent standards for acceptance and documentation [5]. This necessitates comprehensive details about the model’s architecture [7], data sources [5] [7], training methodologies [3] [7], and validation processes [5] [7]. A key aspect of the FDA’s approach is the risk-based credibility assessment framework [1], which assists sponsors in evaluating the reliability of AI models based on their context of use [1]. This evaluation considers the potential impact of AI outputs on safety and quality decisions [1], leading to an overall risk classification that informs the necessary depth of testing and post-deployment scrutiny [1].
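The risk classification described above weighs the model's influence on a decision against the consequence of that decision. As a rough illustration only, this could be sketched as a simple matrix lookup; the tier names and the exact mapping below are assumptions for illustration, not the agency's official classification scheme.

```python
# Illustrative sketch of a risk-based classification combining "model
# influence" and "decision consequence" into an overall model risk tier.
# The three-level scale and the scoring rule are assumptions made for
# this example, not the FDA's prescribed matrix.

LEVELS = ("low", "medium", "high")

def model_risk(influence: str, consequence: str) -> str:
    """Map model influence and decision consequence to a risk tier."""
    score = LEVELS.index(influence) + LEVELS.index(consequence)
    if score <= 1:
        return "low"
    if score == 2:
        return "medium"
    return "high"

# Example: a model whose output strongly drives a patient-safety
# decision lands in the highest tier, triggering the deepest scrutiny.
print(model_risk("high", "high"))  # high
print(model_risk("low", "low"))    # low
```

Under a scheme like this, the resulting tier would then inform how much validation evidence and post-deployment monitoring the sponsor plans for.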
A critical element in applying AI models in drug development and regulatory assessments is establishing model credibility [2], which refers to the degree of trust in an AI model’s performance for a specific context of use [2]. The FDA has developed a risk-based framework to help sponsors evaluate and demonstrate the credibility of AI models [2] [4], ensuring that the outputs are reliable for their intended use [2]. This framework aligns with the FDA’s review processes for drug and biological product submissions that include AI components [2]. The guidance emphasizes the importance of defining the “question of interest,” which pertains to the specific decision or concern addressed by the AI model [7], particularly in human clinical trials [7]. Ongoing monitoring is essential for maintaining the dependability of AI models throughout a drug product’s lifecycle [1], with regular evaluations and performance checks necessary to ensure models remain accurate [1], especially as conditions change in patient populations or manufacturing processes [1]. The FDA has also outlined specific outcomes if the credibility of an AI model is deemed insufficient by either the sponsor or the agency [6], addressing concerns about data drift and post-approval changes [6].
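The ongoing monitoring described above can be pictured as a recurring check of deployed performance against the validated baseline. A minimal sketch follows, assuming the sponsor tracks an accuracy-style metric over time; the 5% degradation tolerance is an invented example, not a threshold from the guidance.

```python
# Illustrative post-deployment performance check for an AI model,
# assuming a single accuracy-style metric is tracked over time.
# The default 0.05 tolerance is a hypothetical value chosen for this
# sketch; real acceptance criteria would come from the sponsor's
# credibility assessment plan.

def needs_reassessment(baseline_accuracy: float,
                       recent_accuracies: list[float],
                       tolerance: float = 0.05) -> bool:
    """Flag the model for re-evaluation when recent performance drifts
    more than `tolerance` below the validated baseline."""
    if not recent_accuracies:
        return False  # nothing observed yet
    recent_mean = sum(recent_accuracies) / len(recent_accuracies)
    return (baseline_accuracy - recent_mean) > tolerance

# Example: a model validated at 0.90 accuracy that now averages ~0.81
# would be flagged, prompting the kind of data-drift and post-approval
# review the guidance contemplates.
print(needs_reassessment(0.90, [0.80, 0.82]))  # True
print(needs_reassessment(0.90, [0.89, 0.91]))  # False
```

In practice a monitoring plan would track multiple metrics and subpopulations, but the principle is the same: detect drift early and tie it to a documented response.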
The FDA has established criteria for assessing model influence and decision consequence [5], which are critical in determining the regulatory status of AI tools [5]. For instance [5], a digital twins model was allowed for exploratory analysis in a phase 2 trial [5], contingent on the representativeness of its training dataset and the use of explainable AI techniques [5]. The agency encourages early dialogue with sponsors regarding AI credibility assessments and its application in drug development for both human and animal products [2]. Early engagement with the FDA is beneficial for sponsors [1], allowing them to present their AI methodologies and receive feedback on potential challenges [1]. Programs such as pre-IND meetings and INTERACT sessions facilitate open dialogue [1], helping refine proposals before significant resources are committed [1].
AI tools that qualify as medical devices [5], particularly those intended for diagnosis [5], treatment [5], or management of diseases [5], will be subject to FDA oversight [5]. This includes software that predicts patient outcomes or assists in participant selection based on health data [5]. Conversely [5] [7], AI used for administrative tasks is less likely to be classified as a medical device [5], though caution is advised in validating such functions [5]. The FDA has begun scrutinizing AI applications in clinical trials [5], as evidenced by recent Warning Letters issued for non-compliance with investigational plans [5]. Researchers must ensure that any AI software used complies with FDA regulations to avoid serious enforcement actions [5], including civil penalties or device seizures [5]. The guidance necessitates rigorous risk assessments [7], data fitness standards [7], and model validation processes [7], presenting opportunities for innovations that enhance AI credibility and regulatory compliance [7].
Privacy implications are significant when using AI in clinical research [5], particularly concerning data sourced from electronic health records and social media [5]. Compliance with HIPAA and state privacy laws is essential [5], especially when AI tools process protected health information [5]. Researchers should also consider the potential risks associated with data collection methods [5], such as web scraping [5]. Informed consent forms may need to include disclosures about the use of AI tools in studies [5], particularly if the data will be used to train AI algorithms [5]. Ethical considerations [1], including fairness [1], bias mitigation [1], and privacy [1], are critical in AI-driven drug development [1]. Integrating ethical reviews into model design and governance helps maintain trust among regulators [1], patients [1] [2] [4] [5] [7], and the public [1] [2], safeguarding the company’s reputation [1].
Overall, careful planning and validation of AI tools are crucial to mitigate risks and ensure compliance with regulatory frameworks [5]. The FDA’s guidance reflects insights gained from a diverse range of stakeholders, emphasizing the importance of robust principles and best practices in AI management.

Stakeholders are encouraged to consider patenting innovations related to AI models [7], as the FDA’s transparency requirements may challenge the protection of trade secrets [7]. While some aspects of AI models may remain confidential [7], those used for decision-making will likely require disclosures that could compromise trade secret status [7]. Securing patent protection can help safeguard intellectual property while meeting FDA requirements [7]. Engaging with the FDA regarding specific contexts of use is advisable for sponsors uncertain about compliance with future requirements [7].

Strategic governance and resource allocation are essential for adopting AI in regulated environments [1], with board members needing to balance innovation opportunities against the need to maintain product integrity and patient safety [1], in line with the FDA’s risk-based approach [1]. The guidance also details the documentation and information required for marketing submissions of devices with AI-enabled software functions [6], including strategies to enhance transparency and mitigate bias [6]. Sponsors are instructed to clearly describe the AI components of their devices in marketing submissions and to disclose essential information in device labels [6]. Clear data management explanations are required to demonstrate how an AI-enabled device was developed and validated [6], including information on data collection [6], independence of development and test data [6], reference standards [6], and representativeness [5] [6].
Validation testing is encouraged to characterize model performance for its intended use [6], along with proactive performance monitoring and a performance monitoring plan in certain premarket submissions [6].
The FDA is inviting public comments on this draft guidance until April 7, 2025 [6], and will host a webinar on February 18, 2025 [6], to discuss the guidance related to medical devices [6].
Conclusion
The FDA’s draft guidance on AI in drug development marks a significant step in formalizing the integration of AI technologies in regulatory processes. By establishing a comprehensive framework for risk assessment, model credibility [1] [2] [4] [6] [7], and compliance with ethical and privacy standards, the FDA aims to ensure the safe and effective use of AI in drug development. This guidance not only provides clarity for stakeholders but also encourages innovation while maintaining rigorous standards for patient safety and product integrity. The ongoing dialogue between the FDA and industry stakeholders will be crucial in refining these guidelines and addressing future challenges in AI-driven drug development.
References
[1] https://avancer.co/our-insights/articles/articles/harnessing-artificial-intelligence-in-drug-and-biological-product-development-a-strategic-overview-of-the-fdas-draft-guidance-for-stakeholders
[2] https://www.fda.gov/news-events/press-announcements/fda-proposes-framework-advance-credibility-ai-models-used-drug-and-biological-product-submissions
[3] https://www.pharmavoice.com/news/fda-ai-draft-guidance-drug-development/737081/
[4] https://www.aiwire.net/2025/01/06/fda-proposes-framework-to-advance-credibility-of-ai-models-used-for-drug-and-bio-product-submissions/
[5] https://www.jdsupra.com/legalnews/fda-s-evolving-regulatory-framework-for-6284613/
[6] https://www.jdsupra.com/legalnews/fda-issues-draft-guidance-documents-on-8643682/
[7] https://natlawreview.com/article/ai-drug-development-fda-releases-draft-guidance