On January 6, 2025, the US Food and Drug Administration (FDA) issued its draft guidance, Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products. The draft guidance provides recommendations on how FDA plans to apply a risk-based credibility assessment framework to evaluate the use of artificial intelligence (AI) models that produce information or data intended to support regulatory decision-making regarding the safety, effectiveness, or quality of drugs and biological products.

FDA’s draft guidance: Applicable to many, but not all, types of AI uses for drugs and biologics

This draft guidance provides recommendations for the use of AI in the product life cycle of drugs and biologics, where the specific objective of using AI is to produce information or data that will support regulatory decision-making regarding safety, effectiveness, or quality. The reference to “regulatory decision-making” in this draft guidance covers both regulatory determinations made by FDA concerning an application or supplement and actions taken by sponsors and other interested parties to conform with FDA requirements, such as compliance with good manufacturing practices. The recommendations in this draft guidance also apply to regulatory decision-making for medical devices intended to be used with drugs. Read more about FDA’s recent draft guidance on AI-enabled device software functions in our previous alert.

This draft guidance applies to a wide range of AI applications, such as:

  • Using AI to support a specific development program under an Investigational New Drug Application
  • Using AI in novel clinical trial designs
  • Qualifying a new drug development tool that uses AI
  • Using AI-enabled digital health technology in the context of a drug development program
  • Using AI in pharmacovigilance
  • Using AI in pharmaceutical manufacturing
  • Using model-informed drug development that incorporates AI
  • Using AI in a study using real-world data to produce real-world evidence

While the draft guidance is broad, FDA carved out two types of AI applications that it does not cover. Specifically, the draft guidance does not address AI models used in drug discovery, or AI models used to streamline operations (eg, drafting a regulatory submission) that do not impact patient safety, drug quality, or the reliability of results from nonclinical or clinical studies.

Overview of FDA’s risk-based credibility assessment framework

FDA proposes utilizing a risk-based credibility assessment framework to establish and evaluate the credibility of AI model outputs for a particular context of use (COU). Ultimately, the purpose of this framework is to establish trust in an AI model by collecting credibility evidence regarding its performance for a particular COU.

FDA outlines the risk-based credibility assessment framework in seven general steps:

Step 1: Define the question of interest that will be addressed by the AI model. The proposal should define the specific question, decision, or concern being addressed by the AI model.

Step 2: Define the COU for the AI model. The proposal should describe in detail what will be modeled and how model outputs will be used. The COU should note whether other information (eg, animal or clinical studies) will be used in conjunction with the model output to answer the question of interest, or whether the AI model output alone will answer it.

Step 3: Assess the AI model risk. The sponsor or interested party should evaluate the risk posed by the AI model. The draft guidance recommends consideration of two factors, “model influence” and “decision consequence,” which together inform the model risk. Model risk includes the risk that the AI model’s output leads to an incorrect decision, which may in turn produce an adverse outcome.

  • In general, FDA notes that AI models and use cases in which AI is used to make a final determination on the question of interest without human intervention will be considered higher risk. This is particularly true where the AI output would impact safety, such as identifying which patients require certain types of monitoring or medical interventions during a clinical investigation.
  • Lower-risk areas might include scenarios where the intent and design of the AI require a human to review and evaluate the output before making a decision. For example, an AI model might identify manufacturing batches that are out of specification but require a human to review, confirm, and document that finding before a corrective and preventive action (CAPA) plan or other corrective action is implemented. A sketch illustrating how these two factors might combine follows below.
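
To make the two-factor risk assessment concrete, here is a minimal illustrative sketch in Python. The three-level scales and the rule for combining them are our own assumptions, chosen purely for illustration; the draft guidance describes the factors qualitatively and does not prescribe any scoring scheme.

```python
# Illustrative sketch only -- the FDA draft guidance does not prescribe this
# scoring scheme. The three-level scales and the combination rule are
# assumptions chosen to show how "model influence" and "decision consequence"
# might jointly inform model risk.

from enum import Enum

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def model_risk(model_influence: Level, decision_consequence: Level) -> Level:
    """Combine the two factors into a qualitative model risk rating.

    model_influence: how much the AI output contributes to answering the
        question of interest (HIGH if the model output alone drives the
        decision; LOW if other evidence or human review also informs it).
    decision_consequence: the significance of an adverse outcome if the
        decision turns out to be wrong (eg, patient safety impact).
    """
    # Hypothetical rule: average the two factor levels.
    score = (model_influence.value + decision_consequence.value) / 2
    if score >= 2.5:
        return Level.HIGH
    if score >= 1.5:
        return Level.MEDIUM
    return Level.LOW

# An AI model that autonomously flags patients for safety monitoring during a
# trial (high influence, high consequence) rates HIGH under this rule, while a
# batch-screening model whose output a human must confirm before any CAPA is
# taken (lower influence) rates lower.
print(model_risk(Level.HIGH, Level.HIGH).name)   # HIGH
print(model_risk(Level.LOW, Level.MEDIUM).name)  # MEDIUM
```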

Step 4: Develop a plan to establish the credibility of AI model output within the COU. The plan should include the sponsor’s proposed credibility assessment activities based on the question of interest, COU, and model risk. These activities should generally be tailored to the specific COU and commensurate with model risk. The plan should include, as applicable, descriptions of the model, the data used to develop it, the training and tuning data, the test data used to evaluate it, and the model evaluation process, as well as other elements further detailed in the draft guidance.
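
As an organizational aid, the plan components listed above could be captured in a simple checklist structure, sketched below. The structure and field names are hypothetical; they paraphrase the draft guidance’s list and are not regulatory terminology.

```python
# Hypothetical checklist structure for a credibility assessment plan. Field
# names paraphrase the components the draft guidance lists; they are
# illustrative, not prescribed terminology.

from dataclasses import dataclass, fields

@dataclass
class CredibilityAssessmentPlan:
    question_of_interest: str   # Step 1: the question the model will address
    context_of_use: str         # Step 2: what is modeled, how outputs are used
    model_risk: str             # Step 3: eg, "low", "medium", or "high"
    model_description: str      # architecture, inputs, and outputs
    development_data: str       # data used to develop the model
    training_tuning_data: str   # data used to train and tune the model
    test_data: str              # held-out data used to evaluate the model
    evaluation_process: str     # metrics and acceptance criteria

def missing_sections(plan: CredibilityAssessmentPlan) -> list[str]:
    """Return the names of any plan sections that were left empty."""
    return [f.name for f in fields(plan) if not getattr(plan, f.name).strip()]
```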

Step 5: Execute the plan. This step involves carrying out the activities set forth in the credibility assessment plan. For this step, the draft guidance notes that FDA engagement is important to set expectations regarding the appropriate activities for the proposed model and address potential challenges.

Step 6: Document the results of the credibility assessment plan and discuss deviations from the plan. A report should be created documenting the results of the credibility assessment plan, including a description of the results from the previous steps. The credibility assessment report is intended to provide information that establishes the credibility of the AI model for the COU and to describe any deviations from the credibility assessment plan. During early consultation with FDA, the sponsor is advised to discuss with FDA whether the report needs to be submitted and, if so, when and where to submit it. The specific timing and methodology expectations are not set forth in the draft guidance.

Step 7: Determine the adequacy of the AI model for the COU. If, at the end of this process, model credibility is not adequately established for the COU, the draft guidance discusses five potential approaches that could be taken:

  1. The sponsor may introduce additional evidence to supplement the AI model in answering the question of interest (ie, downgrade the model influence)
  2. The sponsor may increase the rigor of the credibility assessment activities or augment the model’s output by incorporating additional development data
  3. The sponsor may implement appropriate risk-mitigating controls
  4. The sponsor may modify the modeling approach, or
  5. The sponsor may reject the model’s COU or make further modifications in an iterative manner.

Life cycle maintenance may be necessary for some AI applications

FDA recognizes in this draft guidance that some AI models may evolve over time or across deployment environments, either incidentally or deliberately, to adapt to new data or conditions. Such changes may affect the performance and suitability of the AI model for its intended COU. The draft guidance states that the life cycle maintenance plan should describe the planned activities to monitor and ensure the model’s performance and its fitness for use throughout its life cycle for the COU. This concept is especially relevant when AI is used in the pharmaceutical manufacturing process, where changes are expected to be evaluated as part of the manufacturer’s existing change management system.
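
For intuition, ongoing monitoring under a life cycle maintenance plan might resemble the following minimal sketch. The metric, threshold, and escalation step are assumptions for illustration only; in practice, these would be defined by the plan and the manufacturer’s change management system.

```python
# Minimal, illustrative monitoring check -- not an FDA-prescribed procedure.
# Assumes the life cycle maintenance plan defines an accuracy-style metric
# and a predefined acceptance threshold for the model's COU.

def check_model_fitness(predictions: list[int],
                        ground_truth: list[int],
                        acceptance_threshold: float = 0.95) -> bool:
    """Compare recent model performance against a predefined acceptance
    criterion for the COU; a failing check would trigger the plan's
    change-management and reporting steps rather than silent continued use."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    accuracy = correct / len(predictions)
    if accuracy < acceptance_threshold:
        # In practice: investigate, document, and report performance-impacting
        # changes in accordance with regulatory requirements.
        print(f"Accuracy {accuracy:.1%} below {acceptance_threshold:.0%}; "
              "escalate per the maintenance plan.")
        return False
    return True
```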

FDA expects certain changes that impact the model’s performance to be reported to FDA in accordance with regulatory requirements. FDA also expects manufacturers to maintain detailed life cycle maintenance plans as a component of the manufacturer’s pharmaceutical quality system and to include a summary in the marketing application.

While the life cycle maintenance plan is not an exact replica of the predetermined change control plan (PCCP) requirements for AI-enabled medical devices, the principles are similar. Applicants and other interested parties may want to review and consider FDA’s guidance and policies related to medical device PCCPs when developing life cycle maintenance plans, especially where the AI is used in the development of drug-device combination products.

The importance of early engagement with FDA

The draft guidance acknowledges that the use of AI in the drug product life cycle is broad and rapidly evolving, and strongly encourages sponsors and other interested parties to engage early with FDA to discuss their specific AI model development and use plans.

Early engagement may help establish expectations regarding the appropriate credibility assessment activities based on model risk and COU, and identify and mitigate potential challenges. These interactions may also facilitate timely and efficient review of the AI model and its supporting evidence by the Agency.

Further, the draft guidance recommends that, when engaging with the Agency, sponsors and other interested parties provide information that establishes the credibility of the AI model. The draft guidance notes that the level of detail and documentation required regarding the AI model and its credibility assessment may vary depending on the model risk and the COU. For certain lower-risk models, FDA may request minimal information in the categories described above; for higher-risk models, FDA may request extensive information in those categories, as well as additional information, as applicable, depending on the COU.

“Whether, when, and where” to submit a credibility assessment report

Finally, the draft guidance recommends that sponsors and other interested parties discuss with FDA “whether, when, and where” to submit the credibility assessment report to the Agency.

The credibility assessment report may, as applicable, be (1) a self-contained document included as part of a regulatory submission or in a meeting package, depending on the engagement option, or (2) held and made available to FDA upon request (eg, during an inspection). The draft guidance also contains a detailed chart describing different options other than formal meetings for engaging with FDA on issues related to AI model development and use, depending on the model risk and the COU.

Next steps

This draft guidance aims to cover a wide spectrum of AI applications that could be used throughout the product development life cycle of a drug or biological product. The risk-based credibility assessment framework is a helpful starting point, but it still leaves considerable uncertainty. For instance, the draft guidance expressly leaves the applicability, timing, and submission methodology of the credibility assessment plan and report to the discretion of FDA. As FDA gains more experience with AI models, it may refine and streamline the process.

Stakeholders who wish to provide feedback on the draft guidance may submit comments through April 7, 2025.

DLA Piper is here to help

DLA Piper’s team of FDA and AI lawyers and data scientists assists organizations in navigating the complex workings of their AI systems to help ensure compliance with current and developing regulatory requirements. We continuously monitor AI-related updates and developments and their impact on industry across the world.

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

Gain insights and perspectives that will help shape your AI strategy through our newly released AI ChatRoom series.

For further information or if you have any questions, please contact any of the authors.


