The Evolving FDA Regulatory Landscape of Artificial Intelligence

March 29, 2023



As artificial intelligence technology advances on a seemingly minute-by-minute basis, the U.S. FDA's regulatory approach continues to evolve in an effort to keep pace.

"The promise of artificial intelligence in medicine is to provide composite, panoramic views of individuals' medical data; to improve decision making; to avoid errors such as misdiagnosis and unnecessary procedures; to help in the ordering and interpretation of appropriate tests; and to recommend treatment." - Eric Topol, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again

Whether it's asking ChatGPT to rewrite your resume, getting behind the wheel of a self-driving vehicle, trusting the cybersecurity protocols of the cloud-storage platform to which you're saving your family vacation photos, or teeing off on the first tee of your local country club,[1] if you're reading this article, it is safe to assume that you interact with artificial intelligence and machine-learning (AI/ML) products on a daily basis.

Given the ubiquity with which AI/ML touches so many aspects of our lives, it should come as no surprise that the medical device industry has seen a rapid expansion of applications of AI/ML. Whether AI/ML functions as Software as a Medical Device (SaMD), or whether it is used in the development, clinical investigation, post-market data analysis, or quality control of medical products and their quality management systems, the proliferation of AI/ML in the healthcare industry is showing no signs of slowing down.

To ensure that an adequate regulatory framework exists to evaluate these products, the U.S. Food and Drug Administration (FDA) has been active in developing action plans, engaging industry, issuing guidance documents, and devoting increasing resources to the oversight of this growing discipline. This article summarizes FDA's activities to date and highlights considerations medical device manufacturers should keep in mind as they develop AI/ML technologies.

[1] Yes, the driver in your golf bag was likely developed using artificial intelligence.

AI/ML in Medical Devices


Artificial intelligence is defined as "a branch of computer science, statistics, and engineering that uses algorithms or models to perform tasks and exhibit behaviors such as learning, making decisions, and making predictions."[2] Machine learning is "a subset of AI that allows ML models to be developed by ML training algorithms through analysis of data, without models being explicitly programmed."[3] AI/ML-enabled medical devices are those products intended to treat, cure, prevent, mitigate, or diagnose disease that use AI/ML, in whole or in part, to achieve their intended medical purpose. AI/ML-enabled devices have the potential to reduce the cost of medical intervention, to improve efficiency and early-detection capabilities of diagnostics, and to improve health outcomes and evaluation of patient progress through real-time analytics.
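For readers who prefer a concrete illustration of that definition, the minimal sketch below fits a classifier to synthetic data rather than hand-coding a decision rule, which is the essence of "learning from data without being explicitly programmed." The data, feature count, and model choice are arbitrary assumptions used purely for illustration and do not correspond to any cleared device.

```python
# Minimal illustration of "machine learning" in the sense quoted above: the
# decision rule is fitted from labeled data by a training algorithm rather
# than hand-coded by a programmer. Synthetic data only; names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Two made-up clinical measurements and a synthetic "condition present" label.
features = rng.normal(size=(n, 2))
labels = (features[:, 0] + 0.5 * features[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)

model = LogisticRegression().fit(X_train, y_train)   # the "training algorithm"
print("Learned coefficients:", model.coef_)          # parameters inferred from data, not programmed
print("Held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```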

FDA, which regulates and grants marketing authorization for medical devices via one of three premarket pathways (premarket approval (PMA), De Novo classification, or 510(k) clearance), approved its first AI/ML-enabled medical device in 1995 with the PMA approval of the PAPNET Testing System (P940029), a semi-automated test indicated to aid in the rescreening of cervical smears previously reported as negative, intended to detect cervical epithelial abnormalities and to support clinician decision-making. More than two decades later, in 2018, FDA granted De Novo classification to the first autonomous AI/ML-powered diagnostic platform, the IDx-DR software algorithm, intended for early detection of diabetic retinopathy.

Today, FDA maintains a periodically updated database of AI/ML-enabled medical devices to showcase the growing number of such devices authorized by the agency. As of the most recent update, 521 AI/ML-enabled medical devices have been granted marketing authorization by FDA, with the vast majority falling into the radiology and cardiovascular device categories.

[2] Shawn Forrest, CDRH Digital Health Center of Excellence, Artificial Intelligence/ Machine Learning (AI/ML)-Enabled Medical Devices: Tailoring a Regulatory Framework to Encourage Responsible Innovation in AI/ML, https://www.fda.gov/media/160125/download (Adapted from IMDRF Artificial Intelligence Medical Devices Key Terms & Definitions Final document, posted May 9, 2022 at: https://www.imdrf.org/documents/machine-learning-enabled-medical-devices-key-terms-and-definitions).

[3]  Id.

FDA's Regulatory Approach to AI/ML


FDA is on record as making AI/ML oversight and rulemaking a focus area, with the most significant efforts emanating from the Center for Devices and Radiological Health (CDRH) Digital Health Center of Excellence, originally launched in 2020.

FDA's most significant foray into the AI/ML arena came in April 2019, when the agency issued the "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)," a discussion paper and request for feedback issued to industry. In the discussion paper, FDA elicited stakeholder feedback on a number of topics, including AI/ML-enabled SaMD premarket review, regulatory evaluation of post-market modifications to AI/ML-enabled SaMD, and quality considerations, including Good Machine Learning Practices (GMLP) principles. FDA stressed the importance of consumer transparency and effective ongoing real-world performance monitoring for AI/ML-enabled devices.

In response to a significant volume of stakeholder feedback on the 2019 proposed framework, FDA published its "Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan" in January 2021. In the action plan, FDA committed to five pillars of its ongoing approach to regulating AI/ML-based SaMD:

  1. Tailored regulatory framework for AI/ML-based SaMD. FDA recognized the need for, and committed to the promulgation of guidance to support, a new paradigm for regulatory oversight of AI/ML-enabled devices, most notably with regard to postmarket change control.
  2. GMLP development. FDA committed to continuing efforts, both domestically and in concert with the international regulatory community, toward the development of harmonized GMLP principles that will guide product development and maintenance.
  3. Patient-centered approach incorporating transparency to users. Drawing on lessons from the agency's Patient Engagement Advisory Committee meeting on AI/ML-enabled devices, FDA stressed the need for, and committed to creating, forums for the dissemination of information about AI/ML-enabled devices to the general public. Information relevant to FDA and patient-engagement stakeholders includes information about the source of the data used in algorithm training and information to ensure the benefits, risks, and limitations of AI/ML-enabled devices are easily understood by all users.
  4. Regulatory science methods related to algorithm bias and robustness. FDA acknowledged that, while not unique to AI/ML-enabled devices, there are unique concerns related to bias and generalizability of use associated with these devices and the data forming the basis for their functions. FDA committed to supporting regulatory science research methods to ensure factors such as race, ethnicity, and socio-economic status are taken into account in the development and maintenance of AI/ML-enabled devices.
  5. Incorporation of real-world performance metrics. As real-world data continues to be an important topic in medical device development and regulation, FDA acknowledged that its importance is only heightened by the continuously learning, real-world-data-driven algorithms of AI/ML-based devices. FDA committed to piloting real-world performance monitoring on a voluntary basis in order to develop uniform real-world performance principles applicable to AI/ML-enabled device oversight (a rough sketch of what such monitoring might look like follows this list).
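To make the fifth pillar a bit more concrete, the following is a minimal, hypothetical sketch of real-world performance monitoring: it computes a rolling sensitivity over consecutive blocks of post-deployment cases and flags any window that falls below an assumed acceptance threshold. The window size, threshold, and data are illustrative assumptions, not values drawn from any FDA document.

```python
# Hypothetical real-world performance check: rolling sensitivity over
# post-deployment cases, flagging windows below an assumed threshold.
import numpy as np

WINDOW = 200             # cases per monitoring window (illustrative)
MIN_SENSITIVITY = 0.85   # acceptance threshold (illustrative, not from FDA)

def rolling_sensitivity(y_true: np.ndarray, y_pred: np.ndarray, window: int = WINDOW):
    """Yield (window_index, sensitivity) for consecutive blocks of cases."""
    for i in range(0, len(y_true) - window + 1, window):
        t, p = y_true[i:i + window], y_pred[i:i + window]
        if t.sum() == 0:
            continue  # no positive cases to evaluate in this window
        yield i // window, (p[t == 1] == 1).mean()

# Synthetic post-market data: adjudicated outcomes and device outputs.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=2000)
y_pred = np.where(rng.random(2000) < 0.9, y_true, 1 - y_true)  # ~90% agreement

for idx, sens in rolling_sensitivity(y_true, y_pred):
    status = "OK" if sens >= MIN_SENSITIVITY else "REVIEW"
    print(f"window {idx}: sensitivity={sens:.2f} [{status}]")
```

A real monitoring program would, of course, draw on actual device outputs and adjudicated ground truth, and the metrics and thresholds would come from the device's own performance specifications.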

Since announcing its 2021 action plan, FDA has issued a number of guidance documents aimed at fulfilling the commitments related to the regulatory oversight of AI/ML-enabled devices contained therein. Key guidance documents are discussed below.

This draft guidance, "Assessing the Credibility of Computational Modeling and Simulation in Medical Device Submissions," addresses the use of computational modeling and simulation in medical device submissions, proposing a framework by which this type of information can be gathered and presented in a manner that ensures reliability and credibility sufficient to either: (1) support regulatory approval or clearance; or (2) support the use of computational modeling and simulation within the medical device software itself. The draft guidance provides a generalized framework for assessing the credibility of computational modeling in a hypothetical nine-step process. The principles of this guidance can apply both to the use of AI/ML in computational modeling and simulation activities associated with product development and to supporting the premarket approval or clearance of computational modeling and simulation functions of AI/ML-enabled devices.
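Setting the nine-step framework itself aside, one recurring form of credibility evidence is quantitative comparison of model outputs against comparator data. The toy sketch below illustrates that idea under assumed values and an assumed 10% acceptance criterion; it is not the framework from the draft guidance.

```python
# Toy validation-evidence calculation: compare simulation predictions against
# bench measurements and report error metrics versus an assumed acceptance
# criterion. All values, including the 10% criterion, are illustrative only.
import numpy as np

bench_measurements = np.array([1.02, 0.97, 1.10, 0.88, 1.05])   # hypothetical bench data
simulation_outputs = np.array([1.00, 0.95, 1.15, 0.90, 1.01])   # hypothetical model predictions

abs_error = np.abs(simulation_outputs - bench_measurements)
rel_error = abs_error / np.abs(bench_measurements)

print(f"max absolute error: {abs_error.max():.3f}")
print(f"max relative error: {rel_error.max():.1%}")
print("meets assumed 10% criterion:", bool((rel_error <= 0.10).all()))
```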

This draft guidance, "Computer Software Assurance for Production and Quality System Software," proposes a risk-based approach to computer software assurance activities for features, functions, and operations of production and quality system software that present a high process risk (and, accordingly, medical device risk). The draft guidance provides insights into considerations manufacturers should account for in the qualification and validation of quality system and manufacturing software, as well as the documentation rigor that should be employed in these activities. As AI/ML-enabled product development becomes increasingly mainstream, these risk-based approaches offer insight into satisfactory means of testing and documenting such systems.
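As a rough illustration of the risk-based idea, the sketch below pairs hypothetical production and quality system software features with a process-risk level and a correspondingly rigorous assurance activity. The feature names, risk categories, and activities are assumptions for illustration, not terms prescribed by the draft guidance.

```python
# Hypothetical risk-based assurance plan: higher process risk -> more rigorous
# testing and documentation. Feature names and categories are illustrative.
from dataclasses import dataclass

ASSURANCE_BY_RISK = {
    "high":   "scripted testing with detailed documented evidence",
    "medium": "unscripted exploratory testing with a summary record",
    "low":    "vendor documentation review and periodic spot checks",
}

@dataclass
class SoftwareFeature:
    name: str
    process_risk: str  # "high" | "medium" | "low"

features = [
    SoftwareFeature("nonconformance record auto-routing", "high"),
    SoftwareFeature("training completion dashboard", "low"),
    SoftwareFeature("ML-assisted complaint triage suggestions", "medium"),
]

for f in features:
    print(f"{f.name}: {ASSURANCE_BY_RISK[f.process_risk]}")
```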

The final Clinical Decision Support (CDS) Software guidance was received by industry as a significant departure from the original draft guidance of the same name (originally issued in September 2019). Centrally, the final guidance provides the rubric by which FDA will determine whether a software function is a CDS function (exempt from FDA oversight) or a medical device function (requiring the manufacturer to comply with FDA regulations and premarket notification or approval requirements). Under the final guidance, a software function is excluded from the statutory definition of a medical device only if it meets all four of the following criteria:

  1. The software function is not intended to acquire, process, or analyze a medical image, a signal from an in vitro diagnostic device, or a pattern or signal from a signal acquisition system;
  2. The software function is intended for the purpose of displaying, analyzing, or printing medical information;
  3. The software function is intended for the purpose of supporting or providing recommendations to a healthcare practitioner (HCP) about prevention, diagnosis, or treatment; and
  4. The software function is intended for the purpose of enabling the HCP to independently review the basis for the recommendations, such that the HCP does not rely primarily on any software recommendation to make a diagnosis or treatment decision.

Developers of software functions, particularly those powered by AI/ML that previously assessed their products to be exempt from FDA oversight, should carefully evaluate each software function to ensure their product does not inadvertently assume an intended use that classifies it as a medical device.
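Because exclusion from the device definition requires satisfying all four criteria, the screening logic reduces to a simple conjunction. The sketch below is an informal checklist for organizing that analysis, not a regulatory determination; the statutory text and the final guidance control, and the field names here are our own shorthand.

```python
# Informal screening checklist for the four CDS criteria described above.
# All four must be True for a software function to fall outside the device
# definition; any False answer means the function should be assessed as a
# potential medical device. Illustration only, not legal or regulatory advice.
from dataclasses import dataclass

@dataclass
class CdsCriteria:
    not_acquiring_or_analyzing_images_or_signals: bool  # criterion 1
    displays_or_analyzes_medical_information: bool      # criterion 2
    supports_hcp_recommendations: bool                   # criterion 3
    hcp_can_independently_review_basis: bool             # criterion 4

    def likely_non_device_cds(self) -> bool:
        return all([
            self.not_acquiring_or_analyzing_images_or_signals,
            self.displays_or_analyzes_medical_information,
            self.supports_hcp_recommendations,
            self.hcp_can_independently_review_basis,
        ])

# Example: an AI/ML triage function whose reasoning is opaque to the clinician
# fails criterion 4 and should be evaluated as a device function.
example = CdsCriteria(True, True, True, hcp_can_independently_review_basis=False)
print("Likely non-device CDS:", example.likely_non_device_cds())
```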

The landscape of FDA oversight of AI/ML-enabled medical devices is dynamic, and the only constant industry should expect in the coming years is change. Evidence of this change can be found in the recently announced CDRH Proposed Guidances for Fiscal Year 2023. Draft guidance topics for the coming fiscal year most notably include "Marketing Submission Recommendations for A Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions." Additional guidances related to cybersecurity, the content of premarket submissions for device software functions, and the evaluation of sex-specific and gender-specific data in medical device clinical studies are sure to impact manufacturer activities related to the development, regulatory submission, and maintenance of AI/ML-enabled medical devices.
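Although the change control plan draft guidance had not yet issued at the time of writing, such a plan is generally expected to pre-specify the modifications a manufacturer intends to make and how each will be validated. The sketch below shows one hypothetical way a manufacturer might capture that intent as structured data; every field and value is an illustrative assumption, not an FDA template.

```python
# Hypothetical structure for a predetermined change control plan: each
# pre-specified modification type is paired with validation activities and
# acceptance criteria. Field names and values are illustrative only.
change_control_plan = {
    "device": "Example AI/ML-enabled triage software (hypothetical)",
    "pre_specified_modifications": [
        {
            "description": "Retrain model on additional labeled cases from existing data sources",
            "validation": "Re-run locked test set; compare sensitivity/specificity to baseline",
            "acceptance_criteria": "Sensitivity and specificity within pre-set bounds of cleared performance",
        },
        {
            "description": "Recalibrate output threshold for a new scanner model",
            "validation": "Site-level verification study on the new scanner",
            "acceptance_criteria": "AUC not lower than a pre-specified floor",
        },
    ],
    "out_of_scope_changes": [
        "New intended use or new patient population",
        "New input data type not described in the plan",
    ],
}

for mod in change_control_plan["pre_specified_modifications"]:
    print(f"- {mod['description']} -> {mod['acceptance_criteria']}")
```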

AI/ML-Enabled Product Manufacturer Considerations

AI/ML regulatory principles are seemingly in greater flux, and progressing at a greater rate, than any other area of FDA regulatory oversight. In a discipline this dynamic, the most important step manufacturers can take is to pay attention to the shifting sands: monitor FDA's guidance promulgation and rulemaking to ensure you're up to date on current requirements; keep tabs on FDA approvals and clearances of AI/ML-enabled devices to gain insight into FDA's evolving approach to AI/ML regulatory oversight; and track FDA enforcement actions to help identify potential landmines you can avoid in your own product development.
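For teams that want to automate the "keep tabs on FDA clearances" piece, the hedged sketch below queries openFDA's public 510(k) endpoint for recent decisions. It assumes the device/510k.json endpoint and its decision_date, k_number, device_name, and applicant fields behave as documented; note that openFDA does not flag AI/ML-enabled devices specifically, so further screening of the results would be needed. Verify against the current openFDA documentation before relying on it.

```python
# Hedged sketch: pull recent 510(k) clearance records from openFDA.
# Assumes the public device/510k.json endpoint and the field names noted above.
import requests

OPENFDA_510K = "https://api.fda.gov/device/510k.json"

params = {
    "search": "decision_date:[2023-01-01 TO 2023-03-01]",  # assumed date-range syntax
    "sort": "decision_date:desc",
    "limit": 10,
}
resp = requests.get(OPENFDA_510K, params=params, timeout=30)
resp.raise_for_status()

for record in resp.json().get("results", []):
    print(record.get("decision_date"), record.get("k_number"),
          record.get("device_name"), "-", record.get("applicant"))
```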

In addition to continually staying abreast of FDA activity in this space, there are several additional practices manufacturers of AI/ML-enabled devices and developers relying on AI/ML in product development can employ:

  • Clearly define the role of AI/ML in your medical device

Early AI/ML functions were limited to "locked" algorithms: algorithms with static code, unchanged since programming (and, likely, regulatory approval or clearance), that require proactive developer input to change. More complex AI/ML-enabled systems may employ adaptive algorithms that "learn" and self-adapt based on data inputs and analysis. Evaluated under a risk-based approach, the regulatory considerations for these two types of AI/ML systems can differ significantly. Defining the role and scope of AI/ML in a medical device early in product development helps manufacturers architect their quality and risk management approaches accordingly.
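To make the locked-versus-adaptive distinction concrete, the toy sketch below trains two identical models, freezes one, and lets the other continue updating on new field data; it is an illustration under invented data, not a description of any actual device.

```python
# Toy contrast between a "locked" algorithm (parameters frozen at release)
# and an "adaptive" one that continues to update from field data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
X0 = rng.normal(size=(500, 3))
y0 = (X0[:, 0] > 0).astype(int)

# Locked: trained once; any later change requires a deliberate developer update
# (and, depending on the change, a regulatory submission).
locked = SGDClassifier(random_state=0).fit(X0, y0)

# Adaptive: continues to learn from new field data after deployment, so its
# behavior can drift from what was originally evaluated.
adaptive = SGDClassifier(random_state=0).fit(X0, y0)
X_new = rng.normal(size=(100, 3))
y_new = (X_new[:, 0] > 0.2).astype(int)   # slightly shifted relationship
adaptive.partial_fit(X_new, y_new)

print("Max coefficient change after the adaptive update:",
      float(np.abs(adaptive.coef_ - locked.coef_).max()))
```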

  • Build GMLP into your quality processes from the infancy of product development

Medical device manufacturers are already familiar with Current Good Manufacturing Practice and Good Laboratory Practice regulations. Now there is a new good practices paradigm in town: Good Machine Learning Practices. While there is no formal GMLP regulation or harmonized standard to adhere to, rapid international development of GMLP standards is underway. In October 2021, FDA, in coordination with Health Canada and the UK Medicines and Healthcare products Regulatory Agency (MHRA), issued Good Machine Learning Practice for Medical Device Development: Guiding Principles. The document provides the guiding principles the agencies jointly believe will "help promote safe, effective, and high-quality medical devices that use [AI/ML]." Manufacturers of AI/ML-enabled medical devices are encouraged to familiarize themselves with these principles and to implement quality processes that align with them.

  • Develop total product lifecycle approaches that clearly define post-market iterative activities

The traditional paper-document-based regulatory approach does not clearly lend itself to AI/ML-enabled medical device development processes, raising the question: do these novel devices warrant novel regulatory approaches? FDA's Digital Health Software Precertification (Pre-Cert) Pilot Program, completed in September 2022, was aimed at informing the development of adaptive regulatory approaches, many of which may be applicable to the development of AI/ML-enabled devices, particularly adaptive ones. While FDA acknowledged that legislative change may be required to implement significant changes, the agency's conclusions demonstrate openness to new paradigms in total product lifecycle processes.

Additionally, various centers within FDA have communicated openness to unique regulatory and documentation approaches. For example, at a recent symposium, Center for Drug Evaluation and Research (CDER) Office of New Drug Products director Lawrence Yu expressed openness to the use of cloud-based regulatory assessment platforms, which could facilitate AI/ML-enabled search functions to streamline regulatory assessment and review and promote greater collaboration between FDA and industry.

Engaging FDA on unique approaches to product development, iteration, and quality system management early in product development will be key to garnering regulatory support.

  • Make your voice heard in guidance development and rulemaking

Pursuant to the commitments made in its 2021 action plan, FDA has regularly solicited stakeholder feedback as it develops and promulgates new regulations and guidance. Just last month, CDER issued a discussion paper, "Artificial Intelligence in Drug Manufacturing," seeking comment on policy development related to AI/ML-enabled manufacturing methods for pharmaceuticals. FDA has demonstrated openness to industry feedback on AI/ML-related topics to date, and these feedback forums provide stakeholders with an opportunity to influence policymaking. Comments on the discussion paper are due by May 1, 2023.

***

While it may seem like we've witnessed a lifetime's worth of action regarding AI/ML in healthcare in just the past several years, we undoubtedly have only glimpsed the tip of the iceberg. Beyond the regulatory issues related to approval, clearance, and post-market maintenance of AI/ML-powered medical devices, the proliferation of AI/ML products presents a litany of other new and evolving issues. Novel questions of tort liability may arise as AI/ML-enabled clinical decision-making takes root. Issues of privacy and cybersecurity, including the treatment of the volumes of data (and their derivatives) collected by AI/ML-enabled systems, will be presented. Ethical issues, notably those surrounding biases related to sex/gender, ethnicity, age, and other variables in algorithm inputs and outputs, will continue to require attention. With the significant number of unanswered questions that remain, and with recent history demonstrating the rapid change of the AI/ML regulatory environment, manufacturers are encouraged to continue to consume new information as it becomes available and to participate in the discussions driving regulatory change.

Don't know where to start with your artificial intelligence or machine-learning technology?

Contact us. Our firm has extensive experience supporting the pre- and post-market legal and regulatory activities associated with commercializing AI/ML medical devices.
