What does FDA think of AI?
February 13, 2024 by Mark Gardner
Speaking at CES 2024 earlier this year, U.S. Food and Drug Administration (FDA) Commissioner Robert Califf expressed significant concerns about the FDA’s ability to effectively regulate artificial intelligence (AI) in the healthcare sector. While acknowledging that AI presents substantial opportunities, both to improve patient outcomes and to give doctors and nurses more time to devote to patient care, he also stressed that effective regulation will be essential in the years to come.
A “Community of Entities” Will Be Necessary to Effectively Regulate AI in Healthcare
Crucially, however, Califf stated that regulating AI in the healthcare sector likely isn’t a job that the FDA can manage on its own. This, he said, is due in large part to the rate of innovation and the current lack of regulatory oversight, not just within the FDA but government-wide. As with several other recent technologies, the development of AI has far outpaced the development of the regulatory environment. At this point, the FDA (among other regulatory agencies) is already far behind, and catching up presents a substantial, and likely unachievable, undertaking. Recognizing this, Califf suggested that a “community of entities” will be needed to ensure that AI developers, and AI itself, address the risks inherent in relying on technology to make diagnostic and treatment decisions.
Part of the issue, said Califf, has to do with the current nature of artificial intelligence. While predictive AI platforms tend to focus on finding the best outcomes in immediate circumstances, “what happens immediately can often be the opposite of what’s good in the long term” in healthcare. Thus, while AI may be useful in certain healthcare-related applications, it may not be the best tool for making care-related decisions in doctors’ offices, clinics, hospitals and other healthcare facilities—at least as it exists today.
This limitation is a significant concern, and it underpins the FDA’s focus on regulating AI. While the FDA appears prepared to step in, knowing when it needs to step in presents a significant hurdle to effective regulation. There is also the matter of regulating effectively before the FDA’s efforts are rendered obsolete by new AI-driven technologies and new machine “learning.” Privacy is always a concern in healthcare as well, and AI presents its own unique set of privacy-related implications.
Ultimately, Califf suggested that a collaborative approach is likely to be necessary, at least for the foreseeable future. This includes not only collaboration between the FDA and the private sector, but also collaboration between human healthcare providers and AI. Until we reach a point where AI is capable of consistently delivering positive patient outcomes independently, doctors and nurses will need to review AI’s conclusions and continue to lend their expertise in all healthcare settings.
The 21st Century Cures Act and Related Guidance for AI/ML Software Developers and Medical Device Manufacturers
In 2016, Congress limited the types of software that are subject to FDA oversight by enacting Section 3060 of the 21st Century Cures Act, which added Section 520(o) to the Federal Food, Drug, and Cosmetic Act (FDCA) to exclude certain medical software functions, including clinical decision support (CDS) software, from the definition of a medical device. Under Section 520(o), CDS software is not subject to regulation as a medical device if it meets all of the following criteria:
The software is not intended to acquire, process, or analyze a medical image or a signal from an in vitro diagnostic device or a pattern or signal from a signal acquisition system;
The software is intended to display, analyze, or print medical information about a patient or other medical information, like clinical practice guidelines;
The software is intended to support or provide recommendations to a healthcare provider about prevention, diagnosis, or treatment of a disease or condition; and
The software is intended to enable healthcare providers to independently review the basis for the software’s recommendations so they do not primarily rely on these recommendations when making clinical diagnoses or treatment decisions.
Following the enactment of Section 3060, the FDA proposed draft guidance for CDS software in September 2019. In September 2022, however, the FDA issued final guidance on CDS software that differed significantly from its 2019 proposal. Among other things, the final guidance explains how the FDA interprets the statutory criteria that exclude a software product from the definition of a medical device; software that satisfies these criteria is what the FDA terms “Non-Device CDS.”
In March 2023, the FDA issued additional draft guidance for artificial intelligence/machine learning (AI/ML) enabled device software functions. This draft guidance builds on the FDA’s existing regulatory framework and seeks to clarify the types of modifications that should be included in a Predetermined Change Control Plan submitted to the FDA for review and approval. Notably, the FDA is also seeking to apply its proposed framework not only to AI/ML-enabled Software as a Medical Device (SaMD), but also to all AI/ML-enabled device software functions, including software functions that are part of, or that control, hardware medical devices. Thus, despite Section 3060’s limiting effect, AI software developers and medical device manufacturers are likely to face significant FDA oversight going forward.
Conclusion
Artificial intelligence is changing healthcare in the United States, and the FDA is currently in the process of determining how it can regulate the use of AI in healthcare effectively. As AI platforms continue to “learn” and evolve, the challenges facing the FDA are only going to continue to grow, and it will almost certainly rely heavily on industry participants to help ensure patient safety.
Contact the team at Gardner Law to learn more.