FDA Highlights the Promise of AI in Healthcare

July 16, 2024

On June 17, 2024, the U.S. Food and Drug Administration’s (FDA) Digital Health Center of Excellence (DHCE) published a blog article highlighting the promise of artificial intelligence (AI) in healthcare. In the article, the DHCE acknowledges AI’s potential to “significantly improve patient care and medical professional satisfaction and accelerate and advance research in medical device development and drug discovery,” while also “driv[ing] operational efficiency by enabling personalized treatments and streamlining health care processes.”

The publication of the DHCE’s blog article followed the FDA’s release of its guiding principles for transparency regarding the use of AI in healthcare, and specifically regarding machine learning-enabled medical devices. The DHCE’s article echoes the Agency’s guiding principles in stating that ensuring accuracy, reliability and equity should be among the top priorities for companies bringing AI into the healthcare field.

FDA: Transparency Is Essential for Bringing AI Into Healthcare

Three years ago, the FDA was part of an international working group that identified 10 guiding principles for good machine learning practice (GMLP). The FDA’s latest release emphasizes the importance of transparency, building on the GMLP and, in particular, on principles seven and nine:

  • GMLP 7: “Focus is placed on the performance of the human-AI team.”

  • GMLP 9: “Users are provided clear, essential information.”

In its guiding principles, the FDA defines “transparency” as “the degree to which appropriate information about [machine learning-enabled medical devices] . . . is clearly communicated to relevant audiences.” It goes on to list four key elements of effective transparency:

  • “Ensur[ing] that information that could impact risks and patient outcomes is communicated.”

  • “Consider[ing] the information that the intended user or audience needs and the context in which it’s used.”

  • “Us[ing] the best media, timing and strategies for successful communication.”

  • “Rel[ying] on a holistic understanding of users, environments and workflows.”

The DHCE’s blog article identifies transparency as one of several core concepts that it intends to address in greater detail going forward. Others include best practices, operational tools, quality assurance, accountability and risk management related to the use of AI models in healthcare. While highlighting the FDA’s progress to date in addressing the unprecedented challenges associated with AI in healthcare, the DHCE also acknowledges that there is still significant work to be done.

For companies and providers operating at the intersection of AI and healthcare, this means that effectively managing FDA regulatory compliance presents unique challenges as well. As innovation continues to outpace regulation, manufacturers and providers must ensure that they are making informed decisions about compliance, which is often easier said than done. While the GMLP and the FDA’s more recent guidance provide a certain amount of insight on issues like transparency, manufacturers and providers are ultimately responsible for ensuring that they take the necessary steps to protect patients and themselves.

Contact the Regulatory Team at Gardner Law

If you have questions about the FDA’s regulation of machine learning-enabled medical devices or any other aspect of AI in healthcare, we invite you to get in touch. Call 651-430-7150 or contact us online to schedule an appointment with Gardner Law today.