Event Recap: AI in Life Sciences Panel Discussion
November 17, 2025
Understanding AI Definitions Across Jurisdictions
The opening session of Navigating What’s Next: AI, Compliance, and Regulation in Life Sciences examined how artificial intelligence is defined and regulated across major global markets. The discussion, moderated by Nathan Downing, Managing Attorney at Gardner Law, featured Paul Rothermel, Senior Attorney at Gardner Law, and Felicity “Flick” Fisher and Oliver Süme, Partners at Fieldfisher.
Panelists began by exploring how the United States, United Kingdom, and European Union have each taken distinct approaches to AI oversight. Süme outlined how the E.U.'s AI Act establishes a comprehensive, risk-based framework that classifies AI systems by risk level. High-risk systems, such as those incorporated into medical devices, will require conformity assessments, technical documentation, and post-market monitoring aligned with the Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR).
Fisher described the U.K.’s contrasting approach, which focuses on high-level principles rather than prescriptive obligations. The U.K. government has empowered the Medicines and Healthcare products Regulatory Agency (MHRA) to guide industry implementation through sector-specific rules. This approach, she explained, aims to balance innovation with patient safety while avoiding duplication of the E.U.’s regulatory model.
Rothermel addressed the fragmented U.S. landscape, where AI governance remains distributed across state and sectoral laws. California’s automated decision-making regulations under the CCPA and CPRA, along with Colorado’s AI Act, are shaping expectations for risk assessments, transparency, and consumer notice. Without a unified federal law, developers must interpret and integrate obligations from privacy, consumer protection, and medical product frameworks.
“Without federal legislation on the horizon, U.S. AI regulation is likely to fall to the states and remain a complex patchwork, making strong AI governance all the more critical.” – Paul Rothermel, Senior Attorney
Risk-Based Regulation and Medical Device Integration
Panelists emphasized that AI regulation is increasingly converging with existing quality and safety standards in the life sciences sector. Süme explained that under the AI Act, medical devices containing AI functions will automatically be treated as high-risk systems. Manufacturers must ensure that AI documentation, data governance, and human oversight processes meet both AI Act and MDR requirements.
Fisher added that compliance readiness will require cross-functional coordination among regulatory, data science, and quality teams. "AI systems evolve continuously," she noted, "so conformity assessments and post-market monitoring must evolve too." She encouraged companies to establish internal AI governance programs early, particularly those that plan to integrate adaptive algorithms into diagnostic or therapeutic applications.
Downing highlighted that the U.S. Food and Drug Administration (FDA) has adopted a similar lifecycle mindset. The agency’s Artificial Intelligence and Machine Learning (AI/ML) Software as a Medical Device (SaMD) Action Plan promotes transparency, human factors validation, and ongoing learning through real-world performance monitoring. Rothermel added that FDA’s collaboration within the International Medical Device Regulators Forum (IMDRF) is a critical step toward aligning expectations across jurisdictions.
Preparing for Compliance Integration
As the discussion concluded, Downing underscored the importance of harmonized internal processes that connect AI innovation with compliance controls:
“Regulators are moving quickly, but companies that build governance into design will always stay ahead.”
– Nathan Downing, Managing Attorney
Panelists agreed that success in AI implementation requires an integrated approach—combining legal, regulatory, and technical expertise to manage risk while enabling innovation. They also stressed the growing importance of documentation, transparency, and data traceability as common pillars across all jurisdictions.
How Gardner Law Can Help
Gardner Law advises medical device, pharmaceutical, and digital health companies on privacy, artificial intelligence, and other data compliance and governance matters, in addition to counseling clients on AI-enabled FDA-regulated products. Our attorneys assist clients in interpreting U.S. and global AI frameworks, implementing governance programs, and ensuring compliance with applicable domestic and international requirements. Reach out to us to get started with a tailored assessment of your AI readiness and a roadmap for meeting emerging regulatory expectations.