Event Recap: GC Roundtable
December 17, 2025

The final session of Navigating What’s Next: AI, Compliance, and Regulation in Life Sciences featured a GC roundtable discussion among seasoned legal and compliance leaders in the medical technology sector. In a thoughtful discussion moderated by Mark Gardner, our panelists explored practical approaches to integrating artificial intelligence into compliance programs, lessons learned from early AI deployments, and emerging enforcement risks under U.S. and international fraud and abuse regimes.
Panelists included:
- Mike Pisetsky, Chief Business & Legal Affairs Officer, SI-Bone
- Darci Teobaldi, VP & General Counsel, Recor Medical
- Bernie Shay, Principal, Scimus Lex Law
Practical AI Adoption and Compliance Controls
The conversation opened on AI’s role in strengthening compliance functions without undermining control or defensibility. Panelists emphasized governance and cross-functional engagement as foundational to managing AI risk. Legal and compliance leaders described being involved early in AI pilots, including isolated sandbox environments where teams can test innovation while reducing uncontrolled data exposure. Panelists underscored that AI often increases signals for review and documentation, which can expand compliance workload rather than reduce it.
“AI is moving much faster than most internal control frameworks, which makes early legal and compliance involvement essential if companies want to stay ahead of risk rather than react to it.”
Mark Gardner, Founder & Managing Partner
The panelists also highlighted the importance of integrating compliance metrics into enterprise reporting to ensure visibility and accountability for risk trends across operations. Human oversight and documented escalation pathways were repeatedly described as essential, particularly when outputs from AI tools inform decisions that carry regulatory consequences.
Tooling for High-Volume Workflows
Practical use cases focused on operational areas where automation delivers material gains. Expense reimbursement and HCP engagement reporting were highlighted as areas where software tools can provide scale, consistency, and defensibility. Mike Pisetsky described using AppZen to scan receipts and reimbursement requests, identify policy deviations, and support targeted review. Darci Teobaldi discussed platforms for Open Payments and HCP engagement monitoring, noting that software is often a prerequisite for producing accurate board-level reporting with lean teams.
“We could not have done our Open Payments and HCP engagement reporting with the small team we had without software. It would have taken multiple people dedicated full time, and Excel spreadsheets simply do not get you there. The tools allow us to see what is happening and report clearly to the board and management.”
Darci Teobaldi, VP & General Counsel, Recor Medical
Panelists cautioned that automation should align with established internal policies and oversight protocols. Even robust tools require calibration to each company’s risk tolerance and escalation processes.
Fraud and Abuse Risk with AI Decision Support
A significant portion of the discussion focused on how AI can create risk under the Anti-Kickback Statute (AKS) and False Claims Act (FCA) when tools are deployed without adequate controls. Panelists noted that intent remains central to these laws and that AI outputs aimed at maximizing revenue or steering clinical decisions can create exposure if not constrained by policy and human review.
“The Anti-Kickback Statute and the False Claims Act are intent-based statutes. If an AI system is making recommendations and you cannot explain the intent behind those outputs, you are starting from a very difficult place.”
Bernie Shay, Principal, Scimus Lex Law
In reimbursement support and patient access use cases, panelists stressed that AI should not substitute for provider attestation. Tools that parse medical records or flag missing elements can support efficiency, but drafting medical necessity narratives or submitting information on behalf of clinicians crosses into areas of heightened regulatory risk.
Enforcement Outlook and Expectations
Roundtable participants agreed that government agencies are increasingly using data analytics and AI to identify patterns of concern in public datasets such as Sunshine Act reporting. Leaders predicted a regulatory environment in which companies may be expected to demonstrate not only that they deploy monitoring tools, but also that they validate and can defend their responses to issues those tools surface.
“What we are finding is that these AI tools generate content and signals that still require human review and oversight. They do not replace compliance functions. In many ways, they increase the need to show how issues were identified, evaluated, and addressed.”
Mike Pisetsky, Chief Business & Legal Affairs Officer, SI-Bone
Panelists advised embedding validation practices, governance frameworks, and documentation near the outset of AI adoption to ensure audit readiness and withstand enforcement scrutiny as AI-driven monitoring becomes more common.
How Gardner Law Can Help
Gardner Law counsels medical device, pharmaceutical, and digital health companies across the full spectrum of compliance and regulatory risk, including AI governance, healthcare compliance program design, fraud and abuse risk management, advertising and promotion review, reimbursement and patient access support, and enforcement readiness. Our attorneys assist clients with risk assessments, internal controls, policy development, vendor diligence, and documentation practices that support defensibility as technologies, business models, and regulatory expectations evolve.
Contact us to discuss how we can support your organization.