Texas AI Law Ups The Ante for Medical AI Use
March 26, 2026

As we have documented recently in our alert on new state privacy laws taking effect, new state AI laws that directly impact device and drug manufacturers have also begun to take effect. As of January 1, 2026, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) governs how companies create and use artificial intelligence (AI) systems in Texas. Separately, Texas Health & Safety Code § 183.005 also applies to healthcare providers using these tools by requiring licensed practitioners to review any AI‑generated diagnostic output and maintain final clinical authority. Together, TRAIGA and similar state AI laws add another set of compliance considerations on top of the existing regulatory framework for AI-enabled medical devices.
Who TRAIGA Regulates

TRAIGA applies to developers and deployers of AI systems in Texas. Medical device companies may fit one or both categories when they build AI‑enabled tools and integrate them into clinical or operational uses.
- Developers create, design, train, or program AI systems that enter the Texas market, including AI‑driven medical technologies for providers, labs, or consumers.
- Deployers put AI into use, such as offering, selling, or using it for covered decisions. Drug and device makers may act as deployers when using AI for clinical trials, patient‑facing tools, or R&D decision‑support.
AI Can Assist, But Texas Clinicians Must Decide
While TRAIGA sets obligations for companies, Texas places additional requirements on the clinicians who use these tools. Texas Health & Safety Code § 183.005 requires clinicians to review all AI‑generated diagnostic records under Texas Medical Board standards, ensuring AI remains a decision‑assistance tool rather than a substitute for human judgment.
Consistent with these requirements, AI may flag abnormalities, surface possible diagnoses, or recommend follow‑up, but the final diagnostic or treatment decision stays with a human clinician. The law is designed to permit use of AI to enhance clinical practice but cements the role of health care professionals in clinical decision-making.
FDA Compliance Alone Doesn’t Satisfy Texas Law
While the nuances of how these rules will overlap remain murky, compliance with FDA regulations and guidance alone will not satisfy TRAIGA. Developers and deployers of AI systems in Texas must still meet TRAIGA requirements and applicable Texas consumer‑protection standards, in addition to any applicable FDA requirements.
Wide Net for AI
TRAIGA uses a broad definition of an AI system, covering technologies that use data‑trained machine‑learning models to perform tasks associated with human intelligence, such as computer vision, language processing, or content generation. For drug and device makers, this means many software‑based diagnostic, decision‑support, or predictive tools using machine learning could qualify as an AI system and trigger TRAIGA obligations.
Compliance Activities under TRAIGA
The Texas Attorney General (AG) can request any documents or data needed to investigate potential TRAIGA violations, including but not limited to information relevant to whether a developer or deployer engaged in a prohibited use. This may include system design, training, or output records.
- Prohibited Uses: Subchapter B of TRAIGA bars AI systems designed for harmful or improper uses, including tools meant to manipulate users, violate rights, or generate unlawful content. These provisions focus on preventing high‑risk misuse rather than restricting legitimate clinical‑support applications.
- Compliance Benefits: Companies meeting Subchapter B duties receive some legal protections, such as a rebuttable presumption of reasonable care (H.B. 149 § 551.106(f)). Regulators can still challenge this presumption, but strong documentation provides a meaningful benefit.
Enforcement under TRAIGA and HSC § 183.005
Individuals cannot bring private lawsuits under TRAIGA or HSC § 183.005. These laws vest enforcement authority in the Texas Attorney General (AG).
- Notice and Cure under TRAIGA: If the AG finds a violation, the company has 60 days to fix it before facing enforcement.
- TRAIGA Penalties: $10,000 to $200,000 per violation, in addition to other enforcement actions. Depending on the scope of deployment, multiple violations may accrue, greatly increasing aggregate liability.
- HSC § 183.005 Penalties: $5,000 to $250,000 per violation. Failure to maintain required human review of AI‑generated diagnostic records may also expose practitioners to disciplinary action by their licensing board.
“Companies developing or deploying AI systems in Texas should take note of requirements that aren’t addressed by their FDA regulatory submissions and quality systems.”
Josh Arkulary, Associate Attorney
How Gardner Law Can Help
If you have questions about AI-focused laws or other privacy matters, or need experienced counsel to help design, enhance, or implement privacy or AI governance programs, contact Gardner Law. Our attorneys have deep experience advising drug and device manufacturers of all sizes on both commercial and pre-commercial privacy, AI, and cybersecurity matters.