Using Personal Data to Train AI? Make Sure You Comply with State Requirements

April 25, 2024

Artificial intelligence (AI) continues its rapid evolution and ascent to prominence, and lawmakers in the U.S. are struggling to keep pace, especially at the federal level. While the Federal Trade Commission (FTC) has openly promised to address consumer protection issues in AI and has taken action on perceived deceptive and unfair practices involving personal information (see In the Matter of 1Health.io Inc., where a genetic testing company’s alleged retroactive privacy policy changes, failure to obtain consent, failure to adhere to its own privacy policy, and security failures resulted in FTC action), it is state legislatures that are taking the first steps on AI regulation. Please note: We have already written about the FDA’s developing approach to AI (with more to come), so this article will focus elsewhere.

Laws on AI are already on the books or under consideration in many states, including California, Oklahoma, Florida, Virginia, Vermont, New Jersey, Rhode Island, Connecticut, and Massachusetts. This patchwork presents compliance challenges for FDA-regulated companies. Below we highlight some key takeaways from state legislative activity to date.

There are three main types of AI regulation emerging, with some state legislatures and regulators drawing from all three:

1. Privacy – Use of Personal Data in AI

Numerous state privacy laws protect individuals from misuse of their data, with direct impact on AI implementation, both in algorithm training and in the use of AI systems or tools to process personal information. These laws (e.g., the California Consumer Privacy Act) restrict the use of personal information to train algorithms and in some cases seek to address “automated decision-making” that may impact individuals (e.g., by limiting the personal information fed into these tools or by providing opt-out rights). In many states, consent is needed to use personal information (especially health information or other sensitive data) to train AI algorithms. Data protection impact assessments may also be required in many states when data processing presents a high or heightened risk to a consumer, including where personal information is processed in connection with automated decision-making.

These laws directly impact FDA-regulated companies that intend to use personal information to train AI algorithms or to power AI-driven decision-making. In many cases, they will require companies to obtain consent and perform assessments before training and implementing AI tools, especially when personal information about health is involved.

2. Automated Decision-Making Technology

Automated decision-making technology (i.e., AI-driven tools that make decisions that impact individuals) is the focus of many AI laws and regulations. These tools are often used in health care, as well as in employment, credit reporting, insurance, and many other areas. One example of a rule addressing automated decision-making is this draft regulation proposed by the California Privacy Protection Agency under the California Consumer Privacy Act. The draft rule defines automated decision-making technology as:

"[...] any system, software, or process – including one derived from machine-learning, statistics, or other data-processing or artificial intelligence – that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decision-making. Automated decision-making technology includes profiling."

Many other state privacy laws contain provisions on the use of personal information for automated decision-making, including the Colorado Privacy Act.

FDA-regulated companies should be aware that the use of personal information, whether to train AI algorithms or within automated decision-making tools, may be subject to one or more of these state laws. It is also important to note that even the use of non-personal data (e.g., aggregated or de-identified data) may be subject to laws governing automated decision-making technology, particularly as these laws continue to emerge.

3. “AI Bill of Rights”

A handful of state legislatures have proposed laws that would establish an “AI Bill of Rights” for consumers. These laws focus primarily on addressing the risks associated with the commercial use of generative AI, risks the FTC has also recognized at the federal level, as well as the other concerns discussed above. Oklahoma’s proposed AI Bill of Rights is one such example; among other rights, it would grant:

"The right to know when they are interacting with an artificial intelligence engine rather than a real person in an interaction where consequential information is exchanged; [...]
The right to know that any company which includes any of their personally identifiable information in an artificial intelligence model has implemented reasonable security measures for data privacy within the company's industry and conducts regular risk assessments to assess design, operational, and discrimination harm."

While these laws are concerned with the potential for deception and other harms that come with the use of generative AI, they also address general privacy rights similar to those in the automated decision-making and privacy laws discussed above. FDA-regulated companies should take these laws into consideration when developing tools that use AI, particularly tools designed to interact with humans.

Wait and See

A number of states have taken a more tentative approach, focusing on creating AI working groups rather than legislating directly on the topic. These bills aim to establish commissions, agencies, task forces, or other administrative bodies responsible for determining how best to address the various consumer, employment, privacy, and other risks posed by the commercial use of AI. These states may consider laws already on the books (e.g., those addressing privacy, anti-discrimination, consumer protection, and cybersecurity) potentially sufficient to address AI concerns, or they may be waiting for further developments before acting more decisively. One example is this proposed Massachusetts bill (Bill S. 2539), which would create a “Cybersecurity Control Board” with powers to formulate, propose, adopt, and amend rules and regulations relating to cybersecurity, including artificial intelligence.

Questions?

FDA-regulated companies should proactively evaluate their approach to AI, whether that means assessing the use of personal information for AI training, developing and using automated decision-making tools in a lawful manner, crafting policies on the use of generative AI, assessing services purchased from vendors, or ensuring that the products and services the company is developing are compliant. If you have questions about AI, we invite you to contact us for more information.