In depth
TRAIGA's requirements
What are TRAIGA's stated goals?
- Facilitate and advance the responsible development and use of AI systems.
- Protect individuals and groups from known and reasonably foreseeable risks associated with AI systems.
- Ensure transparency regarding risks in the development, deployment and use of AI systems.
- Provide reasonable notice concerning the use or intended use of AI systems by state agencies.
To whom would TRAIGA apply?
TRAIGA would apply to an individual or business that promotes, advertises or conducts business in Texas; produces a product or service used by Texas residents; or develops or deploys an AI system in Texas. An "AI system" means any machine-based system that, for any explicit or implicit objective, infers, from the inputs it receives, how to generate outputs, including content, decisions, predictions and recommendations, which can influence physical or virtual environments.
Restrictions
TRAIGA prohibits a person from developing or deploying AI systems that, among other things:
- Incite or encourage a person to engage in physical self-harm, harm another person, or engage in criminal activity.
- Infringe an individual's constitutional rights.
- Unlawfully discriminate against a protected class in violation of state or federal law, with certain exceptions for insurance entities.
- Produce certain sexually explicit content, such as visual material or deepfake videos or images depicting minors, or intentionally develop or distribute an AI system that engages in text-based conversations simulating or describing sexual conduct while impersonating or imitating a child younger than 18 years of age.
Disclosures
TRAIGA also imposes disclosure requirements in certain circumstances. For example, if an AI system is used in connection with a health care service or treatment, the provider must disclose that use to the recipient of the service or treatment, or to the recipient's personal representative, before the service or treatment is provided. In an emergency, the provider must make the disclosure as soon as reasonably possible. The disclosure is required regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an AI system. The disclosure must be clear and conspicuous, written in plain language, and may not use a dark pattern; it may be provided via a hyperlink directing the consumer to a separate internet webpage.
Carveout
TRAIGA would provide a limited carveout for innovation and experimentation under its AI regulatory sandbox program. The program would allow an approved applicant, without being licensed or registered in Texas, to test an AI system for up to 36 months. The approved applicant would have to provide a detailed description of the AI system and its intended use; conduct a benefit assessment addressing potential impacts on consumers, privacy and public safety; describe its plan for mitigating any adverse consequences that may occur; and provide proof of compliance with any applicable federal AI laws and regulations. The Texas Department of Information Resources and any applicable agency would review and approve applications.
Defense
TRAIGA would also provide that a defendant is not liable if another person uses the defendant's affiliated AI system in a prohibited manner, or if the defendant discovers a violation through any of the following:
- Feedback from a developer, deployer or other individual who believes a violation has occurred.
- Testing, including adversarial or red-team testing.
- Following guidelines issued by applicable state agencies.
- Substantially complying with the most recent version of the AI Risk Management Framework: Generative AI Profile published by the National Institute of Standards and Technology, or another recognized AI risk management framework.
- An internal review process.
Enforcement
The Texas AG would have exclusive enforcement authority; there is no private right of action. The law also provides a 60-day cure period. If the individual or company fails to cure within that period, the AG could seek civil penalties of USD 10,000 to USD 12,000 per violation for curable violations, USD 80,000 to USD 200,000 per violation for uncurable violations, and USD 2,000 to USD 40,000 for each day a violation continues, as well as injunctive relief.
What's next
Texas has positioned itself as a leader in AI regulation, having passed TRAIGA and having initiated the first healthcare-related generative AI enforcement action in 2024. The Texas AG has also been active in enforcing AI-adjacent laws, including data privacy and biometrics regulations. In light of this, technology companies should proactively assess their AI practices. Recommended steps include the following:
- Establish an AI governance framework. Implement a comprehensive governance and risk management framework, including internal policies, procedures and systems for reviewing AI use, identifying risks and reporting concerns. This is particularly important, as it may provide companies with a valid defense under TRAIGA.
- Conduct vendor and system due diligence. Evaluate AI vendors and systems before engagement or deployment. This includes assessing how they test for, mitigate and remediate algorithmic bias, and ensuring compliance with TRAIGA.
Companies should also assess whether sufficient resources, such as human oversight, user training and budget, are in place to responsibly manage AI systems in compliance with TRAIGA and other applicable state laws.