United States: Lone Star State sends Responsible Artificial Intelligence Governance Act to governor for signature

In brief

On June 2, 2025, the Texas Legislature passed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA); it is now awaiting the governor's signature. If enacted as written, TRAIGA would regulate the use of artificial intelligence (AI) systems by all individuals and businesses promoting, advertising, or conducting business in Texas, as well as by those developing or deploying AI systems in the state. Additionally, TRAIGA would establish restrictions for developers, deployers and distributors of AI, and civil penalties for infractions. The Texas Attorney General would have sole enforcement authority under TRAIGA, but state agencies would be permitted to impose sanctions under specific circumstances. TRAIGA is expected to be signed by the governor and take effect on January 1, 2026.


In depth

TRAIGA's requirements

What are TRAIGA's stated goals?

  1. Facilitate and advance the responsible development and use of AI systems.
  2. Protect individuals and groups from known and reasonably foreseeable risks associated with AI systems.
  3. Ensure transparency regarding risks in the development, deployment and use of AI systems.
  4. Provide reasonable notice concerning the use or intended use of AI systems by state agencies.

To whom would TRAIGA apply?

TRAIGA would apply to an individual or business that promotes, advertises or conducts business in Texas; produces a product or service used by Texas residents; or develops or deploys an AI system in Texas. An "AI system" means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions and recommendations, which can influence physical or virtual environments.

Restrictions

TRAIGA would prohibit a person from developing or deploying AI systems that, among other things:

  1. Incite or encourage a person to engage in physical self-harm, harm another person, or engage in criminal activity.
  2. Infringe an individual's constitutional rights.
  3. Unlawfully discriminate against a protected class in violation of state or federal law, with certain exceptions for insurance entities.
  4. Produce certain sexually explicit content, such as visual materials and deepfake videos or images, including those depicting minors, or intentionally develop or distribute an AI system that engages in text-based conversations simulating or describing sexual conduct while impersonating or imitating a child younger than 18 years of age.

Disclosures

TRAIGA also imposes disclosure requirements in certain circumstances. For example, if an AI system is used in connection with a health care service or treatment, the provider would need to disclose that use to the recipient of the service or treatment, or the recipient's personal representative, before the service or treatment is provided. In an emergency, the provider must make the disclosure as soon as reasonably possible. A person would be required to make the disclosure regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an AI system. The disclosure must be clear and conspicuous, written in plain language, and may not use a dark pattern. A disclosure may be provided via a hyperlink directing the consumer to a separate webpage.

Carveout

TRAIGA would provide a limited carveout for innovation and experimentation under its AI regulatory sandbox program. The regulatory sandbox program would allow an approved applicant, without being licensed or registered in Texas, to test an AI system for up to 36 months. The approved applicant would have to provide a detailed description and intended use of the AI system; conduct a benefit assessment that addresses potential impacts on consumers, privacy, and public safety; describe their plan for mitigating any adverse consequences that may occur; and provide proof of compliance with any applicable federal AI laws and regulations. The Texas Department of Information Resources and any applicable agency would review and approve applications.

Defense

TRAIGA would also provide that a defendant is not liable if another person uses the defendant's affiliated AI system in a prohibited manner, or if the defendant discovers a violation through any of the following:

  1. Feedback from a developer, deployer, or other individual who believes a violation has occurred.
  2. Testing, including adversarial or red-team testing.
  3. Following guidelines issued by applicable state agencies.
  4. Substantial compliance with the most recent version of the AI Risk Management Framework: Generative AI Profile published by the National Institute of Standards and Technology, or another recognized AI risk management framework.
  5. An internal review process.

Enforcement

The Texas AG would have exclusive enforcement authority, as TRAIGA provides no private right of action. The law also allows a 60-day cure period. If the individual or company fails to cure within that period, the AG could seek an injunction and pursue civil penalties of: USD 10,000 to USD 12,000 per violation for curable violations; USD 80,000 to USD 200,000 per violation for incurable violations; and USD 2,000 to USD 40,000 for each day a violation continues.

What's next

Texas has positioned itself as a leader in AI regulation, having passed TRAIGA and having initiated the first health care-related generative AI enforcement action in 2024. The Texas AG has also been very active in enforcing AI-adjacent laws, including data privacy and biometrics regulations. In light of this, technology companies should proactively assess their AI practices. Recommended steps include the following:

  1. Establish an AI governance framework. Implement a comprehensive governance and risk management framework, including internal policies, procedures, and systems for reviewing AI use, identifying risks and reporting concerns. This is particularly important, as it may provide companies with a valid defense under TRAIGA.
  2. Conduct vendor and system due diligence. Evaluate AI vendors and systems before engagement or deployment. This includes assessing how they test for, mitigate and remediate algorithmic bias, and ensuring compliance with TRAIGA.

Companies should also assess whether sufficient resources, such as human oversight, user training and budget, are in place to responsibly manage AI systems in compliance with TRAIGA and other applicable state laws.


Copyright © 2025 Baker & McKenzie. All rights reserved. Ownership: This documentation and content (Content) is a proprietary resource owned exclusively by Baker McKenzie (meaning Baker & McKenzie International and its member firms). The Content is protected under international copyright conventions. Use of this Content does not of itself create a contractual relationship, nor any attorney/client relationship, between Baker McKenzie and any person. Non-reliance and exclusion: All Content is for informational purposes only and may not reflect the most current legal and regulatory developments. All summaries of the laws, regulations and practice are subject to change. The Content is not offered as legal or professional advice for any specific matter. It is not intended to be a substitute for reference to (and compliance with) the detailed provisions of applicable laws, rules, regulations or forms. Legal advice should always be sought before taking any action or refraining from taking any action based on any Content. Baker McKenzie and the editors and the contributing authors do not guarantee the accuracy of the Content and expressly disclaim any and all liability to any person in respect of the consequences of anything done or permitted to be done or omitted to be done wholly or partly in reliance upon the whole or any part of the Content. The Content may contain links to external websites and external websites may link to the Content. Baker McKenzie is not responsible for the content or operation of any such external sites and disclaims all liability, howsoever occurring, in respect of the content or operation of any such external websites. Attorney Advertising: This Content may qualify as “Attorney Advertising” requiring notice in some jurisdictions. To the extent that this Content may qualify as Attorney Advertising, PRIOR RESULTS DO NOT GUARANTEE A SIMILAR OUTCOME. 
Reproduction: Reproduction of reasonable portions of the Content is permitted provided that (i) such reproductions are made available free of charge and for non-commercial purposes, (ii) such reproductions are properly attributed to Baker McKenzie, (iii) the portion of the Content being reproduced is not altered or made available in a manner that modifies the Content or presents the Content being reproduced in a false light and (iv) notice is made to the disclaimers included on the Content. The permission to re-copy does not allow for incorporation of any substantial portion of the Content in any work or publication, whether in hard copy, electronic or any other form or for commercial purposes.