Texas Passes Most Comprehensive AI Governance Bill
By Cam Sivesind
Wed | Jul 2, 2025 | 5:39 AM PDT

Texas is making waves in AI governance.

Governor Greg Abbott recently signed House Bill 149, formally titled the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), on June 22, 2025.

The new law, effective January 1, 2026, establishes clear guardrails around AI development and deployment—regulating who it applies to, what it prohibits, and how oversight will be handled.

Who's covered by the provisions of HB 149? Everyone. TRAIGA applies to anyone who deploys or develops AI systems in Texas; produces AI-powered products/services used by Texans; or markets or advertises AI systems in Texas.

"Artificial intelligence system" is defined broadly to include machine learning, natural language processing, computer vision, and content generation. "Biometric identifiers" such as retina scans or voiceprints are expressly covered.

The law clearly states its intentions: "…to facilitate and advance the responsible development and use of artificial intelligence systems; to protect individuals and groups… from known and reasonably foreseeable risks… to provide transparency… and notice of use by state agencies."

In a LinkedIn post days before the governor signed the bill, Violet Sullivan, AVP, Cyber Solutions Team Leader, at Crum & Forster, broke down what the bill does.

Prohibits:

  • Government use of AI for social scoring

  • Political viewpoint discrimination by AI systems

  • Biometric surveillance using scraped images without consent

  • AI tools designed to incite self-harm or criminal activity

  • AI-generated child exploitation or explicit deepfakes

Requires:

  • Clear disclosures when consumers interact with AI

  • Transparency from developers/deployers

  • A right to appeal certain AI-driven decisions (only if it significantly impacts consumer's health, safety, welfare, or basic rights)

Establishes:

  • A statewide regulatory sandbox for AI innovation

  • The Texas AI Council to monitor use and recommend reforms

  • Enforcement authority (Texas Attorney General) and civil penalties up to $200,000

Limitations:

  • Free speech conflict: Limits on "political viewpoint discrimination" could clash with platform rights and federal law.

  • Weak bias standard: Requires INTENT, not just outcome, to prove AI discrimination.

  • Sandbox risks: Looser rules may allow risky AI testing with little oversight.

  • No local control: Blocks cities (Austin, Houston) from setting stronger rules.

  • Only the AG can enforce.

  • Gov-focused bans: Some protections don't apply to private-sector AI. (Critics may argue this misses major commercial threats from AI used in surveillance, retail profiling, or hiring tools.)

Sullivan concludes, "And the big question is... will Texas's bold new AI law go into effect as planned (Jan. 1, 2026)—or get frozen by federal preemption before it starts?"

Texas will allow controlled testing of AI systems through a regulatory sandbox program, granting temporary exemptions for up to 36 months.

The law establishes a 10-member, governor-appointed Artificial Intelligence Council to advise on ethics, public safety, equity, innovation, and future regulations.

Local governments can't draft conflicting AI rules, and private-sector regulation responsibilities were significantly scaled back from the original bill.

"While TRAIGA was recently signed into law in Texas, we have all been watching what would happen at the federal level to see what the practical effect of the law would be in light of the proposed federal ban on state-level AI regulation," said Shawn Tuma, Co-Chair, Data Privacy & Cybersecurity Practice, Spencer Fane LLP.

"As of this morning, the U.S. Senate has voted to remove that provision from pending legislation, meaning that (at least for now) there are no further concerns as to whether the law will go into effect and be enforceable, which is a good thing. In the absence of a comprehensive federal law, which I doubt we will ever see, I believe TRAIGA represents a positive step in AI regulation by balancing innovation with accountability in a way that enables it to be adaptable to rapidly evolving AI technologies."

Tuma added, "By establishing clear guardrails, such as transparency requirements and prohibitions on AI practices that intentionally cause certain harms, where they are clearly warranted, yet not over-regulating in areas that need more development and maturation to better flesh out the issues, Texas is setting a strong example for reasonable AI regulation."

So what does this all mean for cybersecurity and AI practitioners? Here are some key considerations.

  • Compliance Obligations: AI systems that include biometrics, content generation, or decision-making for Texans must now comply with disclosure, transparency, and anti-discrimination rules.

  • Ethical & Security Standards: Tools must avoid manipulative outcomes and unfair bias—necessitating rigorous testing, documentation, and adversarial resilience.

  • Governance Roots: Design documentation, model governance, and access controls will grow increasingly critical under sandbox and council supervision.

  • Privacy Focus: Use of biometric data, especially collected without explicit consent, falls under heightened scrutiny.

  • Federal Conflict Risk: If a federal AI moratorium passes (e.g., via Senator Ted Cruz's proposal restricting state AI regulation for 10 years), TRAIGA and similar state laws could be challenged.

"Texas's new AI law is a standout among state regulations because it doesn't just impose restrictions—it also pioneers a first-in-the-nation regulatory sandbox and AI Council to keep innovation flowing within a responsible framework," said Ankit Gupta, Senior Security Engineer, Exeter Finance LLC. "It shows you can put up guardrails for high-risk AI uses without slamming the brakes on innovation. That balance of oversight and autonomy is critical, and Texas is setting an example for how to achieve it at the state level."

Key timeline and next steps:

  • Effective January 1, 2026

  • Organizations should begin:

    • Assessing their AI systems for prohibited uses or bias potential

    • Implementing transparency and human review processes

    • Preparing for possible participation in the Texas sandbox program

    • Watching for Council guidance, which could trigger new technical standards

"For cybersecurity teams, TRAIGA is both an opportunity and a mandate. Enforcement might face headwinds from potential federal preemption, but that uncertainty isn't a green light to ignore compliance," Gupta said. "If anything, it's a cue to double down on AI governance: inventory your AI use cases, align with NIST's AI Risk Management Framework, and be ready to show regulators you've done your due diligence."

States with enacted or pending AI governance laws

All 50 states, Puerto Rico, U.S. Virgin Islands, and the District of Columbia have introduced AI-related bills; 28 states plus territories have adopted or enacted AI measures this year—more than 75 total measures.

The National Conference of State Legislatures (NCSL) reports at least 45 states introduced AI bills in 2024, with 31 enacting laws or resolutions. Here are some of the leading states.

  1. California

    • Enacted generative-AI transparency laws (AB‑2013, SB‑942), effective Jan. 1, 2026 

    • Advanced proposals on frontier AI (SB 1047, vetoed) 

  2. Colorado

    • Colorado AI Act establishes developer obligations and risk disclosures, taking effect Feb. 1, 2026 

  3. Utah

    • Utah AI Policy Act, focusing on consumer disclosure and AI safety, effective May 1, 2024

  4. Tennessee

    • ELVIS Act (voice/audio deepfake protections), passed March 21, 2024, effective July 1, 2024

  5. Texas

    • TRAIGA (HB 149), effective Jan. 1, 2026

  6. Kentucky

    • SB 4 mandates AI policy standards by the Commonwealth Office of Technology

  7. Maryland

    • HB 956 establishes a working group on AI use and guidance

  8. Montana

    • HB 178 limits government AI use and mandates transparency with qualified personnel review, signed May 5, 2025

  9. Oregon

    • SB 1571 requires AI campaign communications disclosure and addresses deepfakes

Dr. Kimberly KJ Haywood is the Principal CEO at Nomad Cyber Concepts and Adjunct Cybersecurity Professor at Collin College. She offered this commentary on the HB 149 news.

"First, I must applaud Governor Abbott and the Legislators of Texas who took a bold leap toward the passage of the Responsible Artificial Intelligence Governance Act (TRAIGA), signaling a strong commitment to both innovation and accountability.

"With the signing of TRAIGA, Texas has taken a commendable step toward responsible and proactive AI oversight, which is perfectly timed, as the entire state emerges as the new Silicon Valley of this era. From San Antonio, with its Technology Port Center expansion in Aerospace, Cybersecurity, AI, and eSports; to Houston's wide range of technologies in Film & Arts, Life Sciences, and Space; to my own town, Dallas–Fort Worth, which is exploding in AI development, Robotics, Healthcare Technology, Cybersecurity, and more. TRAIGA demonstrates the kind of state-level leadership that is urgently needed.

"As I've stated in previous commentaries over the past couple of years, our adversaries aren't waiting to exploit emerging technologies; they're actively developing AI tools with malicious intent. Organizations can't afford to delay action while federal discussions remain stalled. TRAIGA's provisions—ranging from the prohibition of harmful and discriminatory AI uses to the creation of a regulatory sandbox—represent a balanced approach that promotes innovation without compromising public safety. However, as institutions shape evolving frameworks, they must do their due diligence. It's easy to focus regulatory scrutiny on large, visible AI models, but we risk overlooking smaller, decentralized developments that may pose even greater risks. As I've previously stated, it's like constructing a high-rise building with attention to height while ignoring the strength of the foundation; both are essential for long-term stability."

So, who does this impact first?

"Our MSPs and MSSPs, who are currently developing or enhancing their solutions to support clients' needs and pain points, especially those with existing or pending contracts with large-scale organizations. Why? Because MSPs/MSSPs have long been considered a risk, though acceptable. As third-party providers, unfortunately, compliance and risk audits always seem to befall these organizations. What can they do? First, acknowledge that to maintain their client/customer profiles, they must get ahead of the coming wave of AI governance regulations. Sounds like a 'rinse and repeat' of the Cybersecurity Compliance era? Well, it is!

"TRAIGA's inclusion of the Texas Artificial Intelligence Council is a wise move, offering a dedicated body to guide future policymaking and industry collaboration. I encourage other states to examine this model and take similar steps toward responsible AI governance."
