Who Gets to Govern AI? The Constitutional Battle Tearing America Apart
April 22, 2026

The Trump administration is waging a legal and financial war against states trying to regulate artificial intelligence. The states are not backing down. The courts will decide who wins. And the answer will shape American democracy for decades.
There is a constitutional confrontation building inside the United States that most of the world has barely noticed, because it looks, on the surface, like a procedural argument about regulatory jurisdiction. It is not. It is a fight over whether democratic governments at any level retain the meaningful ability to govern artificial intelligence, who bears the cost when AI systems cause harm, and whether the companies building the most consequential technology in human history will ever face binding accountability.
On one side: the Trump administration, which has spent the past year systematically dismantling every mechanism through which AI could be regulated, first at the federal level, then by attacking the states that moved to fill the gap. On the other side: state legislatures, governors, attorneys general, consumer advocates, and the parents of teenagers harmed by AI chatbots, who have produced over a thousand bills across all fifty states and are refusing to stop.
The battlefield is the Constitution itself. And the outcome is genuinely uncertain.
How America Got Here
On January 20, 2025, within hours of returning to office, President Trump revoked Biden's Executive Order 14110 on AI safety, calling it unnecessarily burdensome. That order had required frontier AI developers to conduct pre-release safety evaluations and share results with the federal government before deployment. Its replacement, signed three days later, was titled "Removing Barriers to American Leadership in Artificial Intelligence."
The trajectory was announced in those two titles. The administration's AI policy has a single organizing principle: American AI companies must be allowed to develop and deploy technology as fast as possible, primarily to outcompete China. Regulation, any regulation, is reframed as a barrier to national security. Safety requirements become obstacles to innovation. Consumer protections become competitive handicaps.
For most of 2025, the administration focused on clearing federal-level requirements. But states were not waiting. In 2024, 635 AI-related bills were introduced across 45 states, with 99 enacted into law. In 2025, that number jumped to over 1,200 bills across all 50 states, the first year every single state introduced at least one, with 145 enacted. California passed its Transparency in Frontier AI Act, requiring major AI developers to publish safety testing information. Colorado passed the most comprehensive state AI law in the country, banning algorithmic discrimination in high-risk AI systems. Texas passed its Responsible AI Governance Act. Illinois strengthened protections against AI-driven employment discrimination.
The states were doing what states do when the federal government refuses to act: governing.
The Executive Order That Started a War
On December 11, 2025, Trump signed Executive Order 14365, titled "Ensuring a National Policy Framework for Artificial Intelligence." It is one of the most aggressive assertions of executive power over state governance in recent American history.
The order did five things. First, it established an AI Litigation Task Force within the Department of Justice, operational from January 10, 2026, with the sole responsibility of challenging state AI laws in federal court. Its legal theories are the Dormant Commerce Clause, which limits states' ability to pass laws that impede interstate commerce, and federal preemption, which holds that federal law supersedes conflicting state law. Second, it directed the Secretary of Commerce to publish, by March 11, 2026, a comprehensive evaluation identifying state AI laws deemed burdensome and in conflict with federal policy, specifically naming Colorado's AI Act as an example of excessive regulation. Third, it directed the FTC to issue a policy statement classifying certain state-mandated AI bias mitigation requirements as per se deceptive trade practices under federal law, an extraordinary use of consumer protection authority to attack consumer protection laws. Fourth, it instructed the FCC to consider adopting a federal AI disclosure standard that would preempt conflicting state transparency laws. Fifth, and most coercively, it conditioned access to $42 billion in federal broadband infrastructure funding, the BEAD program, on states avoiding what the administration considers onerous AI laws.
The broadband leverage deserves particular attention. The BEAD program was appropriated by Congress to expand high-speed internet access to underserved rural and urban communities across America. Using it as a weapon to compel states to abandon AI consumer protections is a maneuver with profound implications for both tech policy and constitutional law. Several legal experts have already flagged that it may violate the Supreme Court's precedent in NFIB v. Sebelius, which limits the federal government's ability to coercively withhold existing funding to compel policy changes by states.
The order also includes a notable carve-out: it explicitly exempts child safety protections from preemption. This is not altruism. It is a political calculation. Child safety is the one area of AI regulation with genuine bipartisan support, and attempting to preempt it would generate opposition the administration cannot afford. The effect is to signal to state legislators that chatbot safety bills protecting minors are the one lane where they can still legislate without triggering federal challenge. In the first two months of 2026 alone, 78 chatbot-specific safety bills were filed across 27 states, exactly the political logic the carve-out predicted.
The States Fight Back
The states' response has been defiant and multidimensional. It is playing out simultaneously in legislatures, attorneys general's offices, and courtrooms.
A bipartisan coalition of 36 state attorneys general sent a formal letter to Congress opposing federal preemption of state AI laws, arguing that the risks AI poses, including scams, deepfakes, elder abuse, and harms to children, are precisely the category of consumer protection that states have historically been trusted to address. Colorado's attorney general has committed to challenging the executive order in court. Nearly two dozen attorneys general separately wrote to the FCC urging it not to issue preemptive AI regulations as directed by the order.
In Congress, legislators introduced the GUARDRAILS Act in March 2026, which would repeal the executive order's national AI framework entirely and block federal efforts to impose a moratorium on state regulation. Democratic members from California, Virginia, Maryland, and Hawaii sponsored the bill, and while its passage in the current Congress is unlikely, it signals the legislative opposition the administration will face if it tries to codify the preemption through statute.
The administration already tried that route and failed spectacularly. Senator Ted Cruz proposed a 10-year moratorium on state AI law enforcement as an amendment to the One Big Beautiful Bill Act in mid-2025. It was defeated 99 to 1. The Senate, regardless of party, was not prepared to tell every state legislature in America that it could not govern AI for a decade. Congress then declined to include similar language in the FY2026 National Defense Authorization Act. Twice rejected by Congress, the administration turned to executive action.
This sequence is legally important. Federal preemption normally flows from congressional legislation, not executive orders. The administration's own legal counsel knows this. Executive Order 14365 explicitly acknowledges that it must act with Congress. What the order actually does is not legally preempt state AI laws; instead, it creates a framework of litigation threats, funding coercion, and agency action designed to deter states from passing or enforcing AI regulations even without formal preemption. The goal is a chilling effect: make the cost of regulating AI high enough that states back down before the courts rule.
Legal analysts at Ropes and Gray described this in March 2026 as a steep climb for the DOJ's task force. Without congressional legislation, preemption arguments based solely on an executive order face significant constitutional headwinds. The key battlegrounds will be the Tenth Amendment, which reserves powers to the states, the Dormant Commerce Clause, the limits on federal funding conditions established in NFIB v. Sebelius, and the scope of FTC and FCC regulatory authority over AI.
What Is Actually at Stake
The substance of what states are trying to protect matters as much as the constitutional mechanics.
California's Transparency in Frontier AI Act, which took effect January 1, 2026, requires large AI developers to publish safety testing protocols and results for their most capable models. It is, in essence, asking companies to show their work before deploying systems that could affect millions of people's access to jobs, credit, housing, and healthcare. The administration's position, as embedded in the executive order, is that requiring safety disclosures constitutes compelled speech and may violate the First Amendment. This is the same First Amendment argument the tobacco industry used against requirements to publish health warnings. Courts rejected it then. Whether they will reject it now is an open question.
Colorado's AI Act bans algorithmic discrimination in high-risk AI systems: those used to make decisions about employment, housing, credit, education, and healthcare. It requires developers and deployers to conduct impact assessments, disclose AI use to affected consumers, and take reasonable care to prevent discriminatory outcomes. The executive order specifically cites it as an example of a law that forces AI systems to produce false results in order to avoid differential impact on protected groups. That characterization is contested by civil rights lawyers and the Colorado legislature alike, but it telegraphs the DOJ's likely litigation theory when the task force files its anticipated summer 2026 challenges.
Texas's Responsible AI Governance Act takes a different approach, focusing on disclosure and documented risk management rather than outcome mandates. Illinois extended its Human Rights Act to prohibit AI-driven employment discrimination. Utah imposed transparency requirements on AI chatbot interactions. These laws are not ideologically uniform: they come from red states and blue states, Republican and Democratic governors, responding to constituent harms that their residents are actually experiencing.
On March 20, 2026, the White House released its National Policy Framework for AI, a non-binding document setting out legislative recommendations for Congress. It called for a unified national standard that would preempt state AI laws imposing undue burdens. It also declared that training AI models on copyrighted material does not violate copyright laws, a position that will fuel its own wave of litigation. And it urged Congress to prevent the federal government from coercing tech providers into altering content, a provision aimed at content moderation requirements that many states have imposed on social platforms.
The framework is organized around seven pillars: child protection, AI infrastructure, intellectual property, censorship and free speech, innovation, workforce development, and preemption. The ordering is revealing. Child protection comes first because it is politically necessary. Preemption comes last because it is politically toxic, but it is the operative goal the rest of the framework is designed to achieve.
The Money Behind the Fight
The legislative and constitutional battle is being bankrolled by the same industry whose regulation is at stake, a fact that deserves to be stated plainly.
The tech industry reportedly spent over one billion dollars in 2025 on efforts to prevent states from regulating AI. This includes direct lobbying, legal challenges, think-tank advocacy, and the super PAC campaigns described in detail in our previous report on the AI midterm money war. Leading the Future, backed by OpenAI co-founder Greg Brockman and Andreessen Horowitz, raised $125 million with a central focus on defeating candidates who support state-level AI regulation. The argument, deployed consistently across all these channels, is that a patchwork of 50 different regulatory regimes creates unmanageable compliance costs, particularly for startups.
The compliance cost argument is real and should not be dismissed. A small AI startup genuinely cannot maintain 50 separate legal compliance teams. But the solution the industry is pushing, a federal standard so minimal it imposes no meaningful requirements, would address the compliance burden by eliminating the protections, not by harmonizing them upward.
The counter-argument from states, consumer advocates, and Anthropic-backed Public First Action, which donated $20 million to pro-regulation candidates, is that the absence of federal AI safety requirements, combined with the simultaneous effort to prevent state requirements, leaves American consumers without protection precisely at the moment when AI is being deployed most aggressively into the decisions that determine their lives.
What Happens Next
The DOJ's AI Litigation Task Force is expected to begin filing federal legal challenges against state AI laws in summer 2026. Legal experts project those cases will take two to three years to resolve. During that time, companies face the uncertainty of complying with laws that may be struck down or ignoring laws that may survive. The Commerce Department's evaluation of burdensome state laws, due in March 2026, was delayed as of mid-April, adding to the uncertainty.
Colorado's AI Act takes effect on June 30, 2026, and is the most immediate target. California's laws are already in force. Texas's law is operational. Each represents a different legal theory of AI accountability, and each will produce different constitutional challenges when the DOJ task force comes for them.
The deeper trajectory is this. If the administration succeeds in preempting state AI regulation and Congress fails to pass meaningful federal legislation, the United States will have no binding AI accountability framework at any level of government. The EU will have one. China will have one, structured differently and for different purposes, but binding. Every major democracy except the United States will have one.
That is not a regulatory gap. It is a choice. And it is being made right now, in courts, in legislatures, in attorneys general's offices, and in the PAC spending decisions of the companies whose systems are being deployed into American life.
The people who will live with the consequences of that choice are not the companies making it.
Sources: Paul Hastings, Ropes and Gray, Baker Botts, White and Case, The Next Web, AI2Work, Clark Hill, King and Spalding, Sidley Austin, Holland and Knight, March to April 2026.