US president Donald Trump has directed his administration to begin work on establishing a national artificial intelligence (AI) regulatory framework, in an attempt to override what he described as cumbersome state-level legal regimes.
The latest executive order (EO) to emerge from the Oval Office, Ensuring a national policy framework for artificial intelligence, builds on a January 2025 order, Removing barriers to American leadership in artificial intelligence, in which Trump lambasted his predecessor, Joe Biden, for allegedly trying to paralyse the industry with regulation.
Trump claimed that since then, his administration has delivered “tremendous benefits” that have led to trillions of dollars of investment in AI projects across the US.
In his follow-on EO, Trump said that in order to win, US artificial intelligence companies must be allowed to innovate freely, but were being thwarted by “excessive” state-level regulation. This, he said, creates a patchwork of 50 different regulatory regimes, making compliance far more challenging, especially for startups.
Trump also accused some states of enacting laws that require entities to embed “ideological bias” in AI models, citing a Colorado law that bans algorithmic discrimination. He claimed this may force AI models to produce false results to avoid “a differential treatment or impact on protected groups”.
“My administration must act with … Congress to ensure that there is a minimally burdensome national standard – not 50 discordant state ones,” wrote Trump.
“The resulting framework must forbid State laws that conflict with the policy set forth in this order. That framework should also ensure that children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded. A carefully crafted national framework can ensure that the United States wins the AI race, as we must.”
Task force
On the basis that it is US policy to “sustain and enhance” its global AI dominance through a “minimally burdensome national policy framework”, the order directs US attorney general Pam Bondi to establish an AI Litigation Task Force within the next month to challenge state AI laws the administration deems inconsistent with the EO – for example, those that “unconstitutionally regulate interstate commerce”, or those that Bondi otherwise determines to be unlawful.
The EO further mandates that, within 90 days, secretary of commerce Howard Lutnick will, in consultation with other administration officials, publish an evaluation of existing state AI laws that identifies any conflicting with the wider policy and any that may be referred to the Task Force.
At a minimum, this evaluation is designed to identify any laws that require AI models to alter truthful outputs, or that compel developers or deployers to handle information in an unconstitutional fashion – particularly with regard to the First Amendment’s protection of freedom of speech.
The EO makes various other provisions. It restricts certain federal funding – notably related to broadband roll-out – for states with restrictive AI laws, and directs agencies such as the Federal Communications Commission (FCC) and Federal Trade Commission (FTC) to consider national reporting and disclosure standards that could preempt conflicting state laws in areas such as truthful outputs. It also proposes legislation to create a unified federal AI policy that preempts conflicting state laws, albeit with some exemptions in areas such as child safety, AI compute and datacentre infrastructure, and state procurement and use of AI.
Kevin Kirkwood, chief information security officer at cyber security company Exabeam, said that regardless of Trump’s chosen delivery mechanism, the core idea behind establishing a federal framework to preempt state laws was not necessarily without merit.
“You can’t strong-arm a distributed ecosystem into aligning with a single vision just because you wrote it into an executive order, but let’s not confuse tactics with principle,” he said. “The underlying point is sound: AI regulation should be national in scope, not stitched together from state capitols that don’t even agree on what constitutes an algorithm.
“Artificial intelligence … is a national, and global, infrastructure layer. Allowing 50 states to create inconsistent, siloed laws around how AI can be developed, deployed or audited creates friction, uncertainty and massive compliance overhead. Whether it comes from Congress or an executive order, a unified federal framework is essential for ensuring the US remains competitive, cohesive and capable of setting global norms.”
Acknowledging the argument that federal preemption undermines local control, Kirkwood said that when it came to AI, local control would lead to fragmented standards benefiting nobody “except maybe lawyers”.
“California may want aggressive AI safety regulations, but if New York and Florida disagree, developers are left navigating a maze of contradictory rules,” he said. “That kind of regulatory patchwork doesn’t protect people; it paralyses innovation. It’s not hard to imagine a future where startups build for the least regulated state and geo-fence everyone else. That’s a race to the bottom disguised as consumer protection.”
Missing the point?
But Ryan McCurdy, marketing vice-president at database change governance platform Liquibase, said the EO missed the point, even though he conceded federal alignment on AI was a good idea.
“A single rulebook means nothing unless it addresses the baseline problem behind every AI failure: a lack of governance over the data structures that feed these models,” he said. “Model-level rules won’t protect the public if the underlying data is inconsistent, drifting or untraceable.
“So, the real question is whether the national standard will demand evidence,” said McCurdy. “Evidence of how models are trained, evidence of how data evolves, evidence of how organisations prevent unapproved or risky changes. That’s the difference between actual oversight and a press release.
“If the US wants to lead in AI, it needs more than a unified rulebook,” he said. “It needs a standard that forces AI systems to be explainable, governable and accountable from the ground up.”
