
The Trump administration on Friday outlined to Congress how it wants lawmakers to regulate AI. It is urging Congress to preempt states from passing their own AI laws, while offering guidance on how a broader federal framework could address state-level concerns without overburdening the industry.
Writing on X, White House “AI czar” David Sacks said the administration is responding to what it sees as a fragmented landscape of state-level rules, warning that a “patchwork” of regulations could slow innovation and undermine U.S. competitiveness in AI. But getting Congress to agree on sweeping AI legislation in an election year is a tall order, particularly as the industry’s massive data center buildout has become a flashpoint for lawmakers on both sides of the aisle.
Fast Company spoke with Mina Narayanan, an AI safety and governance research analyst at Georgetown University’s Center for Security and Emerging Technology, about the details of the White House’s framework and its potential implications.
What are the main legislative goals in the White House’s framework?
This national policy framework covers a number of topics, including child safety: establishing age verification protections for children and giving parents more control over how minors use AI technology. Other pillars of the framework cover strengthening American communities, requiring data center owners and operators to offset energy rate increases from the construction of data centers, and ensuring that agencies within the government have the technical capacity to understand foundation model capabilities and any national security considerations.
There are a number of other provisions around intellectual property rights, preventing unauthorized digital replicas of individuals or artists, and preventing censorship, meaning preventing the U.S. government from coercing technology providers to change the outputs of AI systems to adhere to certain ideologies or partisan agendas. And then there are recommendations around making federal data sets accessible to industry, establishing regulatory sandboxes, and conducting studies of the impacts of AI systems on the American workforce.
What can you tell me about the state preemption the White House wants?
This is an iteration of an idea that has been circulating for quite some time. There were at least two previous attempts to codify a moratorium on state AI laws in different bills, all of which failed. [Ted Cruz tried to insert the preemption into the so-called Big Beautiful Bill and the Senate decisively rejected it.]
The administration, or President Trump, did sign an executive order in December that sought to challenge the constitutionality of many state AI laws and directed federal agencies to determine ways they could withhold federal funding from states with onerous regulatory regimes, among other recommendations. So this is not a new idea or anything particularly surprising.
It seems to me that the child protection aspects of this are popular and politically palatable. But then you have the state preemption provisions in the same document, which are controversial. Are they now trying to package a bitter pill with a less bitter pill?
It’s possible that the administration is trying to make this federal preemption clause more palatable to critics of a moratorium on state laws, or to critics of the federal government interfering with states’ ability to govern this technology. Pairing the preemption content with child safety and other topics that, it’s fair to say, have broad bipartisan support could be a strategy on the part of the administration to codify some federal preemption language while extending an olive branch to critics of preemption.
I will say, since you asked about the preemption pillar specifically: some of the areas called out in this section seem quite broad and sweeping. The framework specifies that states should not be permitted to regulate AI development, that they should not burden Americans’ use of AI for activity that would be lawful if performed without AI, and that they should not be permitted to penalize AI developers for third-party conduct involving their models. It’s unclear to me whether these recommendations would, for instance, prevent states from passing laws requiring developers to publish their safety protocols or evaluation practices, or from passing laws on the use of AI systems in sensitive areas like hiring, employment, and healthcare. It’s possible that those laws could very well be preempted by some of this language.
But I do want to say that this is just a framework. Congress would actually need to pass legislation to codify and implement it. And so it remains to be seen exactly how these still somewhat high-level recommendations would be translated into law.
So I guess it would be possible for a group of Congresspeople to get together and pick out the parts that they think make sense and leave out the parts that they don’t like?
It’s possible. Senator Blackburn released a discussion draft of a bill this week titled the Trump America AI Act. There is some overlap between that discussion draft and the White House framework, and there are some pretty significant recommendations in Blackburn’s draft that aren’t present in the framework, and vice versa. So I think you’re absolutely right. It’s possible that this is a starting point, and it provides a launchpad for people in Congress to start discussing which of these recommendations they’d like to carry forward.
Survey data shows that a large majority of Americans are in favor of regulating things like the use of AI to create bioweapons or cyberweapons. My impression is that many states are not waiting for a federal law on this kind of thing.
Some states have been quite proactive about enacting some of these ideas into law. California is a standout case: it enacted, I believe, 18 AI-related laws in 2024 that are somewhat narrowly scoped, but that’s a clear signal that the state is willing to weigh in on some of these issues and isn’t waiting for Congress to pass a national AI framework.
You mentioned earlier that the Trump administration wanted federal agencies to see if there were things they could do to punish states that are passing AI laws. Have they been successful at that? Have they started applying pressure to states in that way, like withholding funding?
The December 2025 executive order directed the Department of Commerce, or an entity within the Department of Commerce [the National Telecommunications and Information Administration], to withhold BEAD broadband funding from states with onerous regulatory regimes, or at least to explore conditions under which it could implement those restrictions. It also encouraged federal government agencies at large to explore ways of restricting federal funding to states based on their AI laws. But I’m unsure what progress has been made to date there.
If, in some world, this framework were adopted in full by Congress and passed, would you feel better or worse about the state of AI safety in the U.S.?
It’s tough to say, because the framework itself is still somewhat high level. I think the devil is really in the details of which recommendations Congress would choose to codify. I certainly think some of the recommendations regarding child safety are a good idea. Other recommendations, like providing resources to small businesses (grants and tax incentives to support wider deployment of AI tools), are also probably a good idea, because smaller firms are operating on a fairly uneven playing field when it comes to AI. But I do think preempting state AI laws, when many of those laws are filling important gaps that Congress has not yet addressed, may be unwise in the long term.



