The Trump administration on Friday laid out a legislative framework for a single national AI policy in the United States. The framework would centralize power in Washington by preempting state AI laws, potentially undercutting the recent surge of efforts from states to regulate the use and development of the technology.
“This framework can only succeed if it is applied uniformly across the United States,” reads a White House statement on the framework. “A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race.”
The framework outlines seven key objectives that prioritize innovation and scaling AI, and proposes a centralized federal approach that would override stricter state-level regulations. It places significant responsibility on parents for issues like child safety, and lays out relatively soft, nonbinding expectations for platform accountability.
For example, it says Congress should require AI companies to implement features that “reduce the risks of sexual exploitation and harm to minors,” but does not lay out any clear, enforceable requirements.
Trump’s framework comes three months after he signed an executive order directing federal agencies to challenge state AI laws. The order gave the Commerce Department 90 days to compile a list of “onerous” state AI laws, potentially risking states’ eligibility for federal funds like broadband grants. The agency has yet to publish that list.
The order also directed the administration to work with Congress on a uniform AI law. That vision is coming into focus, and it mirrors Trump’s earlier AI strategy, which focused less on guardrails and more on promoting companies’ growth.
The new framework proposes a “minimally burdensome national standard,” echoing the administration’s broader push to “remove outdated or unnecessary barriers to innovation” and accelerate AI adoption across industries. This is a pro-growth, light-touch regulatory approach championed by “accelerationists,” one of whom is White House AI czar and venture capitalist David Sacks.
While the framework nods to federalism, the carve-outs for states are relatively narrow, preserving only their authority over general laws like fraud and child protection, zoning, and state use of AI. It draws a hard line against states regulating AI development itself, which it says is an “inherently interstate” issue tied to national security and foreign policy.
The framework also seeks to prevent states from “penaliz[ing] AI developers for a third party’s unlawful conduct involving their models” — a key liability shield for developers.
Missing from the framework is any gesture toward liability rules, independent oversight, or enforcement mechanisms for potential novel harms caused by AI. In effect, it would centralize AI policymaking in Washington while narrowing the space for states to act as early regulators of emerging risks.
Critics say states are the sandboxes of democracy and have been quicker to pass laws around emerging risks. Notably, New York’s RAISE Act and California’s SB-53 seek to ensure large AI companies have and adhere to safety protocols that are publicly documented.
“White House AI czar David Sacks continues to do the bidding of Big Tech at the expense of regular, hardworking Americans,” said Brendan Steinhauser, CEO of The Alliance for Secure AI. “This federal AI framework seeks to prevent states from legislating on AI and provides no path to accountability for AI developers for the harms caused by their products.”
Many in the AI industry are celebrating this direction because it gives them broader liberties to “innovate” without the threat of regulation.
“This framework is exactly what startups have been asking for: a clear national standard so they can build fast and scale,” Teresa Carlson, president of General Catalyst Institute, told TechCrunch. “Founders shouldn’t have to navigate a patchwork of conflicting state AI laws that impede innovation.”
Child safety, copyright, and free speech
The framework was issued at a moment when child safety has emerged as a central flashpoint in the debate over AI. Certain states have moved aggressively to pass laws aimed at protecting minors and placing more responsibility on tech companies. The administration’s proposal points in a different direction, placing greater emphasis on parental control than platform accountability.
“Parents are best equipped to manage their children’s digital environment and upbringing,” the framework reads. “The Administration is calling on Congress to give parents tools to effectively do that, such as account controls to protect their children’s privacy and manage their device use.”
The framework also says the administration “believes” that AI platforms should “implement features to reduce potential sexual exploitation of children and encouragement of self-harm.” While it calls on Congress to require such safeguards and affirms that existing laws, including those banning child sexual abuse materials, should apply to AI systems, the proposal employs qualifiers like “commercially reasonable” and stops short of laying out clear requirements.
On the topic of copyright, the framework attempts to find a middle ground between protecting creators and allowing AI systems to be trained on existing works, citing the need for “fair use.” That kind of language mirrors arguments AI companies have made as they face a growing number of copyright lawsuits over their training data.
The main guardrails Trump’s AI framework seems to outline involve ensuring “AI can pursue truth and accuracy without limitation.” Specifically, it focuses on preventing government-driven censorship rather than platform moderation itself.
“Congress should prevent the United States government from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas,” the framework reads. It also instructs Congress to provide a way for Americans to pursue legal redress against government agencies that seek to censor expression on AI platforms or dictate information provided by an AI platform.
The framework comes as Anthropic is suing the government for allegedly infringing on its First Amendment rights after the Department of Defense (DOD) labeled it a supply-chain risk. Anthropic argues that the DOD is designating it as such in retaliation for not allowing the military to use its AI products for mass surveillance of Americans or for making targeting and firing decisions in autonomous lethal weapons. Trump has referred to Anthropic and its CEO Dario Amodei as “woke” and a “radical leftist.”
The framework’s language, which emphasizes protecting “lawful political expression or dissent,” seems to build on Trump’s earlier executive order targeting “woke AI,” which pushed federal agencies to adopt systems deemed ideologically neutral.
It’s unclear what qualifies as censorship versus standard content moderation, so such language could make it difficult for regulators to coordinate with platforms on issues like misinformation, election interference, or public safety risks.
Samir Jain, vice president of policy at the Center for Democracy and Technology, pointed out: “[The framework] rightly says that the government should not coerce AI companies to ban or alter content based on ‘partisan or ideological agendas,’ yet the Administration’s ‘woke AI’ Executive Order this summer does exactly that.”