
President Trump just signed an executive order attempting to block states from regulating AI, an unprecedented step that would strip states of the ability to protect their residents at a moment of extraordinary technological volatility. This move is overwhelmingly unpopular (polling has found that Americans oppose AI moratoriums by a 3-to-1 margin) and certain to be litigated in the courts. But it is also likely to achieve the exact opposite of its stated goals: deepening mistrust and slowing AI adoption at the very moment America wants to win the global AI race.
We know because we’ve been here before. America has seeded many technological revolutions over the years, from electricity to automation to the internet. And in each of them we see a clear pattern: State-led regulation doesn’t slow growth. It spurs it.
If President Trump sincerely wants America to lead in the AI race, he should look to our nation’s past. Technologies that defined American leadership became safer, more trusted, and more widely adopted because states helped set guardrails—not because Washington preempted them.
Regulation paves the way
When Henry Ford introduced the Model T in 1908, carmakers prioritized speed and sales over safety. Predictably, fatalities soared: over 33 deaths per 10,000 vehicles in 1913, compared to just 1.6 per 10,000 today. But then commonsense regulation met the moment. California launched its DMV in 1915, creating the mechanism for identifying and tracking both cars and drivers; Massachusetts required auto insurance in 1927; and by the mid-1930s, 24 states mandated drivers’ licenses.
These rules did not deter innovation; they made it safer and more sustainable. Innovations like seat belts (1949), airbags (standardized in the late 1980s), and dual taillights (standard in the United States by the 1930s) dramatically reduced fatalities, catalyzing safer, more trusted, and universally used automotive technology.
And in fact, the American auto industry flourished. By 1950, U.S. automakers produced more than three-quarters of all cars in the world, and General Motors remained the world’s largest automaker from 1931 to 2008. Safe, reliable cars didn’t just replace existing modes of transportation; they made new things possible: lower-cost interstate trucking, suburbs, mobile economies, and a manufacturing boom. Clear rules of the road applied to anyone who sold a car in the U.S., whether made at home or in Europe, Asia, or elsewhere.
In short, automakers dominated from Detroit to overseas markets because regulation provided predictability for investors, confidence for consumers, and pressure for safer, smarter innovation.
Now, the frontier is digital
We’ve experienced over 50 years of disruption and advancement in digital technology, yet foundational guardrails remain almost entirely absent. In this vacuum, tech companies have optimized for maximum engagement, not ethics, fueling a youth mental health crisis and dramatically eroding our information ecosystem by prioritizing conflict over truth. Startups wary of reputational and legal risks, along with deep-pocketed incumbents like Meta, are retreating into safer B2B offerings instead of consumer-facing breakthroughs. Investors are navigating uncertainty, betting on products that could be banned or dramatically devalued overnight at the mercy of a ruling from an individual judge who may know little about technology.
As we accelerate into the AI era at warp speed, we are doing so with digital-era guardrails that are outdated, piecemeal, and, in most cases, nonexistent by design.
Where we’re going, we still need roads
Just as automobile regulations guided innovation toward safety and scale, AI needs a parallel set of protections.
Cars have mandatory seat belts and airbags; AI systems should have safety standards and harm-mitigation features. Cars have child car seat tethers and safety locks; AI should include comparable safeguards for vulnerable users. Just as vehicles must undergo crash tests, major AI models should be subject to basic auditing before deployment. And just as cars require insurance to manage and price risk, AI liability should be clarified, distributed, and broadly understood.
Just as critical, state-level leadership should be welcomed and followed. Local experimentation builds the practical frameworks that federal law can later scale, and is as essential now as it was in the 1920s.
And the market itself is already signaling the need for this transparency. As Anthropic president Daniela Amodei recently put it, “No one says, ‘We want a less safe product.’” She likened the company’s disclosure of model failures to an automaker releasing footage of a crash-test dummy flying through a windshield. The visual is jarring, but when the result is better airbags and stronger frames, consumers trust the car more, not less. That dynamic builds markets and confidence, and it makes innovation self-reinforcing.
The choice is not between growth and guardrails. It’s whether America will lead on AI and govern with the predictability and clarity that fuel investment, trust, and adoption, or whether we will gamble on laissez-faire promises that history tells us never deliver.
If our goal is truly pro-growth AI, then state-led, commonsense regulation is not a roadblock. It’s the on-ramp.