Top Biden adviser on AI and authoritarianism

Camps are finally emerging in the big fight over whether and how to regulate AI. President Donald Trump declared earlier this week that he would block local officials who try to regulate the technology; according to a draft executive order leaked on Wednesday, the administration would punish states that try. State lawmakers and members of Congress—including Georgia Republican Rep. Marjorie Taylor Greene—are now pushing back.

This has been a long time coming. Members of Congress have put out myriad proposals for regulating artificial intelligence, but no significant legislative package has come through. The Biden administration issued a major executive order on the technology, but the Trump administration has spent significant capital attacking it, ultimately rescinding much of the measure. 

“The federal government has not taken even minimal actions, despite quite broad bipartisan support, for example, about managing the risks and harms to kids. If there’s one thing we can all agree on, that’s it,” Arati Prabhakar, former director of the Defense Advanced Research Projects Agency (DARPA) during the Obama administration and head of the Office of Science and Technology Policy during the Biden administration, tells Fast Company. “To say that the states shouldn’t do anything because the federal government should do it—and yet to oppose every action at the federal level—just makes no sense whatsoever.”

Fast Company senior writer Rebecca Heilweil spoke with Prabhakar—who has also filed a major brief defending Congress’s ability to support science research amid federal funding squeezes—about where we stand with AI regulation today, and what the technology’s continuing rise could mean for the future of American democracy, governance, and well-being. This interview has been edited for clarity and length.

The administration has made clear that it doesn’t think there should be state-level AI regulation, and that regulation should instead be left to the federal government. That’s obviously in the interest of some AI companies. What do you think about that?

States have been very active. Every state has considered bills, often multiple ones. Yet, when you look in aggregate, most of what’s been enacted are transparency measures. That’s a start, but it’s a pretty small start.

I think we’re very far from wrangling this technology and putting it on the right course. Pretending that the federal government is going to achieve that without the states is ludicrous. 

The Trump administration rescinded the big Biden executive order on AI. What’s been the impact of that? (Editor’s note: The Biden executive order on AI, which was signed in October 2023, gave federal agencies a range of new responsibilities related to the tech, as well as guidance on how to use it.)

The actions that this administration has taken on many fronts are deeply concerning. They’ve put the country into a national crisis. The AI front is one where it hasn’t been as dramatic. The rescission is positioned as this big, dramatic shift, but a lot of the implementation of the executive order under President Biden had already happened. I’ve even seen cases where they’re taking credit for things that departments and agencies were doing better because of their good use of AI.

The bigger issue is that this administration is not stepping up to the two things we need to be doing as a country to get AI fully on the right track. The market is doing all the experimentation to figure out where the business productivity applications are, but there are two public roles that aren’t really being addressed right now. One is managing risks and harms, and the other is actively pursuing AI for public purposes.

That’s where we are falling short. In a time when the most powerful technology of our time is just surging, this government is not stepping up.

How concerned are you about people developing intensely psychological, even romantic or sexual, relationships with chatbots?

To me, it’s part of this distortion of reality that started in the social media era—which, by the way, was AI as well, right? It was AI behind the scenes that determined what was being fed to you. Now it’s being exacerbated by AI that’s right in your face with chatbots or image generators. I think it’s very concerning.

It’s a whole spectrum—from the polarization that has been driven by mis- and disinformation, all the way to these parasocial relationships. There have been some really tragic cases, even suicides that were the result of a dialogue that sent someone who was in a really dangerous, fragile state to a terrible end.

AI evokes conversations about cognitive offloading. We often cite the calculator: yeah, we’re not as good at doing math in our heads, but in general, automating calculation has been a net good for our overall intelligence. Still, a lot of people are freaked out by the prospect of outsourcing thinking to these platforms.

I think about the calculator example a lot. There’s a difference between relying on a calculator to do calculations—which all of us do—and not understanding what a fraction means. You need to understand what a fraction means to just deal with the world. I think that’s the sorting out that needs to happen with large language models. 

I saw some Gallup polling that included asking students about their attitudes toward AI. I was really surprised to find out how anxious high schoolers, for example, are about AI. Part of their anxiety is a lack of clarity about when they can and can’t use it in school. But part of it is also concern about their critical thinking skills. I love the fact that they had good enough critical thinking skills to be worried about that.

Is there a risk that focusing too much on the AI race with China is going to prevent us from coming up with better regulations for the technology domestically in the United States?

That argument is being used to avoid regulation. But I think we need to be really clear that what’s happening right now is that every country around the world is racing to use AI as a tool to build a future that reflects its values.

I do not want to live in a future defined by this Chinese authoritarian government’s values. If you look at their human rights abuses, the way they have used AI to create a deep surveillance state . . . if you look at their military aggression and the potential for using AI in aggressive ways in the military context . . . that’s not a world that I think most people want to live in. 

It’s certainly not one that reflects long-held American values. Of course, it’s very concerning that we see some of those tactics being adopted here by our Department of Homeland Security. That’s a huge red flag about what’s happening with this authoritarian push in our government. 

But, again, the core question is: How do we bring AI to life to serve people and to build the kind of future that reflects the values we have—centered on people and their creativity and our ability to chart a course for ourselves, rather than letting that be driven by a king or a dictator? That’s what I want to be using AI for.

It strikes me that the Biden administration and the Trump administration both at least said they really care about government use of artificial intelligence. But at the same time, you’re saying there are concerns about that being used by the federal government to inch more toward authoritarian approaches. 

It’s all about how you use it. In the Biden administration, the Department of Homeland Security rolled up its sleeves and did the work, for example, to use facial recognition at TSA PreCheck or for Global Entry. These are places where there’s a very narrowly defined function, and you’re comparing a fresh camera image with a database that you have a legitimate reason to have. And if you’ve gone through TSA PreCheck or Global Entry, you can see how that has sped up and made those processes much better by using technology appropriately and respectfully. 

This is in stark contrast to the horror stories of police departments around the country that were using off-the-shelf facial recognition technology that purported to make matches from grainy video, for example, in a convenience store that had been held up. Really poor, completely inappropriate use of flawed facial recognition technology led to wrongful arrests of Black men—in one case for a crime committed in a state that the man had never set foot in. That’s completely unacceptable.

So the difference between using these technologies wisely and appropriately and with respect for our core values, and just using them flagrantly without really thinking through what it means for the society we want to live in—that’s all the difference in the world.

I’m wondering what you make of the rise of firms like Anduril and Palantir that are really interested in selling AI and automated platforms for use on the battlefield and for defense purposes. How should we be thinking about that? 

I want to broaden your question: it’s not just on a battlefield. These technologies are being deployed against Americans here at home, so it’s an incredibly important question. And the core issue is: Do we have democratic control over how the technology is used? These technologies, if misused, can violate Americans’ privacy in dangerous and horrific ways.

We’re seeing that right now with some of the things that are happening, and that’s just unacceptable. The companies tend to take the position of “I’m just providing the technology.” But the implementations they’re doing are contributing to this really dangerous misuse. That’s one example of a loss of democratic control over these very powerful new capabilities.

We hear a lot about the AI race. I think about the space race. There was the race to get someone into space. Then there was the race to get someone into orbit. And then there was the race to get someone to the moon. And now it’s to have people live on the moon. When will the AI race be over? When we say we need to be first in the AI race, I’m wondering: First to what?

That is the whole ball game—first to what?

What I keep thinking about, and what I really think we have to get focused on, is what AI can do for the things that fundamentally change people’s lives. We ran a conference called “AI Aspirations” in 2024, when I was still at the White House, and we highlighted seven different huge ambitions for AI. They ranged from closing educational gaps for our kids, to getting better drugs faster, to better weather forecasts, to new materials for advanced generations of semiconductor technology, to changing transportation infrastructure and making it much safer.

Right now, the conversation about AI is really just about LLMs and maybe image generators. But what we’re talking about is the more general power of training AI models on very different kinds of data. We live in such a data-rich world, and it’s not just language: it’s sensor data, scientific data, administrative data, financial data. It’s already every bit of data you generate when you’re clicking or navigating around on the web.

The other key point to me is that it won’t simply happen by companies commercializing products. There’s deep research that’s required. There are datasets that are required to build the weather models or the transportation models that we need. Those are public responsibilities. Ultimately, we need regulatory advances so that we don’t just invent things faster, but our regulatory process can sort out what is safe and effective—for example, for drugs. 

We’re at a point where this powerful technology is breaking loose. There’s no more important time for our federal government to be stepping up. And instead, it’s pulling back from so many other things that will determine who really succeeds at AI.
