Sweeney is not off base. Just last year, U.S. federal judges ruled that Google had illegally monopolized two markets: adtech and online search. The decisions came six years after Meta, then Facebook, was forced to pay $5 billion to the FTC and modify its business practices after it was found to have mishandled reams of user data in the now-infamous Cambridge Analytica scandal.
“Technology just ignores [laws] and rewrites them. That’s true in social media. It’s true in AI as well,” Sweeney told Schmidt. “We already have laws [that] address issues of bias, consumer protection, and so forth. None of those are enforced online.”
Sweeney suggested that mitigating the risks associated with AI, including algorithmic harms and biases encoded into AI systems through their training data, requires foundational changes rather than after-the-fact fixes.
“There are questions about existential harms in the future, but there are a lot of harms happening right now. And it doesn’t have to be that way…. It depends on who this AI is servicing, and in particular, the design of the technology—the decisions made in that design is really determining what our values will be,” she said.
Schmidt pushed back on the implication that all harms could be preempted with better design and training, arguing that leading AI programs are not simple machinery but complex, non-linear systems that often develop unforeseen capabilities. But he agreed with Sweeney’s assertion that Silicon Valley leaders have at times “rushed” products to market, adding, “They’ve found all sorts of problems, and then they’re busy correcting them. I think that’s the cycle, and it’s very hard.”
Luckily, he said, AI developers have model evaluation cards and safety-testing teams in place to mitigate as much risk as possible in advance.
But these measures are insufficient for protecting humanity, according to another panelist, Nate Soares. Soares is president of the Machine Intelligence Research Institute in Berkeley, California, and co-author of If Anyone Builds It, Everyone Dies, a 2025 book on the existential risks posed by AI.
As Soares explained, the primary safety and governance focus areas in leading AI labs today are interpretability research, or “trying to figure out what’s going on inside the AIs’ heads,” and model evaluation cards, “which are trying to figure out how dangerous the AIs are.”
He likened these efforts to a comically inadequate attempt to curtail nuclear disasters. “If someone was making a nuclear power plant in your hometown, and you went to them and you said, ‘Hey, I hear that this uranium stuff can have lots of energy benefits, but also can melt down when things go badly. What have you guys got that makes you think you’re going to get the benefits and not the pitfalls?’ If the engineers say—‘Oh yeah, we’ve got two crack teams working on this; the first team is trying to figure out what the heck is going on inside, and the second team is trying to measure whether it’s currently exploding’—that’s not a good sign.”