Meet Ed Zitron, AI’s original prophet of doom

America post Staff



Ed Zitron peels off his green button-up shirt to reveal the gray tee beneath. Now properly uniformed, two cans of Diet Coke queued up before him, he’s ready to record this week’s episode of his podcast, Better Offline, at audio behemoth iHeartMedia’s midtown Manhattan studio.

The topic on this July afternoon, as usual, is artificial intelligence. One of Zitron’s guests, screenwriter, director, and producer Brian Koppelman, talks about paying $200 a month for ChatGPT Pro. When Koppelman earnestly asks, “Do you not think AI is mind-bogglingly great at times?” Zitron’s answer—“No!”—comes so quickly it seems to spring directly from his cerebral cortex.

It would have been startling if he’d responded any other way.

As AI has become the tech industry’s principal obsession, Zitron—who runs a public relations firm that represents technology companies—has developed an unexpected side hustle as one of its highest-profile naysayers. “I’ve tried all of these different things, and I still can’t tell you with clarity what it is that’s so amazing with these products,” he tells me.

Countless people in and around the tech industry share Zitron’s dim view of generative AI’s usefulness, the billions of dollars that companies are pouring into the technology, and its voracious appetite for computing resources. But his take-no-prisoners punditry sets him apart from other noted gadflies such as cognitive scientist Gary Marcus.

On Better Offline and in his email newsletter, Where’s Your Ed At, he’s particularly unsparing in his appraisal of CEOs such as Meta’s Mark Zuckerberg (“a monster”), OpenAI’s Sam Altman (“a con man”), and Microsoft’s Satya Nadella (“either a liar or a specific kind of idiot”).

Zitron says that his work is motivated by “[seeing] these bastards and what they’re doing, how much money they’re making doing it, and how shameless they are.” He has his own name for the pursuit of growth above all other goals, regardless of its impact on customers and society at large: the “rot economy.” He believes the current AI boom will end in disaster.

“When it’s very obvious the money isn’t there, there’s going to be a big, horrible correction with tech stocks—a harmful one,” he declares, referring to the fallout should AI companies never become profitable. “I say this with a degree of trepidation, because it’s not going to be fun.”

Zitron’s influence in the AI conversation is palpable and still expanding. On Bluesky, where he has 169,700 followers, attorney and activist Will Stancil recently wrote, “People love to say ‘I’m begging you to read something by an actual expert’ and they mean, specifically, Ed Zitron.” Produced by Cool Zone Media, an iHeartMedia subsidiary specializing in podcasts of a progressive bent, Better Offline is regularly among the 15 most popular tech shows on Spotify and Apple Podcasts. A bustling Reddit forum spun off from the podcast attracts 74,000 people a week with links to news stories about AI and caustic, sometimes darkly funny conversations about them.

Where’s Your Ed At—its name riffs on “Where’s Your Head At,” a 2001 song by U.K. electronic music duo Basement Jaxx—has more than 80,000 readers, about 3,000 of whom receive bonus newsletters available exclusively to subscribers who pay $70 per year and up, an option Zitron added last June. His book on tech dysfunction, Why Everything Stopped Working, is due out in late 2026 or early 2027.

Yet as prolific as Zitron is, he doesn’t feel remotely tapped out. “As things get more brittle and chaotic,” he says, “there’s only going to be more things for me to rifle through and explain to people.”


Zitron didn’t set out to build a mini media empire around AI doomerism. The West London–born tech enthusiast, a onetime video game journalist, founded his company EZPR in New York in 2012 and went on to write two books about public relations. When he sent out his first Where’s Your Ed At newsletters, in 2019, he focused on personal interests such as gaming, his Peloton, and the NFL draft. And then he didn’t get around to publishing again for a year and a half.

Late in 2020, he caught COVID. Suddenly in need of activities to fill his time, he found his newsletter a welcome distraction. “If I’m not writing, I haven’t really thought through anything,” he explains. “So I just started writing every day.” Increasingly, he turned his attention to the tech industry’s ills, leading to the February 2023 piece in which he coined the term the “rot economy.” It quickly went viral.

Over time, Zitron has found a voice that comes off as entirely uncensored. He runs his newsletters by an editor—fellow Brit and tech-skeptic newsletter author Matt Hughes—but you wouldn’t know it from their style and substance. One particularly operatic recent example, last July’s “The Hater’s Guide to the AI Bubble,” marshals 14,500 words of facts, figures, and spicy commentary (Salesforce’s claims for its Agentforce AI are “a blatant fucking lie”) to argue that tech giants and startups alike are wasting billions pushing products built “on vibes and blind faith.” He also turned “The Hater’s Guide” into a four-part Better Offline series, where his accent and dramatic flair only heighten its impact. On Reddit, one fan called him “the David Attenborough of AI critique.”

Zitron says that his supremely pissed-off persona isn’t just a schtick. “It’s just never come easily to me to pretend to be anything other than what I am,” he stresses. His friend and fellow tech critic Molly White, author of the crypto-busting newsletter Citation Needed, agrees. “He’s very passionate about the stuff that he is writing about,” she says. “I think it sort of consumes him and his attention.”

Yet the full story of his relationship with AI is more complex. Along with savaging the technology in newsletters and on podcasts, he pitches its benefits to media outlets (including Fast Company) on behalf of EZPR’s clients. Startups that he’s repped range from technical assessment platform CodeSignal to Nomi, which touts its chatbot’s ability to serve as a virtual companion, girlfriend, or boyfriend.

Zitron rejects the idea that his two jobs—AI basher and AI promoter—present any fundamental tension or conflict of interest. At EZPR, he says, “What I advocate for are companies with real purpose that do things their customers like, that build sustainable businesses based on actual use cases.” Does his growing fame as a writer and podcaster benefit his PR firm? He allows that it helps—journalists recognize his name and are more likely to open his emails—but considers that a side effect. The point, he says, is to speak his mind on a topic he cares deeply about.


Evidence is mounting that some of the initial exuberance over generative AI was, in fact, irrational. A recent MIT study reported that 95% of enterprise pilot programs involving the technology hadn’t shown a return on investment; another from Bain says that even by 2030, the tech industry might be $800 billion short of finding enough new revenue to fund the computing resources necessary to keep up with demand for AI. Speaking with reporters in August, OpenAI’s Altman admitted the existence of a bubble. “Are we in a phase where investors as a whole are overexcited about AI?” he asked. “My opinion is yes.” Nonetheless, he added that OpenAI intends to invest trillions in additional data center infrastructure.

That same month, OpenAI released a new version of ChatGPT built atop GPT-5, the latest update to its large language model. Once widely anticipated as a giant leap forward, it landed with a thud once users tried it and deemed it less than transformative. To Zitron, it was a classic example of the company’s puffery exceeding its product road map. “Two years ago, people were talking about GPT-5 like it was going to be AI Jesus,” he says. “I feel that OpenAI likely had to get something out the door.”

Arguing that Altman’s stated plans for OpenAI—such as building 250 gigawatts of data center capacity in eight years—are impossible, Zitron continues to press the case that the company will run out of venture funding before reaching self-sufficiency. “OpenAI is not building ‘the AI industry,’ as this is capacity for one company that burns billions of dollars and has absolutely no path to profitability,” he wrote in an October newsletter. “This is a giant, selfish waste of money and time, one that will collapse the second that somebody’s confidence wavers.”

OpenAI’s failure, he contends, could take out other companies such as cloud-computing provider CoreWeave. It would also inflict serious damage on giants such as SoftBank, which led OpenAI’s $40 billion investment round last March, and Nvidia, whose chips power most of the world’s generative AI. Citing one VC’s estimate that AI funding could dry up within six quarters, Zitron has said the industry could face “total collapse” in early 2027.

Even as the industry braces for a correction, Zitron’s prediction that it will effectively cease to exist makes him an outlier. “I just don’t think that Ed makes a strong case that this is going to happen,” says Timothy B. Lee, author of the newsletter Understanding AI. You don’t need to buy Altman’s utopian vision of “intelligence too cheap to meter” to accept the possibility that AI has a future. For OpenAI to go under as Zitron foresees, it would have to never find a way to operate at a profit, despite any technological efficiencies, price adjustments, or new markets yet to come.

In his newsletter and on his podcast, Zitron projects an air of ferocious certitude. In person, he is willing to toy with the notion that his prognostications might not pan out. Characterizing himself as “a brokenhearted romantic” when it comes to tech, he says he’d welcome being proven wrong—and would write about it.

“It’ll be really annoying, and I really don’t think it’ll happen,” he emphasizes. “But the only way to do this [work] honestly is to be prepared for that, to be willing for that to happen.” As a commentator, Zitron’s stock-in-trade is the gusto with which he dismantles assessments of AI he considers invalid. Now the only question is whether he’ll get to say he told us so—or end up being his own ripest target.


