
It seems we have entered the era of AI warfare that the movies always warned us about.
Some of this has, admittedly, been happening for years. Much of war is already conducted via drone. Militaries around the world now run high-fidelity simulations to plan potential attacks, and some soldiers even use virtual reality. A new generation of defense tech companies is competing for its slice of the military-industrial complex.
But now it’s clear that defense officials are turning to chatbots for serious combat and military missions, including operations that aim to capture, and even take out, heads of state. That was the case earlier this year, when the United States launched an operation to capture Nicolás Maduro, then the president of Venezuela and now a federal detainee. It was true again this past Friday, when the American military launched a major attack on the Iranian regime and killed the country’s leader, Ayatollah Ali Khamenei.
Both operations involved Claude, the family of large language models created by the frontier AI lab Anthropic.
How did we get here? The U.S. military has sought and developed high-tech tools for decades. Modern sensor and surveillance platforms have allowed it to collect ever more data and, in turn, to use that data as the foundation for new algorithmic models. The exact definition of artificial intelligence has always been pliable, but even in the 2000s, agencies like DARPA were pursuing robotics and autonomous-vehicle projects. Military organizations supported early efforts to use machine learning, too.
The military’s AI push became even more formalized in 2017, when the Defense Department announced Project Maven, an effort meant to streamline military data platforms and create a foundation for deploying algorithms and other advanced technologies, including computer vision and object detection, on the battlefield. After internal pushback and widespread protests, Google backed out of building Maven, and Palantir now provides the primary technology for the tool. In 2018, the Pentagon also created the Joint Artificial Intelligence Center to centralize its work on emerging technology. This later became the Chief Digital and Artificial Intelligence Office, which aims to “accelerate” the adoption of AI across military branches.
What makes this moment feel so uncanny is that the military seems to be using the same AI tools ordinary consumers use, but in far more violent contexts. And because those tools are so familiar, it’s easy to imagine the military using them in the same casual, prompt-and-response way we do. Perhaps, as one internet user suggested, someone in the DoD simply wrote to Claude: “Claude, kidnap the dictator of Venezuela… Make no mistakes,” in much the same way we ask it to squeeze out one more email reply.
(For the record, when I ask Claude about its role in these operations, it denies any involvement: “I didn’t help with any such operations,” my chatbot tells me. “I’m Claude, an AI assistant made by Anthropic. I don’t have operational capabilities, I don’t take actions in the world, and I have no involvement in geopolitical or covert operations of any kind.”)
We know, though, that Claude was used in both recent operations, even if the AI was probably doing something far more complex than responding to an offhand prompt. There are many questions about how, exactly, Claude was used in the Venezuela and Iran operations. But we do know that Claude is highly popular inside the military and across the government. Former Defense Department AI officials and Palantir employees told me last week that the tool works alongside Maven, the military’s flagship AI program. We also know that, at least during the Maduro operation, Anthropic’s technology was accessed through a classified service offered to the military via Palantir. Whatever happened, it was very likely more complicated than simply asking Claude to draw up an attack plan and running with it.
This isn’t going away. Despite the federal government’s ongoing effort to purge Anthropic’s tech from its systems, there’s no sign the Pentagon is done with LLMs. OpenAI and xAI have also won large DoD contracts, and, in the past week, both companies signed agreements that will allow their technology to be used on classified systems. (Hooking up technology from xAI or OpenAI to Defense Department systems might be as simple as connecting it via an API, a former Palantir employee tells me.) The DoD also maintains a dedicated generative AI resource called GenAI.mil.
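For a sense of what “as simple as connecting via an API” means in practice, here is a minimal sketch of the kind of hookup the commercial world uses every day: the OpenAI Python SDK pointed at a chat-completions endpoint. The gateway URL, credential, model name, and prompt below are hypothetical placeholders for illustration, not anything the Defense Department actually exposes.

```python
# A minimal sketch of "connecting via an API": the same OpenAI-compatible
# chat endpoint consumers use, just pointed at a private gateway.
# The base_url, env var, and model below are hypothetical placeholders,
# not real DoD infrastructure.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://llm-gateway.example.internal/v1",  # hypothetical internal proxy
    api_key=os.environ["GATEWAY_API_KEY"],               # placeholder credential
)

response = client.chat.completions.create(
    model="gpt-4o",  # whichever model a given contract covers
    messages=[
        {"role": "system", "content": "You are a research assistant."},
        {"role": "user", "content": "Summarize this logistics report: ..."},
    ],
)

print(response.choices[0].message.content)
```

The point is not that military use looks exactly like this, but that the plumbing is the same commodity interface the rest of us call, which is partly why these deployments can happen so quickly.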
It isn’t hard to see why. I often use chatbots, platforms like Claude and ChatGPT, to handle mildly annoying research tasks I would rather not do, along with countless other things that have made me far more productive. But they can also make me more careless, and I know how tempting it is to offload thinking to a third, entirely technological party. That makes it all the more unnerving to remember that the U.S. military is using these same chatbots in ways that are far more secretive, and far more geopolitically significant.