Nuclear Experts Say Mixing AI and Nuclear Weapons Is Inevitable

Human judgement remains central to the launch of nuclear weapons. But experts say it’s a matter of when, not if, artificial intelligence will get baked into the world’s most dangerous systems.

Wired

The people who study nuclear war for a living are certain that artificial intelligence will soon power the deadly weapons. None of them are quite sure what, exactly, that means.

In the middle of July, Nobel laureates gathered at the University of Chicago to listen to nuclear war experts talk about the end of the world. In closed sessions over two days, scientists, former government officials, and retired military personnel enlightened the laureates about the most devastating weapons ever created. The goal was to educate some of the most respected people in the world about one of the most horrifying weapons ever made and, at the end of it, have the laureates make policy recommendations to world leaders about how to avoid nuclear war.

AI was on everyone’s mind. “We’re entering a new world of artificial intelligence and emerging technologies influencing our daily life, but also influencing the nuclear world we live in,” Scott Sagan, a Stanford professor known for his research into nuclear disarmament, said during a press conference at the end of the talks.

It’s a statement that takes as given the inevitability of governments mixing AI and nuclear weapons—something everyone I spoke with in Chicago believed in.

“It’s like electricity,” says Bob Latiff, a retired US Air Force major general and a member of the Bulletin of the Atomic Scientists’ Science and Security Board. “It’s going to find its way into everything.” Latiff is one of the people who helps set the Doomsday Clock every year.

“The conversation about AI and nukes is hampered by a couple of major problems. The first is that nobody really knows what AI is,” says Jon Wolfsthal, a nonproliferation expert who’s the director of global risk at the Federation of American Scientists and was formerly a special assistant to Barack Obama.

“What does it mean to give AI control of a nuclear weapon? What does it mean to give a [computer chip] control of a nuclear weapon?” asks Herb Lin, a Stanford professor and Doomsday Clock alum. “Part of the problem is that large language models have taken over the debate.”

First, the good news. No one thinks that ChatGPT or Grok will get nuclear codes anytime soon. Wolfsthal tells me that there are a lot of “theological” differences between nuclear experts, but that they’re united on that front. “In this realm, almost everybody says we want effective human control over nuclear weapon decisionmaking,” he says.

Still, Wolfsthal has heard whispers of other concerning uses of LLMs in the heart of American power. “A number of people have said, ‘Well, look, all I want to do is have an interactive computer available for the president so he can figure out what Putin or Xi will do and I can produce that dataset very reliably. I can get everything that Xi or Putin has ever said and written about anything and have a statistically high probability to reflect what Putin has said,’” he says.

“I was like, ‘That’s great. How do you know Putin believes what he’s said or written?’ It’s not that the probability is wrong, it’s just based on an assumption that can’t be tested,” Wolfsthal says. “Quite frankly, I think very few of the people who are looking at this have ever been in a room with a president. I don’t claim to be close to any president, but I have been in the room with a bunch of them when they talk about these things, and they don’t trust anybody with this stuff.”

Last year, Air Force General Anthony J. Cotton, the military leader in charge of America’s nukes, gave a long speech at a conference about the importance of adopting AI. He said the nuclear forces were “developing artificial intelligence or AI-enabled, human-led, decision support tools to ensure our leaders are able to respond to complex, time-sensitive scenarios.”

What keeps Wolfsthal up at night is not the idea that a rogue AI will start a nuclear war. “What I worry about is that somebody will say we need to automate this system and parts of it, and that will create vulnerabilities that an adversary can exploit, or that it will produce data or recommendations that people aren’t equipped to understand, and that will lead to bad decisions,” he says.

Launching a nuclear weapon is not as simple as one leader in China, Russia, or the US pushing a button. Nuclear command and control is an intricate web of early warning radar, satellites, and other computer systems monitored by human beings. If the president orders the launch of an intercontinental ballistic missile, two human beings must turn keys in concert with each other in an individual silo to launch the nuke. The launch of an American nuclear weapon is the end result of a hundred little decisions, all of them made by humans.

What will happen when AI takes over some of that process? What happens when an AI is watching the early warning radar and not a human? “How do you verify that we’re under nuclear attack? Can you rely on anything other than visual confirmation of the detonation?” Wolfsthal says. US nuclear policy requires what’s called “dual phenomenology” to confirm that a nuclear strike has been launched: An attack must be confirmed by both satellite and radar systems to be considered genuine. “Can one of those phenomena be artificial intelligence? I would argue, at this stage, no.”

One of the reasons is basic: We don’t fully understand how many AI systems arrive at their outputs. They’re black boxes. Even if they weren’t, experts say, integrating them into the nuclear decisionmaking process would be a bad idea.

Latiff has his own concerns about AI systems reinforcing confirmation bias. “I worry that even if the human is going to remain in control, just how meaningful that control is,” he says. “I’ve been a commander. I know what it means to be accountable for my decisions. And you need that. You need to be able to assure the people for whom you work there’s somebody responsible. If Johnny gets killed, who do I blame?”

Just as AI systems can’t be held responsible when they fail, they’re also bound by guardrails, training data, and programming. They cannot see outside themselves, so to speak. Despite their much-hyped ability to learn and reason, they are trapped by the boundaries humans set.

Lin brings up Stanislav Petrov, a lieutenant colonel of the Soviet Air Defence Forces who saved the world in 1983 when he decided not to pass an alert from the Soviet Union’s nuclear warning systems up the chain of command.

“Let’s pretend, for a minute, that he had relayed the message up the chain of command instead of being quiet … as he was supposed to do … and then world holocaust ensues. Where is the failure in that?” Lin says. “One mistake was the machine. The second mistake was the human didn’t realize it was a mistake. How is a human supposed to know that a machine is wrong?”

Petrov didn’t know the machine was wrong. He guessed based on his experiences. His radar told him that the US had launched five missiles, but he knew an American attack would be all or nothing. Five was a small number. The computers were also new and had worked faster than he’d seen them perform before. He made a judgement call.

“Can we expect humans to be able to do that routinely? Is that a fair expectation?” Lin says. “The point is that you have to go outside your training data. You must go outside your training data to be able to say: ‘No, my training data is telling me something wrong.’ By definition, [AI] can’t do that.”

Donald Trump and the Pentagon have made it clear that AI is a top priority, and have invoked the nuclear arms race to do it. In May, the Department of Energy declared in a post on X that “AI is the next Manhattan Project, and the UNITED STATES WILL WIN.” The administration’s “AI Action Plan” depicted the rush towards artificial intelligence as an arms race, a competition against China that must be won.

“I think it’s awful,” Lin says of the metaphors. “For one thing, I knew when the Manhattan Project was done, and I could tell you when it was a success, right? We exploded a nuclear weapon. I don’t know what it means to have a Manhattan Project for AI.”
