
Eric Schmidt Thinks AI Is as Powerful as Nukes

An AI deterrence regime would necessitate an AI Hiroshima.
Liyao Xie / Getty Images / Twitter Screengrab

Former Google CEO Eric Schmidt compared AI to nuclear weapons and called for a deterrence regime similar to the mutually assured destruction that keeps the world’s most powerful countries from destroying each other.

Schmidt talked about the dangers of AI at the Aspen Security Forum during a panel on national security and artificial intelligence on July 22. Fielding a question about the value of morality in tech, Schmidt explained that he himself had been naive about the power of information in the early days of Google. He then called for tech to better align with the ethics and morals of the people it serves, and made a bizarre comparison between AI and nuclear weapons.


Schmidt imagined a near future where China and the U.S. needed to cement a treaty around AI. “In the 50s and 60s, we eventually worked out a world where there was a ‘no surprise’ rule about nuclear tests and eventually they were banned,” Schmidt said. “It’s an example of a balance of trust, or lack of trust, it’s a ‘no surprises’ rule. I’m very concerned that the U.S. view of China as corrupt or Communist or whatever, and the Chinese view of America as failing… will allow people to say ‘Oh my god, they’re up to something,’ and then begin some kind of conundrum. Begin some kind of thing where, because you’re arming or getting ready, you then trigger the other side. We don’t have anyone working on that, and yet AI is that powerful.”

AI and machine learning are impressive and frequently misunderstood technologies. They are, largely, not as smart as people think. AI can churn out masterpiece-level artwork, beat humans at StarCraft II, and make rudimentary phone calls for users. Attempts to get it to do more complicated tasks, however, like driving a car through a major city, haven’t been going so well.


The near future Schmidt imagines, in which security concerns force the U.S. and China into a deterrence treaty around AI, echoes the 1950s and ’60s, when diplomacy crafted a series of controls around the deadliest weapons on the planet. But getting to a world with the Nuclear Test Ban Treaty, SALT II, and other landmark arms control agreements took decades of nuclear explosions and, critically, the destruction of Hiroshima and Nagasaki.

America’s destruction of two Japanese cities at the end of World War II killed more than a hundred thousand people and proved to the world the everlasting horror of nuclear weapons. The governments of the Soviet Union and China then rushed to acquire weapons of their own. The way we live with the possibility these weapons will be used is through mutual assured destruction (MAD), a theory of deterrence which holds that if one country launches a nuke, every other country may launch one in response. We don’t use the most destructive weapons on the planet because doing so could destroy, at the very least, civilization around the globe.

Despite Schmidt’s colorful comments, we don’t want or need MAD for AI. For one, AI hasn’t proved itself anywhere near as destructive as nuclear weapons. But people in positions of power fear this new technology, typically for all the wrong reasons. Some have even suggested handing control of nuclear weapons over to AI, theorizing that machines would be better arbiters of their use than humans.

The problem with AI is not that it has the potentially world-destroying force of a nuclear weapon; it’s that AI is only as good as the people who design it, and it reflects their values. AI suffers from the classic “garbage in, garbage out” problem: racist algorithms make racist robots, all AI carries the biases of its creators, and a chatbot trained on 4chan becomes vile.

This is something DeepMind CEO Demis Hassabis, whose company trained the AI that’s beating StarCraft II players, seems to understand better than Schmidt. In a July interview on the Lex Fridman podcast, Fridman asked Hassabis how a technology as powerful as AI could be controlled and how Hassabis himself might avoid being corrupted by that power.

Hassabis’ answer focused on the people building the systems, himself included. “AI is too big an idea,” he said. “It matters who builds [AI], which cultures they come from, and what values they have, the builders of AI systems. The AI systems will learn for themselves… but there’ll be a residue in the system of the culture and values of the creators of that system.”

AI is a reflection of its creator. It can’t level a city in a 1.2-megaton blast. Not unless a human teaches it to do so.