THE HUMAN RACE VS. AUTONOMOUS WEAPONS
What if intelligent weapons could seek out their targets on their own? What if civilization were attacked by millions of “slaughterbots” at once? It is hard to imagine a more frightening scenario. And yet such weapons are already on the verge of becoming reality. Leading scientists warn that they must be stopped now.
“We’d be largely defenseless against an attack by intelligent autonomous weapons. Up until now, no nation has developed a satisfactory response,” says Stuart Russell. The UC Berkeley computer scientist is talking about a scenario in which groups of terrorists deploy thousands of “slaughterbots” against a major city like Washington or Moscow: flying drones programmed to seek out specific individuals as targets, or to open fire on people who simply share certain characteristics that the drones identify on their own. Science fiction?
“AUTONOMOUS WEAPONS ARE EASIER TO ACHIEVE THAN SELF-DRIVING CARS.”
WE’D BE LARGELY DEFENSELESS AGAINST AN ATTACK BY INTELLIGENT AUTONOMOUS WEAPONS. -STUART RUSSELL, UNIVERSITY OF CALIFORNIA
Russell is not a member of the military looking to secure a bigger budget for ever-deadlier offensive or defensive weapons. On the contrary: He wants lethal autonomous weapons (LAWs) to be prohibited in the same manner as nuclear or biological armaments. Nor is he a neo-Luddite or an opponent of artificial intelligence (AI). Quite the opposite: Russell, a professor of computer science at the University of California, Berkeley, has written several standard works on AI and has helped make the world’s intelligent robots more capable. But after more than 35 years of working in AI, he has become skeptical, even though he continues to view the field positively on the whole: “Our entire civilization, everything we value, is based on our intelligence. And if we have access to a lot more intelligence, then there is really no limit to what the human race can do.” The same holds for LAWs; the crucial difference is that they “know” no limits: Individuals could carry out an attack that claims a number of victims approaching the scale of a nuclear bomb blast. “The technology is easier to achieve than self-driving cars, which require far higher standards of performance,” says Russell. In fact, the first slaughterbot has already been deployed …
… Armenia, April 2016: The tiny landlocked country in the Caucasus was caught up in a conflict with neighboring Azerbaijan when Azerbaijani forces launched an Israeli-made drone against a bus carrying Armenian recruits. Seven of them died. The IAI Harpy used in the strike has about the same wingspan as a California condor (10 feet) and can operate fully autonomously. It located the bus and crashed into it, detonating its 50 pounds of explosives.
WEAPONS PRODUCED BY A 3-D PRINTER
“The U.S. military is working on a program that would allow them to use a 3-D printer to make thousands of disposable kill systems on demand,” says Frank Sauer, an expert in global security at Germany’s Bundeswehr (“armed forces”) University in Munich. “And they’ll function with a relatively high degree of autonomy.” The Defense Advanced Research Projects Agency (DARPA), the U.S. Defense Department’s elite research agency for innovative technology, invented the forerunner of the Internet, stealth technology for military fighter jets, and GPS, and it is already working on slaughterbots.
Only a few years ago, a computer could not tell the difference between a cat and a dog. Today the algorithms used to analyze data can identify relationships that elude even experienced experts, who can only look on in amazement. A researcher at Stanford University used a deep-learning algorithm to evaluate photos of white Americans who had self-identified as either straight or gay and found that it could identify gay men from their photos with 81% accuracy, far better than human judges looking at the same pictures. Just a few years back, it took months for a computer to become “smart” enough to reach grandmaster level in chess. Now Matthew Lai, a computer scientist at Imperial College London in the UK, has developed a learning system in which a computer teaches itself to play championship-level chess. After only 72 hours of playing against itself, the computer was more proficient than 98% of ranked human chess players. Creating a slaughterbot that can seek out and eliminate a target would appear to be child’s play by comparison. After all, facial recognition software is becoming increasingly widespread, even on today’s smartphones.
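To give a sense of just how low that technical barrier has become, here is a minimal sketch of photo-based face matching using the freely available open-source Python library face_recognition; the file names are hypothetical placeholders, and this is an illustration of commodity technology, not the software of any actual weapon system.

```python
# Illustrative sketch only: compare every face in a scene against one reference photo.
# Assumes the open-source "face_recognition" library; "target.jpg" and "scene.jpg"
# are hypothetical example files.
import face_recognition

# Encode the face in a reference photo of the person being searched for.
target = face_recognition.load_image_file("target.jpg")
target_encoding = face_recognition.face_encodings(target)[0]

# Detect and encode every face in a new image, e.g. a single camera frame.
scene = face_recognition.load_image_file("scene.jpg")
for encoding in face_recognition.face_encodings(scene):
    # True if the detected face is a likely match for the reference photo.
    match = face_recognition.compare_faces([target_encoding], encoding)[0]
    print("match" if match else "no match")
```

A few lines of off-the-shelf code, in other words, already perform the recognition step; everything else is engineering.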
The fate of the gorillas, whose survival now depends entirely on the goodwill of a more intelligent species, could soon become our own: If AI is not equipped with an absolutely fail-safe software barrier, it might try to emancipate itself from the yoke of its creators, resulting in a catastrophic robotic takeover by our own inventions. “AI needs to be perfectly programmed,” says AI theorist Eliezer Yudkowsky. It may be impossible to reset it or to update or uninstall its software after the fact… “I’m not sure I’d want to be the one holding the kill switch for some superpowered AI,” contends entrepreneur and engineer Elon Musk. “You would be the first thing it kills.”