By Ben Austen, Popsci
Last August, U.S. Navy operators on the ground lost all contact with a Fire Scout helicopter flying over Maryland. They had programmed the unmanned aerial vehicle to return to its launch point if ground communications failed, but instead the machine took off on a north-by-northwest route toward the nation's capital. Over the next 30 minutes, military officials alerted the Federal Aviation Administration and North American Aerospace Defense Command and readied F-16 fighters to intercept the pilotless craft. Finally, with the Fire Scout just miles shy of the White House, the Navy regained control and commanded it to come home. "Renegade Unmanned Drone Wandered Skies Near Nation's Capital," warned one news headline in the following days. "UAV Resists Its Human Oppressors, Joyrides over Washington, D.C.," declared another.
The Fire Scout was unarmed, and in any case hardly a machine with the degree of intelligence or autonomy necessary to wise up and rise up, as science fiction tells us the robots inevitably will do. But the world's biggest military is rapidly remaking itself into a fighting force consisting largely of machines, and it is working hard to make those machines much smarter and much more independent. In March, noting that "unprecedented, perhaps unimagined, degrees of autonomy can be introduced into current and future military systems," Ashton Carter, the U.S. undersecretary of defense for Acquisition, Technology and Logistics, called for the formation of a task force on autonomy to ensure that the service branches take "maximum practical advantage of advances in this area."
In Iraq and Afghanistan, U.S. troops have been joined on the ground and in the air by some 20,000 robots and remotely operated vehicles. The CIA regularly slips drones into Pakistan to blast suspected Al Qaeda operatives and other targets. Congress has called for at least a third of all military ground vehicles to be unmanned by 2015, and the Air Force is already training more UAV operators every year than fighter and bomber pilots combined. According to "Technology Horizons," a recent Air Force report detailing the branch's science aims, military machines will attain "levels of autonomous functionality far greater than is possible today" and "reliably make wide-ranging autonomous decisions at cyber speeds." One senior Air Force engineer told me, "You can envision unmanned systems doing just about any mission we do today." Or as Colonel Christopher Carlile, the former director of the Army's Unmanned Aircraft Systems Center of Excellence, has said, "The difference between science fiction and science is timing."
We are surprisingly far along in this radical reordering of the military's ranks, yet neither the U.S. nor any other country has fashioned anything like a robot doctrine or even a clear policy on military machines. As quickly as countries build these systems, they want to deploy them, says Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield in England: "There's been absolutely no international discussion. It's all going forward without anyone talking to one another." In his recent book Wired for War: The Robotics Revolution and Conflict in the 21st Century, Brookings Institution fellow P.W. Singer argues that robots and remotely operated weapons are transforming wars and the wider world in much the way gunpowder, mechanization and the atomic bomb did in previous generations. But Singer sees significant differences as well. "We're experiencing Moore's Law," he told me, citing the axiom that computer processing power will double every two years, "but we haven't got past Murphy's Law." Robots will come to possess far greater intelligence, with more ability to reason and self-adapt, and they will also of course acquire ever greater destructive power. So what does it mean when whatever can go wrong with these military machines, just might?
I asked that question of Werner Dahm, the chief scientist of the Air Force and the lead author on "Technology Horizons." He dismissed as fanciful the kind of Hollywood-bred fears that informed news stories about the Navy Fire Scout incident. "The biggest danger is not the Terminator scenario everyone imagines, the machines taking over—that's not how things fail," Dahm said. His real fear was that we would build powerful military systems that would "take over the large key functions that are done exclusively by humans" and then discover too late that the machines simply aren't up to the task. "We blink," he said, "and 10 years later we find out the technology wasn't far enough along."
Dahm's vision, however, suggests another "Terminator scenario," one more plausible and not without menace. Over the course of dozens of interviews with military officials, robot designers and technology ethicists, I came to understand that we are at work on not one but two major projects: the first to give machines ever greater intelligence and autonomy, and the second to maintain control of those machines. Dahm was worried about the success of the former, but we should be at least as concerned about the failure of the latter. If we make smart machines without equally smart control systems, we face a scenario in which some day, by way of a thousand well-intentioned decisions, each one seemingly sound, the machines do in fact take over all the "key functions" that once were our domain. Then "we blink" and find that the world is one we are no longer able to comprehend or control.