…the ethical and legal implications of the new technology already go far beyond the relatively circumscribed issue of targeted killing. Military robots are on their way to developing considerable autonomy. As noted earlier, UAVs can already take off, land, and fly themselves without human intervention. Targeting is still the exclusive preserve of the human operator—but how long will this remain the case? As sensors become more powerful and diverse, the amount of data gathered by the machines is increasing exponentially, and soon the volume and velocity of information will far exceed the controller’s capacity to process it all in real time, meaning that more and more decision-making will be left to the robot.
A move is already underway toward systems that allow a single operator to handle multiple drones simultaneously, and this, too, will tend to push the technology toward greater autonomy. We are not far from the day when it will become manifest that our mechanical warriors are better at protecting the lives of our troops than any human soldier, and once that happens the pressure to let robots take the shot will be very hard to resist. Pentagon officials who have been interviewed on the subject predictably insist that the decision to kill will never be ceded to a machine. That is reassuring. Still, this is an easy thing to say at a point when robots are not yet in the position to take the initiative against the enemy on a battlefield. Soon, much sooner than most of us realize, they will be able to do just that.
While the Pentagon may state that the decision to kill will never be ceded to a machine, I'm not so sure. What if another country with similar technological capability delegates kill decisions to its toy army? A battle between a fully autonomous robot and a human-controlled one will quickly be won by the autonomous robot: a machine that doesn't need a human to pull the trigger can process more information and act far faster than a human can. It seems almost inevitable that if one country cedes kill decisions to robots, all of us will.