The American Army already deploys robot soldiers in Iraq. Equipped with tank tracks and automatic weapons, these robotic units, known as SWORDS (Special Weapons Observation Reconnaissance Detection Systems), allow humans to attack the enemy by remote control.
Last week an engineer at the Naval Surface Warfare Centre, an American weapons-research and test establishment, published a set of laws to govern operations by killer robots. Citing the precedent set by the Tomahawk Anti-Ship Missile, CAPTOR Mine, Aegis Ships, automatic Cruise missile defense, and Patriot automated air defense, John Canning made the following proposals:
- Let the machines target other machines:
  - Specifically, design our armed unmanned systems to automatically ID, target, and neutralize or destroy the weapons used by our enemies, not the people using them.
  - This gives us the possibility of disarming a threat force without needing to kill anyone.
  - We can equip our machines with non-lethal technologies to persuade the enemy to abandon their weapons before our machines destroy them, and with lethal weapons to "kill" the weapons themselves.
- Let men target men:
  - In those instances where we find it necessary to target the human (i.e. to disable the command structure), the armed unmanned systems can be remotely controlled by human operators who are "in the weapons-control loop".
- Provide a "Dial-a-Level" of autonomy to switch between the two modes.
Canning quotes a legal specialist as saying, "We can target objects when they are military objectives and we can target people when they are military objectives. If people or property isn't a military objective, we don't target it. It might be destroyed as collateral damage, but we don't target it. Thus in many situations, we could target the individual holding the gun and/or the gun and legally there's no difference."
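Taken together, Canning's proposals and the legal rule above amount to a simple decision procedure: never engage non-military objectives, let machines engage weapons autonomously, and require a human operator in the loop before engaging a person. Here is a minimal sketch of that logic; all names (`AutonomyLevel`, `may_engage`, and the parameters) are hypothetical illustrations, not anything from Canning's paper:

```python
from enum import Enum


class AutonomyLevel(Enum):
    """The proposed 'Dial-a-Level' switch (names are illustrative)."""
    AUTONOMOUS = "machines target machines"
    HUMAN_IN_LOOP = "men target men"


def may_engage(target_is_weapon: bool,
               target_is_military_objective: bool,
               mode: AutonomyLevel,
               human_authorised: bool = False) -> bool:
    """Return True if engagement is permitted under the sketched rules.

    - Non-military objectives are never targeted (they may still be
      destroyed as collateral damage, but never deliberately engaged).
    - Weapons/machines may be engaged in either mode.
    - A person may only be engaged in human-in-the-loop mode, with an
      explicit operator authorisation.
    """
    if not target_is_military_objective:
        return False  # never target non-military objectives
    if target_is_weapon:
        return True  # machines targeting machines: allowed in both modes
    # Targeting a human requires the human-in-the-loop mode plus authorisation
    return mode is AutonomyLevel.HUMAN_IN_LOOP and human_authorised
```

For example, `may_engage(True, True, AutonomyLevel.AUTONOMOUS)` permits an autonomous strike on a weapon, while engaging the person holding it would require switching the dial to `HUMAN_IN_LOOP` and an operator's sign-off.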
Now The Economist reports on the research of Ronald Arkin of the Georgia Institute of Technology, who is generating "an artificial conscience for battlefield robots to ensure that their use of lethal force follows the rules of ethics, based on existing ethical decision-making protocols (e.g. the Geneva Convention), rules of engagement, and other ethical and military requirements."
Incidentally, for anyone feeling a little lonely, Mr Arkin is also working on behavioural development for a humanoid robot "with the long-term goal of providing highly satisfying long-term interaction and attachment formation by a human partner." Hardly bears thinking about, does it?
So much for Isaac Asimov's three laws of robotics!