18 April 2007

Terminator: Rise of the Machines

The American Army already deploys robot soldiers in Iraq. Equipped with tank tracks and automatic weapons, these robotic units, known as SWORDS (Special Weapons Observation Reconnaissance Detection Systems), allow humans to attack the enemy by remote control.

Last week an engineer at the Naval Surface Warfare Centre, an American weapons-research and test establishment, published a set of laws to govern operations by killer robots. Citing the precedent set by the Tomahawk anti-ship missile, the CAPTOR mine, Aegis ships, automatic cruise-missile defence, and Patriot automated air defence, John Canning made the following proposals:

  • Let the machines target other machines
    • Specifically, let’s design our armed unmanned systems to automatically ID, target, and neutralize or destroy the weapons used by our enemies – not the people using the weapons.
    • This gives us the possibility of disarming a threat force without the need to kill them.
    • We can equip our machines with non-lethal technologies for the purpose of convincing the enemy to abandon their weapons prior to our machines destroying the weapons, and lethal weapons to kill their weapons.
  • Let men target men
    • In those instances where we find it necessary to target the human (i.e. to disable the command structure), the armed unmanned systems can be remotely controlled by human operators who are “in-the-weapons-control-loop”.
  • Provide a “Dial-a-Level” of autonomy to switch from one to the other mode.
Canning quotes a legal specialist as saying, "We can target objects when they are military objectives and we can target people when they are military objectives. If people or property isn't a military objective, we don't target it. It might be destroyed as collateral damage, but we don't target it. Thus in many situations, we could target the individual holding the gun and/or the gun and legally there's no difference."

Now, The Economist reports on the research of Ronald Arkin of the Georgia Institute of Technology, who is generating an artificial conscience for battlefield robots to ensure that their use of lethal force follows the rules of ethics, based on existing ethical decision-making protocols (e.g. the Geneva Convention), rules of engagement, and other ethical and military requirements.

Incidentally, for anyone feeling a little lonely, Mr Arkin is also working on behavioural development for a humanoid robot "with the long-term goal of providing highly satisfying long-term interaction and attachment formation by a human partner."  Hardly bears thinking about, does it...?!

So much for Isaac Asimov's three laws of robotics!

3 comments:

Rebecca said...

"If an autonomous robot kills someone, whose fault is it?"

The BBC's report on the debate notes that Samsung has already developed a robotic sentry equipped with a machine gun and two cameras to guard the border between North and South Korea.

Anonymous said...

The other day I was captured on a police camera and issued an automatic fine for driving in a bus lane because a van had broken down and was obstructing the lane for cars. I was fairly angry at losing £50, but it pales into insignificance compared with the loss of a life.
The point is that humans have the ability to adapt to unpredicted situations and employ common sense (no right-thinking police officer would have fined me in that situation). No matter how sophisticated the modelling that might in the future make them appear "intelligent", robots can only follow rules.
The danger is over-confidence in our ability to predict situations and therefore create appropriate control mechanisms for autonomous robots. Until we can get speed cameras right, I think we should hold back from unleashing potential robot killers on the world.

Anonymous said...

True, a robot is programmed to react to things in a predictable fashion; life, as we all know, is anything but predictable. Therefore, when a robot enters this chaotic world of ours, the consequences may be disastrous. As for who is to blame, I don't know. If robots have intelligence comparable to that of a human, and appreciate nuances and idiosyncrasies that some less intelligent humans don't, does that make them smarter than a particular person, and therefore as much to blame as an uneducated individual who breaks into your house and robs you blind?
That's got me thinking... and rambling. I bet that made no sense.
JAMIE (Just round the corner – see, I remembered to have a ganders)