Killer robots: it’s not the AI that’s the problem

In a recent open letter, Tesla’s Elon Musk and others called for a ban on autonomous weapons, saying: “Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

Yet autonomous weapons are already with us, after a fashion. And artificial intelligence isn’t actually the biggest problem.

A bullet, during the second or so that it is in flight, autonomously follows the laws of physics. But the world is not likely to have changed much during that time: if shooting the bullet was appropriate, that will still be true when it hits. A cruise missile can fly for several hours and home in on a precise spot specified by GPS coordinates – although things may well have changed during those hours of flight.

Smarter again is a heat-seeking or radar-guided missile, which can home in on an aircraft, even one doing its best to evade the threat – yet it cannot distinguish a passenger aircraft from a military one. The next step up is a system guided by IFF (identification, friend or foe), which can tell friendly aircraft from hostile ones. After that comes the kind of AI that Elon Musk is talking about.

The ultimate extreme is the “Menschenjäger” of Cordwainer Smith’s 1957 short story “Mark Elf.” The Menschenjägers were built by the “Sixth German Reich” to seek out and kill their non-German enemies (whom they could infallibly detect by their non-German thoughts). Being virtually indestructible, the last Menschenjäger had travelled around the planet on this mission 2,328 times by the time the story is set. Since no Germans were alive at that point, there was nobody left to shut it down.

The real problem with the Menschenjägers was not their AI, but their persistence in time. A similar problem arises with that most stupid of autonomous weapons, the landmine. Sown in their tens of millions, landmines continue to kill and maim for decades after the war that buried them has ended.

It isn’t really a matter of whether the weapon has AI or not – it’s whether the weapon has an off switch or a self-destruct mechanism. No weapon should keep on pointlessly killing people.