A number of thoughtful people (including Stephen Hawking, Nick Bostrom, and Bill Gates) believe we should take the threat of AI insurrection seriously. They argue that in decades to come we could very well create some sort of conscious entity that might decide the planet would be a nicer place without us.
In the meantime there are lesser but more urgent threats: machines that would not exterminate our species but might make our lives a lot less fun. An open letter released earlier this week at the International Joint Conference on AI calls for an international ban on autonomous weapon systems.
Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.
When I last checked, the letter had 2,414 signers who identify themselves as AI/robotics researchers, and 14,078 other endorsers. I’ve added my name to the latter list.
A United Nations declaration, or even a multilateral treaty, is not going to totally prevent the development and use of such weapons. The underlying technologies are too readily accessible. The self-driving car that can deliver the kids to soccer practice can also deliver a bomb. The chip inside a digital camera that recognizes a smiling face and automatically trips the shutter might also recognize a soldier and pull the trigger. As the open letter points out:
Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.
What would Isaac Asimov say about all this?
I was lucky enough to meet Asimov, though only once, and late in his life. He was in a hospital bed, recovering from heart surgery. He handed me his business card:
ISAAC ASIMOV
Natural Resource
No false modesty in this guy. But despite this braggadocio, he could equally well have handed out cards reading:
ISAAC ASIMOV
Gentle Soul
Asimov was a Humanist with a capital H, and he endowed the robots in his stories with humanistic ethics. They were the very opposite of killer machines. Their platinum-iridium positronic brains were hard-wired with rules that forbade harming people, and they would intervene to prevent people from harming people. Several of the stories describe robots struggling with moral dilemmas as they try to reconcile conflicts in the Three Laws of Robotics.
Asimov wanted to believe that when technology finally caught up with science fiction, all sentient robots and other artificial minds would be equipped with some version of his three laws. The trouble is, we seem to be stuck at a dangerous intermediate point along the path to such sentient beings. We know how to build machines capable of performing autonomous actions—perhaps including lethal actions—but we don’t yet know how to build machines capable of assuming moral responsibility for their actions. We can teach a robot to shoot, but not to understand what it means to kill.
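To make the "hard-wired rules" idea concrete, here is a toy sketch, in Python and very much not anyone's actual control software, of the Three Laws treated as a strict priority ordering over candidate actions. Every predicate in it (harms_human, disobeys_order, endangers_self) is a hypothetical stand-in for a judgment that no present-day machine can actually make.

```python
# A toy sketch, not real control software: Asimov's Three Laws rendered
# as a strict priority ordering over candidate actions. The predicates
# below are hypothetical stand-ins for judgments no machine can yet make.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # First Law concern: would a person be injured?
    disobeys_order: bool  # Second Law concern: does this defy a human order?
    endangers_self: bool  # Third Law concern: is the robot itself at risk?

def law_violations(a):
    """Score an action by which Laws it breaks, highest priority first."""
    return (a.harms_human, a.disobeys_order, a.endangers_self)

def choose(candidates):
    """Pick the action whose violations rank lowest. Tuples of booleans
    compare lexicographically, so breaking the First Law always outweighs
    any combination of lower-Law violations."""
    return min(candidates, key=law_violations)

options = [
    Action("fire on the target", harms_human=True, disobeys_order=False, endangers_self=False),
    Action("refuse the order", harms_human=False, disobeys_order=True, endangers_self=False),
]
print(choose(options).name)  # -> refuse the order
```

The ranking itself is trivially mechanical. Nothing in it amounts to understanding what it means to kill, which is exactly the gap described above.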
Ever since the 1950s, much work on artificial intelligence and robotics has been funded by military agencies. The early money came from the Office of Naval Research (ONR) and from ARPA, which is now DARPA, the Defense Advanced Research Projects Agency. Military support continues today; witness the recently concluded DARPA Robotics Challenge. As far as I know, none of the projects currently under way in the U.S. aims to produce a “weaponized robot.” On the other hand, as far as I know, that goal has never been renounced either.
Part of AI is going to be the merging of humans and machines… humans becoming more machine-like, while machines become more human-like, the lines of demarcation blurring. I’m doubtful that autonomous weapon systems can be successfully banned in the face of that evolution.
Dude, weaponized robots have existed for over a decade now, perfected in Iraq; they are called DRONES. There wasn’t much resistance at the time to the military-industrial complex, aka the WARMACHINE, which is now making very lucrative $$$ off of them.
And of course remote-control bombs, around for many decades, are basically “robotic systems” as well. A complex, multidimensional issue. It’s an interesting factoid that the legendary/visionary Tesla predicted these uses over a century ago.
But actually this is a very old story. We have been building dangerous, highly lethal weapons that far outpace even humanity’s moral sense, e.g. nuclear weapons. The Obama deal on Iran is a step in the right direction. Yes, we need governing bodies as aware of military robotics as they are of nuclear weapons, and of nonproliferation of ALL deadly military technology. But notice how politicized the issue got….
more on robotics
The IJCAI open letter makes a point of distinguishing between remotely controlled weapons like the Predator drone, with a human operator making the decision to fire a missile, and an autonomous weapon that would choose targets and decide to shoot at them without human supervision. I’m not sure the line is all that sharp, and I’m also not sure we haven’t crossed it already. (The Phalanx automated cannon, which is meant to protect ships at sea from cruise missile attacks, can be set to engage targets without human intervention — and that’s a technology that goes back to the 1970s.) Still, there are classes of robotic weapons that we can now choose to develop, or not. I am pleased to see that some 2,400 of the people who might well be involved in building those systems are urging us to choose not.
Ah! So a few thousand scientists/engineers finally decided to draw a “line in the sand” on autonomous weaponized robotics, but they have had virtually nothing to say about the drones that have killed thousands already, many of them innocent civilians/bystanders, and nobody has any (“formally declared”) issue with the verbal sleight-of-hand of the military defining all “casualties” (DEATHS/KILLINGS) as “military combatants”.
But anyway, where was that line on the development of nuclear weapons decades ago? They were created by scientists like Oppenheimer et al. who never, or barely, questioned the morality, and very likely autonomous weapons will be created by scientists with no moral compunctions, no meaningful/actionable ones anyway. Let’s face it, there’s a huge moral blind spot in humans on the use of technology, especially military technology.
Robotics is a new area of weaponization that is not gonna solve this deep issue. Humans are the big part of the problem, and a little petition is not gonna change much. Oh, and by the way, do you know how many signatures are on 9/11 truth petition(s)? And what effect has that had?
Anyway, all that said, just venting, not really arguing against your personal ideas in this blog, which does bring some small awareness to this very damning issue, but which is a Pandora’s box far deeper than merely autonomous robots….
I thought you meant Alpocalypse :(
Mines are a class of weapons that have been killing autonomously for decades. While most nations in the world have ratified a ban on anti-personnel mines, some of the most powerful, including the US, China, Russia, and India, have not. I appreciate the call for a ban on autonomous weapons of the AI kind, but I doubt that it will be successful in those nations that do not even ban mines.
Good point.