Stephen Hawking, Elon Musk, Steve Wozniak Want Ban on Autonomous Artificial Intelligence Weapons

By Mark Rollins
Image: A Predator drone, which is not an autonomous weapon. (Engadget)

A recent open letter from the Future of Life Institute has called for a ban on offensive autonomous weapons, and it has support from big names in science and industry, including Elon Musk, Stephen Hawking, and Steve Wozniak. It could be a first step toward preventing an AI war filled with autonomous killing machines, and it has left many wondering whether we are really at the point where a robo-apocalypse is worth worrying about.

According to CNET, robotics experts from around the world have called for a ban on autonomous weapons, warning that an artificial intelligence revolution in warfare could spell disaster for humanity. The call took the form of an open letter published by the Future of Life Institute, whose stated mission includes "developing optimistic visions of the future." The letter was signed by hundreds of AI and robotics researchers, along with well-known figures such as Stephen Hawking, Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and cognitive scientist Daniel Dennett.

The National Monitor reports that the argument is for a ban on AI weapons, specifically offensive autonomous weapons. The signers fear that AI weapons, like other kinds of weapons, will eventually end up on the black market and lead to disastrous consequences.

Engadget reports that there is also fear of an "AI arms race" in which robotic armies would be cheap and easy to build and could be turned to assassinations, authoritarian oppression, terrorism, genocide, and everything else found in nightmare dystopian science fiction like The Terminator and The Matrix film franchises.

Nothing in the letter is legally binding, but it could be enough to spur United Nations talks on a global ban on AI weapons. After all, there are bans on chemical weapons, even though some countries show little interest in the rules of war. In other words, certain countries could still develop AI weapons in defiance of a ban, much as some pursue "dirty bombs" today.

This letter is set to be presented in Buenos Aires at the International Joint Conference on Artificial Intelligence, according to Gizmodo. The letter, posted there in its entirety, defines autonomous weapons as those that select and engage targets without human intervention. That definition excludes weapons like cruise missiles and remotely piloted drones, since humans still make the targeting decisions.

Instead, the concern is about programming machines to go into an area and kill pre-programmed or arbitrary targets, something the military might want in situations too dangerous for human soldiers. Of course, everyone fears a scenario in which the machines become self-aware, or simply malfunction, and start killing everything in sight. As mentioned before, this is a common premise in science fiction, but the fact that we can now build practical killing technology is by no means fiction.

There was a time when AI weapons were considered the stuff of speculative fiction, but with the technology getting cheaper and easier to program, AI weapons with serious lethal force could be built today. In other words, how hard would it be to build a Terminator-like robot and program it to gun down anyone in a given area? Maybe this open letter can prevent something like that from ever happening.