EMILY TENCH
Lethal autonomous weapons are weapons that select and engage targets without human intervention. In August 2017, 116 experts signed an open letter to the United Nations calling for a ban on such weapons, arguing that a ban is necessary to prevent a global artificial intelligence (AI) arms race that would pose a significant threat to humanity. Such a race may already be under way: Vladimir Putin recently stated, “Artificial intelligence is the future, not only for Russia but for all humankind. Whoever becomes the leader in this sphere will become the ruler of the world.”
China and the USA likewise regard artificial intelligence as the key technology underpinning future national power. In July, China’s State Council released a strategic plan to make the country “the front-runner and global innovation centre in AI” by 2030. The US does not have an equivalently prescriptive plan, but it too is investing heavily in AI development: in April, the Department of Defense established the Algorithmic Warfare Cross-Functional Team to improve the use of AI technologies across the Pentagon.
The start of a potential arms race is highly concerning, since it may lead countries to develop artificial intelligence without adequate thought for safety. One of the biggest problems associated with AI is the value alignment problem: how do we ensure that the objectives we give an AI system capture what we actually want it to do, without unintended outcomes? Stuart Russell compares this to the story of King Midas. When King Midas asked for everything he touched to turn to gold, he wanted to be rich; he did not want his food and family to turn into gold. His instructions were followed literally, with catastrophic unintended consequences. If AI systems are developed without a proper focus on value alignment, the unintended outcomes could be species-ending. A secondary concern is that, without adequate safety procedures and regulation, AI weapons could more easily fall into the hands of terrorists, dictators or others who would use them for malicious purposes.
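To make the alignment gap concrete, the following minimal Python sketch illustrates the King Midas problem. Every name, action and number in it is hypothetical and chosen purely for illustration; it is a toy, not a depiction of any real AI system.

# A toy illustration of objective misspecification (the "King Midas" problem).
# All actions and numbers below are invented for illustration only.

ACTIONS = {
    # action: (gold gained, implicit value destroyed)
    "touch_rock":   (5, 0),
    "touch_food":   (3, 10),
    "touch_family": (8, 100),
}

def misspecified_reward(action):
    """The objective the designer actually wrote down: gold, and nothing else."""
    gold, _ = ACTIONS[action]
    return gold

def intended_value(action):
    """What the designer truly cares about: gold minus harm to other values."""
    gold, harm = ACTIONS[action]
    return gold - harm

# A literal-minded optimiser maximises the reward it was given...
chosen = max(ACTIONS, key=misspecified_reward)
print(f"Optimiser picks: {chosen}")                      # touch_family
print(f"Stated reward:   {misspecified_reward(chosen)}")  # 8
print(f"Intended value:  {intended_value(chosen)}")       # -92: unintended outcome

The optimiser does exactly what it was told, maximising the stated reward, yet produces the outcome the designer least wanted, because the values that were never written into the objective carry no weight at all. That gap between the stated objective and the intended one is the core of the value alignment problem.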
It is vital that we avoid the escalation of the AI arms race. The long-term goal for international bodies such as the UN should be the creation of a Lethal Autonomous Weapons Convention, analogous to the Chemical Weapons Convention or the Nuclear Non-Proliferation Treaty. This convention should prohibit the use, development, production, stockpiling and transfer of lethal autonomous weapons. At present, however, verifying compliance with such a treaty would be difficult, since the difference between a conventional weaponised system and a weaponised autonomous system may amount to little more than software. Funding for further research into verification is therefore urgently required. The global AI arms race has already begun, and the deployment of lethal autonomous weapons will be feasible within years, not decades.