Lethal Autonomous Weapons Systems: A concrete example of AI’s presence in the military environment
Author: Jacopo Scipione
Abstract: Comprehending and analysing Artificial Intelligence (AI) is fundamental to meeting the challenges ahead, particularly in the defence sector, where developments will involve both weapons and operations. The debate centres on the risks that automation could bring to the battlefield, specifically through Lethal Autonomous Weapons Systems (LAWS). While AI could offer many advantages in risk detection, protection and preparation capabilities, it may also introduce several risks on the battlefield and violate the basic principles of International Law. Indeed, taking the human operator "out of the loop" could lead to unprecedented challenges and issues. Such weapons may also strengthen terrorist groups, allowing them to plan mass attacks or targeted assassinations without risking their own members.
The article, divided into three parts, aims to analyse LAWS and their related issues. The first part introduces LAWS and their applications worldwide. The second summarizes the problems concerning International Humanitarian Law. Finally, the last part focuses on the search for proper regulation and on the EU's position on the topic.
Keywords: Artificial Intelligence – Lethal Autonomous Weapons Systems – OODA Loop – International Humanitarian Law – Killer robots
Summary: 1. Introduction – 2. LAWS and International Humanitarian Law – 3. LAWS’ development worldwide – 4.1 Looking for a regulation – 4.2 The European position on LAWS – 5. Conclusions