This policy brief addresses military applications of AI, specifically partially autonomous lethal weapon systems (PALWS) and logistical AI units. The systems that I call ‘autonomous systems of normative control’ (ASNCs) are comparable to intelligent speed assistance (ISA) systems in cars. ISA systems alert or correct drivers when they exceed the speed limit, using road-sign recognition and speed-limit databases linked to geoposition data. Correspondingly, ASNCs should block the unlawful use of military applications of AI, for instance in the case of a war of aggression, or alert commanders if an action is disproportionate or a selected target is a civilian.
I promote a technology-centered approach, in line with the multilateral 2023 Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy and with the technical recommendations in the report of the 2023 session of the UN Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems. I argue that PALWS and logistical AI units in the military should be equipped with ASNCs to help ensure that they are used in compliance with international humanitarian law (IHL), most importantly the principles of proportionality and harm minimization.
Furthermore, ASNCs should include blocking mechanisms to help ensure that PALWS and logistical AI units are neither used in wars of aggression nor against domestic peaceful protesters. Technically, ASNCs will likely require a hybrid approach, combining data-driven and rule-based AI elements with much simpler blocking mechanisms based on geolocation data. Whilst it is not possible to outsource moral or legal responsibility to machines, it is plausible that ASNCs will contribute to making military decision-making on the battlefield more responsible in a legal and ethical sense. In parallel with this technology-centered approach, national and international attempts to regulate military applications of AI should be pursued further.
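To make the ‘much simpler blocking mechanisms based on geolocation data’ more concrete, the following is a minimal, purely illustrative Python sketch of a default-deny geofence check, not a description of any existing system. The zone coordinates, the names Geofence and activation_permitted, and the flat-plane point-in-polygon geometry are all assumptions introduced here for illustration; a real system would require geodesic geometry, tamper-resistant positioning, and authenticated zone data.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]  # (latitude, longitude)


@dataclass
class Geofence:
    """A polygonal zone in which activation is explicitly authorized."""
    name: str
    vertices: List[Point]

    def contains(self, p: Point) -> bool:
        # Standard ray-casting point-in-polygon test on a flat plane;
        # sufficient for small zones in this illustrative sketch.
        lat, lon = p
        inside = False
        n = len(self.vertices)
        for i in range(n):
            lat1, lon1 = self.vertices[i]
            lat2, lon2 = self.vertices[(i + 1) % n]
            if (lon1 > lon) != (lon2 > lon):
                t = (lon - lon1) / (lon2 - lon1)
                if lat < lat1 + t * (lat2 - lat1):
                    inside = not inside
        return inside


def activation_permitted(position: Point, authorized_zones: List[Geofence]) -> bool:
    """Default-deny: activation is only permitted inside an authorized zone."""
    return any(zone.contains(position) for zone in authorized_zones)


# Hypothetical example: a single authorized training range.
training_range = Geofence(
    name="training_range_A",
    vertices=[(52.0, 13.0), (52.0, 13.5), (52.3, 13.5), (52.3, 13.0)],
)
print(activation_permitted((52.1, 13.2), [training_range]))  # True: inside the zone
print(activation_permitted((50.0, 8.0), [training_range]))   # False: activation blocked
```

The design choice to illustrate is the default-deny logic: rather than listing prohibited locations, the system refuses activation everywhere except in explicitly authorized zones, which is the posture a blocking mechanism against wars of aggression or domestic deployment would plausibly require.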
However, the development of ASNCs need not be a reaction to governmental regulation; it could also be advanced voluntarily as a de facto industry standard by producers of military technology. Rather than refraining from the production of PALWS and logistical AI units for the military, European producers of military technology should aim to lead in research and development and to establish a standard made in Europe, including ASNCs that help guarantee use within the boundaries of legal and ethical principles. At the same time, it must be stressed from the outset that these systems should not be abused for ‘ASNC-washing’ to justify arms exports to authoritarian regimes, and that the establishment of a de facto standard can only constitute one element within a broader toolkit of measures to regulate the military use of AI.
Johannes THUMFART is a senior postdoctoral researcher at the research group Law, Science, Technology, and Society (LSTS) of Vrije Universiteit Brussel and an adjunct lecturer in the ethics of international security management at the Berlin School of Economics and Law. His research focuses on the ethics of international security, the military use of AI, and digital sovereignty in BRICS states. Thumfart received his PhD in the history and philosophy of international law at Humboldt Universität Berlin. He has been awarded a Marie Skłodowska-Curie Fellowship and has held numerous teaching and research positions in Germany, France, Mexico, and Belgium. He has contributed to some of the largest and most respected German news outlets, such as Der Spiegel and Die Zeit. His monograph on digital sovereignty will be published by Palgrave Macmillan in summer 2024. His research has appeared in journals such as the European Journal of International Security, AI and Ethics, and Global Studies Quarterly.