Release peace: the magazine
Analysis & Background Stories on International Affairs
The Dehumanisation of War is in Full Swing
Article by: Helen Kurvits
Photo credit: Sergeant Tom Evans (UK MoD)
A Balance to be Struck
Rapid technological developments have become increasingly influential in the military domain. According to research from the Peace Research Institute Frankfurt, Lethal Autonomous Weapons Systems, also known as LAWS, are distinguished by their capability to operate in “uncontrolled” environments in which they are triggered by their surroundings. LAWS also belong to the category of “emerging” technologies in the military context, a category that comprises anything from the military use of AI and machine learning to modern missile technology. In a nutshell, autonomous weapons, sometimes called “robotic weapons”, are capable of selecting and destroying targets without human intervention. But it is exactly this lack of a human component that has sparked debates over the use of LAWS and a potential future ban on their use. Arguments against the use of LAWS reflect various ethical, legal, and security concerns. However, no consensus prevails.
The Advantages of LAWS?
Before delving into the arguments raised against the use of LAWS, let us first look at the reasons this technology has found its way into the military sphere. The use of AI carries great potential to mitigate the risks of human error: its promised precision and reliability is believed to result in fewer adverse humanitarian consequences, and the technology can enhance the accuracy of attacks. The rise in the military importance of LAWS can be attributed to two main factors. First, the use of these systems enables a state to avoid risking the lives of its own soldiers. Second, the lack of an onboard human operator opens possibilities for more efficient designs, as it saves both space and weight. The use of LAWS can indeed bring several benefits, but there is another side to the coin.
… And Drawbacks?
One of the primary arguments against using LAWS is that autonomous weapons are incapable of distinguishing between soldiers and civilians. This stems from the sensory and cognitive limitations of the systems. Moreover, the systems are not capable of recognising a surrender. Concerns are also raised over their “inherent unpredictability”, which arises from uncertainty regarding the details of an attack, such as when, where, or why it will take place. When weapons can self-initiate attacks, such concerns are quick to arise.
Other dilemmas stem from concerns about human dignity. Here, the primary question becomes how a person is killed or injured. The core argument states that being killed by an autonomous weapon system violates the essence of human dignity when the decision has been delegated to a machine. Armed conflict is dehumanised if there is no effective or functional human component and the decision of who and when to kill is handed to the machine. However, this argument has been rebutted by questioning whether it is truly relevant that the decision to kill is ultimately made by a human or a machine, given that certain forms of automated killing already exist. Should killing soldiers with landmines then also face criticism on ethical grounds as a form of automated killing? This leaves us with the question of exactly how much human input is needed to leave dignity intact.
What Does International Law Say?
The way international law could accommodate the use of LAWS has sparked numerous debates. Critics quickly point out that international law remains too soft, complex, and context-dependent to be implemented in algorithms. Human Rights Watch raises doubts over whether the use of LAWS could comply with the principles of distinction and proportionality. However, important counterarguments have been raised. Whilst perfect compliance with international law may remain out of reach, compliance on a par with, or even better than, human judgement may be possible. Others propose that autonomous weapons could be used in scenarios where a violation of international humanitarian law is impossible, or where no consequences would follow from a violation. Ultimately, future technological developments will determine the validity of these arguments.
Turning to the security-related arguments, many of those raised do not apply exclusively to LAWS but to automated and networked warfare in general. First, many point to concerns that the wider use of LAWS could trigger a technological arms race. LAWS could make warfare faster and more efficient, but also more unpredictable: faster decision cycles put pressure on the opponent to match that level of technology and automation, which can quickly escalate into an arms race. Second, analysis generated by machines could lead to unintended military actions or escalations. Third, even if humans remain in control of the decision, their situational awareness depends on machine pre-processed data, and the decision is therefore heavily shaped by it. Fourth, automated systems are vulnerable to hacking.
While the use of LAWS promises precision and reliability, the reduced or absent human element in their use has sparked ethical, legal, and security-related controversies. As of now, the debate has generally divided into three main camps. Some favour a legally binding ban. Others see no current need for restrictions beyond what international law already regulates. In between sit those who consider a non-binding declaration sufficient. All camps, however, share the opinion that there should be at least some form of human control over autonomous weapons. As the label “emerging” points to recent, yet rapid, developments, these debates may still take some time to settle.