
Killer robot AI is the future of warfare

The US military is stepping up its commitment to the development and deployment of autonomous weapons, an update to a Department of Defense policy confirms. Released on January 25, 2023, the update is the first in a decade to focus on autonomous artificial intelligence weapons. It follows a corresponding implementation plan released by NATO on October 13, 2022, which aims to maintain the alliance’s “technological lead” in so-called “killer robots”.

Both announcements reflect a crucial lesson militaries around the world have learned from recent combat operations in Ukraine and Nagorno-Karabakh: Armed artificial intelligence is the future of warfare.

“We know that commanders see military value in loitering munitions in Ukraine,” Richard Moyes, director of Article 36, a humanitarian organization focused on reducing harm from weapons, told me in an interview. A cross between a bomb and a drone, these weapons can hover for long periods while waiting for a target. Currently, such semi-autonomous loitering munitions are generally operated with significant human control over key decisions, he said.

Pressure of war

But as Ukraine’s casualties mount, so does the pressure to gain decisive advantages on the battlefield with fully autonomous weapons — robots that can select, hunt, and engage their targets all by themselves, without the need for human supervision.

This month, a major Russian manufacturer announced plans to develop a new combat version of its Marker reconnaissance robot, an unmanned ground vehicle, to bolster existing forces in Ukraine. Fully autonomous drones are already being used to defend Ukrainian power plants from other drones. Wahid Nawabi, CEO of AeroVironment, the US defense company that makes the semi-autonomous Switchblade drone, said the technology is already within reach to make these weapons fully autonomous.

Mykhailo Fedorov, Ukraine’s digital transformation minister, has argued that fully autonomous weapons are the “logical and inevitable next step” of war, recently saying soldiers could see them on the battlefield in the next six months.

Proponents of fully autonomous weapon systems argue that the technology will keep soldiers out of harm’s way by removing them from the battlefield. The weapons would also allow military decisions to be made at superhuman speed, enabling radically improved defensive capabilities.

Currently, semi-autonomous weapons, such as loitering munitions that track and detonate themselves on targets, require a “human in the loop.” They can recommend actions but require their operators to initiate them.

In contrast, fully autonomous drones, such as the so-called “drone hunters” now deployed in Ukraine, can track and disable incoming unmanned aerial vehicles day and night, with no need for operator intervention and faster than human-controlled weapon systems.

Call for a pause

Critics such as The Campaign to Stop Killer Robots have campaigned to ban the research and development of autonomous weapon systems for more than a decade. They point to a future where autonomous weapon systems are designed specifically to target people, not just vehicles, infrastructure, and other weapons. They argue that wartime decisions over life and death must remain in human hands; leaving them to an algorithm is the ultimate form of digital dehumanization.

Along with Human Rights Watch, The Campaign to Stop Killer Robots argues that autonomous weapon systems lack the human judgment needed to distinguish between civilians and legitimate military targets. They also lower the threshold for war by reducing its perceived risks, and they undermine meaningful human control over what happens on the battlefield.

The organizations argue that the militaries that invest the most in autonomous weapons systems, including the US, Russia, China, South Korea and the European Union, are plunging the world into a costly and destabilizing new arms race. One consequence could be that this dangerous new technology falls into the hands of terrorists and others outside of government control.

The updated Department of Defense policy attempts to address some of these key concerns. It states that the US will deploy autonomous weapon systems with “appropriate levels of human judgment over the use of force.” Human Rights Watch issued a statement saying that the new policy does not make clear what an “appropriate level” means or provide guidelines for who should determine it.

But as Gregory Allen, an expert on national defense and international relations at the Center for Strategic and International Studies think tank, argues, this language establishes a lower threshold than the “meaningful human control” critics have called for. The Defense Department’s wording, he points out, allows for the possibility that in certain cases, such as with surveillance aircraft, the level of human oversight deemed appropriate “may be little to none.”

The updated policy also includes language that promises the ethical use of autonomous weapon systems, notably by establishing an oversight system for the development and use of the technology and by requiring that the weapons be used in accordance with existing international laws of war. However, Moyes of Article 36 noted that international law does not currently provide an adequate framework for understanding, let alone regulating, the concept of weapon autonomy.

For example, the current legal framework does not make it clear that commanders have a responsibility to understand what triggers the systems they use, or to limit the area and time in which those systems operate. “The danger is that there is no clear line between where we are now and where we have accepted the unacceptable,” Moyes said.

Impossible balance?

The Pentagon’s update demonstrates a simultaneous commitment to deploying autonomous weapon systems and to complying with international humanitarian law. It remains to be seen how the US will balance these commitments, and whether such a balance is even possible.

The International Committee of the Red Cross, the guardian of international humanitarian law, insists that the legal obligations of commanders and operators “cannot be transposed to any machine, algorithm or weapon system”. Currently, humans are held responsible for protecting civilians and limiting combat damage by ensuring that the use of force is proportionate to military objectives.

If artificially intelligent weapons are deployed on the battlefield, who should be held responsible if civilians are unnecessarily killed? There is no clear answer to this very important question.

James Dawes is a professor of English at Macalester College.
