Flying killer robots? Drones will soon decide who to kill

APD NEWS


Once complete, next-gen drones will represent the ultimate militarisation of artificial intelligence. They will also ignite a vast legal and ethical debate

Science fiction is fast becoming science fact: The US Army recently announced that it is developing the first drones that can spot and target vehicles and people using artificial intelligence (AI).

This is a huge step forward. Whereas current military drones are still controlled by people, this new technology will decide who to kill with almost no human involvement. This will grant machines power over human life.

Once complete, these drones will represent the ultimate militarisation of AI and raise vast legal and ethical questions for wider society to debate. Could warfare shift from fighting to extermination – losing any semblance of humanity in the process? Could the sphere of warfare widen so that the companies, engineers and scientists building AI become valid targets?

From human operators to airborne ‘Terminators’

Existing lethal military drones like the MQ-9 Reaper are carefully controlled and piloted via satellite. If a pilot drops a bomb or fires a missile, a human sensor operator actively guides it onto the chosen target using a laser.

Ultimately, the crew has the final ethical, legal and operational responsibility for killing designated human targets. As one Reaper operator states: “I am very much of the mindset that I would allow an insurgent, however important a target, to get away rather than take a risky shot that might kill civilians.”

Even in this age of drone killings, human emotions, judgments and ethics remain at the center of war – as they always have. The existence of mental trauma and post-traumatic stress disorder (PTSD) among drone operators shows that even remote killing takes a psychological toll.

This points to one possible military and ethical argument, made by Ronald Arkin, in support of autonomous killing drones: perhaps if these drones drop the bombs, psychological problems among crew members can be avoided. The weakness in this argument is that you don’t have to be responsible for killing to be traumatised by it. Intelligence specialists and other military personnel regularly analyse graphic footage from drone strikes. Research shows that it is possible to suffer psychological harm by frequently viewing images of extreme violence.

An MQ-9 Reaper. Photo: US Air Force

When I interviewed over 100 Reaper crew members for an upcoming book, every person I spoke to who conducted lethal drone strikes believed that, ultimately, it should be a human who pulls the final trigger. Take out the human and you also take out the humanity of the decision to kill.

Grave consequences

The prospect of totally autonomous drones would radically alter the complex processes and decisions behind military killings. But legal and ethical responsibility does not somehow just disappear if you remove human oversight. Instead, responsibility will increasingly fall on other people, including artificial intelligence scientists.

The legal implications of these developments are already becoming evident. Under current international humanitarian law, “dual-use” facilities – those which develop products for both civilian and military application – can be attacked under certain circumstances. For example, in the 1999 Kosovo War, the Pancevo oil refinery was attacked because it could fuel Yugoslav tanks as well as fuel civilian cars.

Google’s New York headquarters. Photo: Scott Roy Atwood, CC BY-SA

With an autonomous drone weapon system, certain lines of computer code would almost certainly be classed as dual-use. Companies like Google, its employees or its systems could become liable to attack from an enemy state. For example, if Google’s Project Maven image recognition AI software is incorporated into an American military autonomous drone, Google could find itself implicated in the drone “killing” business, as might every other civilian contributor to such lethal autonomous systems.

Ethically, there are still darker issues.

The whole point of self-learning algorithms – programs that independently learn from whatever data they can collect – is that the related machines become better at whatever task they are given. If a lethal autonomous drone is to get better at its job through self-learning, someone will need to decide on an acceptable stage of development – how much it still has to learn – at which it can be deployed. In militarised machine learning, that means political, military and industry leaders will have to specify how many civilian deaths count as acceptable as the technology is refined.

Recent experiences of autonomous AI in society should serve as a warning. Uber and Tesla’s fatal experiments with self-driving cars suggest it is pretty much guaranteed that there will be unintended deaths at the “hands” of autonomous drones as their computer bugs are ironed out.

And there is an apocalyptic scenario. If machines are left to decide who dies – especially on a grand scale – then extermination could result. Any government or military that unleashed such forces would violate whatever values it claimed to be defending.

In comparison, a drone pilot wrestling with a “kill or no kill” decision injects humanity into the inhuman business of war.

(THE CONVERSATION)