While the LAWS debate in Geneva was deeper and richer than previous discussions, key definitions – which are needed to draft a protocol restricting such weapons – remain unclear and up for continued debate.
And with nations like the United Kingdom openly opposed to a ban, a protocol may end up being blocked entirely, much to the chagrin of activists.
The British say existing international humanitarian law (IHL) is sufficient to regulate LAWS. While there was universal agreement among delegations that key IHL principles such as distinction, proportionality and precautions in attack apply to LAWS, there were sharp differences of opinion as to whether machines can be programmed to observe such distinctions.
The UK has taken the view that programming might in future represent an acceptable form of meaningful human control, and that research into such possibilities should not be pre-emptively banned. In future, such weapons might even reduce civilian casualties. The Czechs (a NATO ally) also expressed caution about a ban.
However, other nations repeated their calls for a ban, including Cuba and Ecuador.
Down with the robots
Still, for the Campaign to Stop Killer Robots, British opposition is surely a major concern. The UK has a veto on the UN Security Council. British allies such as Australia and the US might decline to support a ban. Battle lines have been drawn. Definitions will be critical.
Clearly the British will defend their national interest in drone technology. Taranis – the long-range stealth drone under development by UK multinational defense contractor BAE Systems – is a likely candidate for some sort of “state of the art” lethal autonomy.
Interestingly, BAE Systems is also part of the consortium developing the F-35 Lightning II, widely said to be the last manned fighter the US will develop.
Sooner or later there will be a trial dogfight between the F-35 and Taranis. It will be the Air Force equivalent of Kasparov vs Deep Blue. In the long run, most analysts think air war will go the way of chess and become “unsurvivable” for human pilots.
Definitional issues
At the Geneva meeting, many nations and experts supported the idea of “meaningful human control” of LAWS, including Denmark and Maya Brehm, from the Geneva Academy of International Humanitarian Law and Human Rights, although others, such as France and former British Air Commodore W. H. Boothby, thought it too vague.
The Israelis noted that “even those who did choose to use the phrase ‘meaningful human control’, had different understandings of its meaning”. Some say this means “human control or oversight of each targeting action in real-time”.
Others argue “the preset by a human of certain limitations on the way a lethal autonomous system would operate, may also amount to meaningful human control”.
It is perhaps a little disappointing that, after two meetings, basic definitions that would be needed to draft a Protocol VI of the Convention on Certain Conventional Weapons (CCW) to regulate or ban LAWS remain nebulous.
However, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, has been impressed by the speed and the “creativity and vigour” that various bodies have brought to the discussions.
Most nations accept that “fully autonomous weapons” that could operate without “meaningful human control” are undesirable, though there is no agreement on the meaning of “autonomous” either.
Some states, such as Palestine and Pakistan, are happy to put drones in this category and move to ban their production, sale and use now. Others, such as Denmark and the Czech Republic, maintain that no LAWS yet exist.
This is another definitional problem. Paul Scharre’s presentation offered a good summary of how autonomy might be broken down into definable elements.
Future of war
Aside from the definitional debates, there were interesting updates from experts in the field of artificial intelligence (AI) and robotics.
Face and gait recognition by AI, according to Stuart Russell, is now at “superhuman” levels. While he stressed this did not imply that robots could yet distinguish between combatant and civilian, it is a step closer. Russell takes the view that “can robots comply with IHL?” is the wrong question; it is more relevant to ask what the consequences of a robotic arms race would be.
Patrick Lin made interesting observations on the ethical notion of human dignity in the context of LAWS. Even if LAWS could act in accordance with IHL, the taking of human life by machines violates a right to dignity that may be even more fundamental than the right to life.
Jason Miller spoke on moral psychology and interface design: morally irrelevant situational factors, he argued, can seriously compromise human moral performance and judgement.
Michael Horowitz presented polling data showing that people in India and the United States were not necessarily firmly opposed to LAWS. His key finding was that context matters: what the LAWS is doing in the pollster’s scenario, and how the question is framed, makes a substantial difference to the approval numbers a poll generates.
Overall, the meeting was a step forward in the debate around the status and legality of lethal autonomous weapons, although that debate – and its implications for the future of warfare – is still far from settled.
By Sean Welsh, Doctoral Candidate in Robot Ethics at University of Canterbury. This article was originally published on The Conversation. Read the original article.