Why building killer robots might not be such a bad idea

The term "killer robots" broadly refers to any theoretical technology that can deliberately use lethal force against human targets without explicit human authorisation. The technology is hotly debated, even though it doesn't actually exist yet.

Unlike a drone, which will wait for human commands, or for its controller to "pull the trigger," a "killer robot" may be programmed to engage anyone it identifies as a lawful target without seeking human confirmation before a kill.

"Killer robots" are just the start, William Boothby says. Just wait until humanity starts melding minds with machines, Dalek-style. That's when things get REALLY interesting. BBC

Because of the risks that "killer robots," technically known as Lethal Autonomous Weapons, or LAWs, could pose, NGOs, experts, and pressure groups are lobbying for LAWs to be preemptively banned before they can be created.

In an open letter, 116 international experts in artificial intelligence, including Tesla's Elon Musk, Google DeepMind cofounder Mustafa Suleyman, and Universal Robots CTO Esben Østergaard, have just called for action to be taken before it's too late.

"Lethal autonomous weapons threaten to become the third revolution in warfare," the letter said. "These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close."

"Allowing life or death decisions to be made by machines crosses a fundamental moral line," argues the Campaign to Ban Killer Robots.

But there are also strong arguments in favour of developing LAWs, from a potential reduction in human casualties to increased accountability. They could also be used on the physical battlefield and in cyberspace.

William Boothby, a former lawyer and Air Commodore in the RAF, has contributed to pioneering research on the subject of LAWs, and holds a doctorate in international law. Business Insider spoke to him last year to get his perspective on why "killer robots," in some circumstances, aren't actually such a bad idea.

“You don’t get emotion. You don’t get anger. You don’t get revenge. You don’t get panic and confusion. You don’t get perversity,” Boothby says.

And that’s just the start.

This interview was originally published in June 2016. It has been edited for length and clarity.

Autonomous weapons could save civilian lives — and we’re closer to them than you might expect.

The Terminator is the classic cultural depiction of a "killer robot" — but the reality is more nuanced. The Terminator

Rob Price: Do you support the development of lethal autonomous weapons — and in what circumstances?

Dr. William Boothby: Well, I wouldn’t put it in those blunt terms. I support the research and the development of the technology, with a view to achieving autonomous systems which are able to operate at least on a more reliable basis than human beings.

I recognise that there are in existence certain technologies already, such as Iron Dome [an automated Israeli missile defence system] and Phalanx [a naval defensive weapons system], where what you have essentially is a system that works autonomously when certain events occur.

But there is a distinction between “point defence” and what you could call an offensive system — the latter being a system which goes out and seeks its own target, as opposed to one like Iron Dome that is there to wait until rockets are inbound and then take them out.

The distinction is based on the notion that if you're engaged in point defence, and if you have programmed the system appropriately so that it only reacts to what would be legitimate threats (i.e. rockets but not airliners), then there ought not to be a problem.

However, the minute that we’re talking about something going out on the offensive for objects to attack, then we are talking about something that is rather more problematic — because all of those complications within targeting law come into play in a way that they don’t necessarily when you’re dealing with point defence.

Price: So what are the most compelling arguments for using autonomous weaponry in an offensive capacity?

Boothby: I think that if you’re looking into the future, the only way you can interpret arguments for and against is by looking at the potential nature of the future battlespace.

I am clear in my own mind that autonomy in the future will gradually emerge in all environments — in the land environment, in the air environment, in the sea environment both on and below the surface, and in cyberspace and outer space.

Increasingly, you are going to see human beings as the weakest link in the operation of both offensive and defensive systems, and the problem is that potentially you're going to be in a situation where speed is going to be the challenge — rendering autonomy essential.

Or, you are talking in terms of such a mass of a threat that the human being is going to be the weakest link, because they just can't compute the scale, scope, and extent of the inbound threat.

Secondly, any discussion about autonomy in isolation is nonsense.

One has to talk about autonomy in terms of what it is being developed in order to counter, and if you have a situation in which, for instance, the threat is never going to be prohibited, what on earth is the justification for prohibiting the only possible way of responding?

This is all in very vague and theoretical terms, so here is an example:

Imagine a soldier has been given the job of clearing a row of houses with his patrol.

They haven’t a clue whether there are terrorists in those houses, or peaceful families. They’re going down a brightly sunlit street, going from one house to the next, and as this soldier goes into one particular building, he’s terrified. He goes from the light into the darkness. And in the darkness he detects movement. And in terror he empties his gun inside that particular room and kills all the occupants.

And it’s only afterwards that it’s worked out that the movement was that of a baby.

Yet, imagine the possibility of designing the type of technology where the machine would be capable of going inside the building and would have sensors that are able to distinguish between the movement of a large metallic object like a weapon and something lacking that metallic content — and would potentially be in a position to save those lives.

So, what is it that machines have that human beings don’t? Clearly, you don’t get emotion. You don’t get anger. You don’t get revenge. You don’t get panic and confusion. You don’t get perversity, in the sense that machinery won’t go rogue.

However, because the machinery has been made by human beings you do get fallibility.

There is currently no international law that deals specifically with autonomy in weapons.

LAWs are uncharted legal territory. Pressure groups are lobbying for a ban, because there's not currently any law that prohibits them. The Day The Earth Stood Still

Price: What’s the current legal status of autonomous weapons?

Boothby: The international law that applies to autonomous weapon technologies is exactly the same international law that applies to any other weapon technology.

There are basic principles that apply to all states, and specific rules about particular technologies.

There is a prohibition on the use of any weapon system that is of a nature to cause unnecessary injury or suffering for which there is no corresponding military purpose. An example would be adding an irritant to a bullet so that, in addition to inflicting the kinetic injury, it also causes additional suffering for which there is no corresponding military purpose. That’s rule number one. It applies to all states and all weapons.

Rule number two is that it is prohibited to use any weapon that is indiscriminate by nature, i.e. one which you can’t direct at a particular target, or the effect of which you cannot reasonably limit to the chosen target.

Thirdly, it is prohibited to use weapons which have prohibited damaging effects on the natural environment.

There are no specific rules dealing with autonomy. But the autonomous weapon system may use a particular injuring or damaging technology which itself may be the subject of a specific provision.

For example, an autonomous mine system, if it’s an anti-personnel mine, would be prohibited in states that are party to the Ottawa convention. If it’s not, there are lots of other treaties that have technical provisions dealing with vehicle mines.

So, if you were wanting to talk about the autonomous nature of the thing specifically, then there is no ad-hoc legal provision dealing with autonomy.

It doesn’t stop there. The issue is this: In the hands of its user, a weapon is a tool, an instrument the user employs to cause damage.

Once you’re discussing a weapon that is autonomous, you are talking about something where it isn’t the individual who is deciding what specifically is to be targeted but the weapon itself. Therefore, that brings in the law that relates to targeting.

The question then becomes whether the autonomous weapon system is capable of being used in accordance with targeting law rules that would normally be implemented by a human being.

There are some elements of the targeting law rules that autonomous weapon technology will be capable of addressing because, for example, the weapon system can be designed specifically to recognise an object that constitutes a military objective i.e. a lawful target.

Targeting law also requires an attacker to consider whether a planned attack would be indiscriminate.

When you are thinking about that sort of evaluative decision making, at the moment, autonomous technology would not be capable of doing that. There may, however, be circumstances where an autonomous weapon system can be used legitimately at the moment.

For example, imagine that you were undertaking military operations in areas of desert, or areas of remote open ocean. You may know because of patterns of life and surveillance that you’ve done, what you would expect the sensors to see — and you could simply program the weapon system not to attack if the sensors see anything other than that which is expected.

But the minute that you move down the scale to more congested, urban targeting environments, the more difficult it will be to justify the use of current autonomous technologies.

Killer robots get to the heart of the question: “What is the nature of warfare?”

If an autonomous weapon is capable of learning its own lessons, then it could have unforeseen consequences. Metropolis

Price: Do you think autonomous weaponry could make warfare safer and more accountable?

Boothby: I think that there is that possibility — if technology develops appropriately in that direction and if these new systems are only deployed when they have been improved and tested appropriately and used responsibly. There is the potential for civilian casualties to be reduced somewhat by the use of autonomous weapons systems.

But the argument by some is the other way. The argument is that once you’ve got machines, and the grotesque warfare consisting of machine versus machine without too much human involvement, involving oneself in such warfare actually becomes that much easier.

I would think that there’s a fairly significant element of the ethical about this, in the sense that you would have to ask yourself at some point in the future 'what is the nature of warfare? What is warfare? What is it all about?' Is it all about machine versus machine? You’ll hear the argument that ‘I am prepared to take my chances in warfare but I do not accept being killed by the decision of a machine.’ Then you’ll hear others turning around and saying ‘I don’t want to be killed whether it’s by human or machine.’ I think it is very difficult to know how the ethical side is going to play.

I think there’s a tendency of people to look at technology as it is now and look into the future and say is that technology acceptable? I would ask myself whether there is merit in going in the reverse direction.

Imagine ourselves in a situation in which we have developed machine versus machine warfare and we have all become used to it. How acceptable would it be to go back to the arrangements that we had previously?

You don’t get that being discussed often in those terms, because people don’t seem to think in that way. There’s a tendency of human beings to think in a single direction when sometimes it’s useful to think in reverse.

Of course, anyone who is talking about machine warfare as no-casualty warfare is in cloud cuckoo land. Let’s be honest, there are always going to be victims and it is always going to be a tragedy.

Price: Is there a risk that autonomous weaponry could encourage more destructive wars when soldiers’ lives aren’t at stake?

Boothby: There’s all sorts of possibilities, and that’s one of them. And then there's also the worry about what happens when autonomous technology gets in the hands of non-state actors.

So yes, maybe is my answer to this one. There’s a lot of speculation about some of these questions.

We delude ourselves if we look at one particular type of tech in isolation. I think we need increasingly to recognise that at the same time that autonomous technologies are being developed, other technologies are being developed as well — notably cyber.

And the minute you start thinking of autonomous technologies, you should then start worrying, or thinking, about the potential for cyber techniques to be used to get inside an enemy’s autonomous weapons system, and either take it over or distort its decision making, or whatever.

Equally, there are other challenges. A lot of autonomy is going to be based on the use of artificial intelligence. It’s going to be what I described in the second edition of my book, "Weapons and the Law of Armed Conflict," as artificial learning intelligence (ALI) as opposed to artificial intelligence simpliciter as it were.

What we’re talking about is the ability of a machine to learn lessons, and learn its own lessons — not necessarily the lessons it’s been told to learn.

So then you get into the question of, right, it may be learning lessons other than the ones you told it to learn, but have you told it which lessons it mustn’t learn, and have you thought through which lessons it ought not to learn, and why, and checked that the system you’re deploying is going to be safe from that perspective?

We’re still grappling with issues of accountability and morality.

"You’ve got a potential difficulty over whether the commander understands the nature of the tool he’s been given and it’s limitations and possibilities," Boothby says, "but then frankly you’ve got that with any high-tech weaponry." Fox Broadcasting Company

Price: How do you deal with the issue of accountability when a machine has committed wrongdoing? Some people make the argument that it will make it harder to define who’s accountable.

Boothby: I’m in the camp that says: Yes, there may well be some issues of accountability, but I don’t think there will be as many accountability issues as people would have you believe.

I think, for instance, that somebody who develops an autonomous system, and configures the software in such a way that it is foreseeably going to attack civilians or civilian objects illegally, is going to be just as much to blame as the person who alters the data inside the targeting software associated with a cruise missile with a view to it deliberately slamming into a block of civilian flats rather than a legitimate military target.

In that sense, command responsibility ought not to change. The responsibility of those setting up the system, the responsibility of those actually using the system, and so on and so forth, ought not to change. You’ve got a potential difficulty over whether the commander understands the nature of the tool he’s been given and its limitations and possibilities — but then frankly you’ve got that with any high-tech weaponry.

Here there’s the argument that this is different, that autonomy poses those risks in sharper focus. Well, maybe that’s the case. My guess is you could overstate that.

Price: More fundamentally, even if you can simulate human judgment, isn’t having that human judgment a required characteristic? It’s a moral question: Shouldn’t we on some level be unwilling to allow machines to take human life without human intervention?

Boothby: Well, that is the moral question, that’s the one you keep considering. So I agree with you, but it’s also a legal question, in the sense of the relative or evaluative aspects of targeting law that I mentioned to you earlier on.

It’s an evaluative process to determine, first of all, what military benefit I’m going to gain by attacking this target; secondly, what civilian loss and injury I am expecting this attack on this target to cause; and thirdly, what the relationship is between those two. In other words, is the civilian loss or injury or both excessive in relation to the anticipated military advantage?

Now, there’s a huge amount of evaluation in that which is peculiarly human in nature.

There is the requirement in targeting law to consider, before you mount a particular attack on a particular target, whether, by attacking a different target, you could achieve the same military advantage with reduced danger for civilians or civilian objects. Well are you going to develop a machine that’s going to do that? Or are you going to develop a methodology which is going to enable you to have that decision made in some other way independently of the particular machine? And then if, by making that decision, you decide the machine should not carry on doing what it’s doing, pull the plug, show it the red flag, show it the red card, tell it not to undertake that particular attack.

So this is where you’re talking in terms of potentially having a human being monitoring what the machine is doing. And this is the notion of meaningful human control, which is being discussed in the Convention on Certain Conventional Weapons deliberations taking place in Geneva this year.

Now the difficulty with that, of course, is — putting it somewhat flippantly — when is an autonomous system not an autonomous system? Answer: When it’s got meaningful human oversight.

What's unacceptable now might not be unacceptable forever ...

If you have an autonomous dishwasher, an autonomous plane, and an autonomous television, are you going to care about autonomous weapons? Blade Runner

Price: What is the most persuasive argument you’ve heard against autonomous weaponry?

Boothby: I will tell you the technology that causes me to pause for thought, and which I think could well constitute some sort of Rubicon: human brain-machine interfaces. This is a very long-term project, and there’s a lot of work going on.

I think that's going to raise incredibly fundamental questions about what it is to be a human being.

We’re talking decades into the future, but nonetheless I think now is the time for us to think about these things. What is it to be a human being, what is it to be a weapon system, or an implement, if you like, or a tool in the hands of a human being? What is the distinction between them? How far should you be incorporating brain activity within the tool, and vice versa?

What is the strongest argument against? I think it’s in the hands of the ethicists, not in the hands of the lawyers.

I think the right way of approaching this is to look at a particular system as it is developed, and what it’s intended to do, how it's intended to do it, how reliably it does what it’s meant to do, what the risks are of it not doing what it’s meant to, and what the consequences of that would be, and so on and so forth, and to ask ourselves: Is that acceptable?

Now when you are deciding what is acceptable, that is in my view a time-variable decision.

In the sense that what was acceptable in the 1800s or the 1850s, what was acceptable in the 1900s and the 1950s, and what will be acceptable in the 2000s and the 2050s are not the same thing.

And one of the factors that may drive the acceptability of autonomy is going to be the extent to which autonomy features in our daily lives in other fields.

Imagine that our washing machines and our televisions operate autonomously; that our computers autonomously decide when we’re going to get up and what we’re going to do and eat to fit in with our diaries; that our planes fly autonomously and our trains operate autonomously.

The more we become comfortable with all of that, the question is: Is that going to affect how we view the acceptability of autonomous weapons systems? We might turn around and say: 'Well yes, of course we’re going to have autonomy in our weapons systems.'
