Killer Robots Will Make War Worse

What happens when "collateral damage" is up to an algorithm?

Imagine a missile swooping low out of the dark sky and exploding against the roof of a house. The family inside are killed instantly. The family next door, however, are not so lucky, and piercing screams soon fill the night; villagers rush to the front door, break it down, and begin to carry the badly injured into the street. As more and more people gather outside, an army of Insectdrones™ targets the crowd, swarming one-to-a-person and self-detonating on impact.

Now imagine this was carried out by robots, without any human control. Scary? Yes, and it could be our reality in the next decade.

Britain’s most senior soldier, the Chief of the Defence Staff, General Nick Carter, said earlier this month that “robot soldiers could make up a quarter of the British Army by 2030”. This is not idle speculation: autonomous systems (in effect, robots) make up a non-trivial part of the Ministry of Defence’s budget proposal currently under consideration by No 10. “I suspect we can have an army of 120,000, of which 30,000 might be robots, who knows,” Carter told Sky News.

If there is one thing guaranteed to cause speculation and reams of newsprint, it is killer robots. If something is depicted in science fiction and then features in real life, of course people get excited. The classic image of a killer robot is from the Terminator series of movies — there, I’ve done it, I’ve mentioned Terminator; now I can get on with the article — but it is highly unlikely that future military robots will be humanoid-shaped cyborgs. For one thing, humanoid forms are only mechanically efficient on rough and broken terrain: ‘killer robots’ are far more likely to be drone-like, or tracked (or swarms of both).

So, how long before Insectdrones™ are patrolling our streets?

It is worth stating upfront that warfare over the next 20 years will be a story of increasing autonomy. This is not just true of the British, with Carter — a known reformer — at the helm; it is true of Britain’s allies, and of potential adversaries too. The US plans to spend $1.7bn on researching autonomous systems (drones, robots and the like) in the next financial year. It is much harder to tell what China is spending on its military (or what autonomous systems it has under development), but it is already making extensive use of unmanned systems such as aerial drones, which have a degree of autonomy built into them.

The reason that the world’s militaries are investing so heavily in this domain is pretty obvious. Weapons systems without humans are much, much faster in attack and defence, helping you get inside your opponent’s decision-making cycle (the so-called OODA loop). They can also be much smaller and more robust, allowing them to get to places that humans can’t. They don’t tire or need feeding, and they don’t have morale problems. Finally, and particularly appealing to democracies, you can launch actions without risking casualties. In strict military terms, autonomous systems are a no-brainer.

But the lack of casualties is where the ethical problems begin. Imagine you are a leader who has to decide whether to launch a military strike. You would weigh the benefits of military success and the chances of achieving it (e.g. an enemy target destroyed) against the costs of the mission and the potential for failure (e.g. casualties, captured personnel, or making your country and military look weak). Using autonomous systems removes most of the downsides, which lowers the threshold for action — and so you are likely to launch more attacks.
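To see why the threshold drops, here is a minimal sketch, assuming a toy expected-value model of the decision just described. The function and every number in it are my own invention for illustration; none of it comes from the article, from doctrine, or from Carter's remarks.

```python
# Toy model of the strike decision described above. All figures are invented;
# the point is the structure of the calculation, not the values.

def strike_payoff(p_success: float, benefit: float,
                  casualty_risk: float, cost_if_casualties: float) -> float:
    """Expected payoff: chance-weighted benefit minus chance-weighted cost."""
    return p_success * benefit - casualty_risk * cost_if_casualties

# A manned raid: a real chance of friendly casualties makes the expected
# payoff negative, so the leader holds back.
manned = strike_payoff(p_success=0.7, benefit=100,
                       casualty_risk=0.3, cost_if_casualties=400)

# The same raid with autonomous systems: the casualty risk to your own side
# collapses, so the payoff turns positive and the strike goes ahead, even
# though nothing about the target or the intelligence has changed.
autonomous = strike_payoff(p_success=0.7, benefit=100,
                           casualty_risk=0.02, cost_if_casualties=400)

print(manned, autonomous)  # -50.0 vs. 62.0
```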

We have arguably seen this pattern already with the rise of unmanned aerial vehicles, or drones, over the past two decades. And it is not a good thing, even for those carrying out the attacks, because it encourages easy, seemingly consequence-free use of lethal violence. That means we tend to end up treating the symptoms of violence rather than the causes, and sometimes even becoming a cause of further violence through the ubiquity of our own. Scholars now argue, for example, that drone strikes along the Afghan-Pakistan corridor create rather than eradicate suicide bombers, generating feelings of shame and humiliation among the targeted communities. Put simply, these unmanned systems encourage us to be tactical rather than strategic.

But there is actually a much more profound problem: our entire ethical and legal systems are built upon human intentions and judgements.

Think about murder, for instance. In order to prosecute someone successfully for murder, one has to prove that the accused intended to kill the victim (this concept — mens rea, or intent — is technically necessary for almost all criminal prosecutions). So too with the Law of Armed Conflict, which rests on four overriding principles: military necessity, distinction (between military and civilian targets), proportionality, and the avoidance of unnecessary suffering. And in applying these concepts under the law, the judgement and intent of the soldiers and officers involved are taken into account.

How do we apply these concepts to autonomous systems?

Imagine that the attack on the village described earlier was carried out instead by a helicopter under human control. Depending on the circumstances, it could be argued that the action did not draw enough distinction between military and civilian targets, or perhaps it was a disproportionate use of firepower for a minor military target. Or perhaps the intelligence was wrong.

Let us now say that this incident became the subject of a court case. This is not unrealistic; in fact there are tens, if not hundreds, of cases being brought against the Ministry of Defence over the conduct of British soldiers in Iraq and Afghanistan. In a court case, we might expect the commanders in charge to be questioned about whether the attack was proportionate, or whether appropriate care was taken to distinguish between military and civilian targets. This is also not unrealistic: when dropping bombs on targets where there is a risk to friendly troops, the pilot will ask for the ground commander’s initials as his or her acceptance of the increased risk.

In short, most militaries go to great lengths to avoid breaking the Law of Armed Conflict, and a key way of doing this is making one person responsible for the use of lethal force, so that their judgement and intentions are on the line. And, fairly obviously, you cannot do this with autonomous systems. If innocent villagers die, who do we blame? The “commander” of the robot? The robot itself? The person who wrote the algorithm?

Going back to General Carter’s announcement, it was not made clear what roles these robots might play in the future British Army. Nor was it clear whether, if they were able to deploy lethal force, humans would be kept ‘in the loop’ (i.e. a human pulls the trigger, as with a drone), kept ‘on the loop’ (i.e. a human can stop the robot pulling the trigger, as with a heat-seeking missile once fired), or left ‘out of the loop’ (i.e. a fully autonomous system with no human oversight). This is probably because he doesn’t know.
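For those who prefer to see the distinction spelled out, here is a minimal sketch of those three modes, assuming a hypothetical would_engage() check of my own devising rather than any real weapon-control software.

```python
# Illustrative only: the three human-control modes for a lethal autonomous
# system, reduced to a single decision function. Not real weapon-control code.
from enum import Enum

class ControlMode(Enum):
    IN_THE_LOOP = "a human must authorise each engagement"
    ON_THE_LOOP = "the system fires unless a human vetoes in time"
    OUT_OF_THE_LOOP = "the system fires with no human oversight at all"

def would_engage(mode: ControlMode, human_authorised: bool, human_vetoed: bool) -> bool:
    """Would this hypothetical system open fire in a given situation?"""
    if mode is ControlMode.IN_THE_LOOP:
        return human_authorised      # nothing happens without a positive human decision
    if mode is ControlMode.ON_THE_LOOP:
        return not human_vetoed      # fires by default; a human can only interrupt
    return True                      # out of the loop: no human check exists

# The same ambiguous situation, with no human input either way:
for mode in ControlMode:
    print(mode.name, would_engage(mode, human_authorised=False, human_vetoed=False))
# IN_THE_LOOP False, ON_THE_LOOP True, OUT_OF_THE_LOOP True
```

The accountability question raised above turns on exactly this difference: only the first mode leaves a clear human decision to point to if innocent villagers die.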

These very real legal and ethical challenges are top of the in-tray for many countries: at the end of 2019 the US military issued guidance on the use of lethal autonomous systems that was notable for asking the US Congress to help find a way through the minefield, and explicitly putting the subject of arms control on the table for discussion (something I have written about here and here with other types of weapons).

The real problem with autonomous weapons systems comes when you set strict short-term military expediency against complex legal and ethical challenges that may only materialise in the longer term. What if, when facing a near-peer competitor, a country is attacked with fully autonomous systems — with all of the military advantages outlined above? Weapons systems with humans in the loop would be quickly overwhelmed. One can imagine that, in such a fight for survival, the current position of the UK and the US (and many other countries) — that humans should always exercise oversight, authority and judgement over lethal autonomous systems — might crumble rather rapidly.

This, rightly, terrifies everyone: warfare is a human endeavour, and it requires humanity in its prosecution. Without that, we all lose.

Source: UnHerd

6 Comments
  1. ke4ram says

    Wars are the result of people not holding their governments accountable. I suggest people look at the following site: Deagel, a military-oriented site that shows a USA population down to 99 million in 2025, a 70% loss. Many of the NATO nations are down 20-50% as well.

    This could be due to the vaccine, but I believe it is war... and by the looks of things the West loses horribly!

    https://www.deagel.com/forecast

    1. Saint Jimmy (Russian American) says

      All sides would lose another world war. Ordinary people would lose.

  2. Frank frivilous says

    If you were paying attention to the brief war between Azerbaijan and Armenia, you will know that drone warfare won cheaply and easily against formidable conventional forces. Combine that with the drone war that the U.S. and Israel have been waging against various insurrections recently and you will understand what the future holds for anyone that opposes the “corporatocracy”. We need to start investigating methods to overcome this type of warfare without compromising our own systems, e.g. EMP attack, etc.

  3. Undecider says

    Since, as with the USA, the British military is going full Libtard, they’re going to need robots to make up for their weaknesses.

  4. BADGER BADGERISM (GRANDWORLDDR says

    start handing out EMP GUNS

  5. bob says

    War is war; it can’t get any worse.
