A.I., Automation, and the Military: We cannot stop the inevitable

Artificial intelligence, and automation more generally, are topics which crop up in the news cycle with increasing regularity, often heralding either the onset of a new age of plenty and technological solutions to our most pressing problems, or, more sensationally, warning of a world of “killer robots” and various apocalyptic scenarios. No one at this point seems to seriously doubt that AI will continue to develop, and that as it does so it will have a dizzying number of effects on our society and our world, many of which we are only just beginning to recognise. The upshot, of course, is that as our societies incorporate AI more fully, we will also open ourselves up to a growing number of vulnerabilities (for a taste of this kind of thinking, see this sobering report).

As AI and its related technologies have continued to develop, an increasing number of governments, militaries, think-tanks and the like have begun (or increasingly need) to engage seriously with how it will affect defence and security. As with all new and promising technologies, there is no doubt that it will be at least partly captured for military ends, and, as the Campaign to Stop Killer Robots will tell you, such developments are already underway. Most recently, perhaps, we have seen Google decide not to renew its contract with the Pentagon to work on AI technology for military purposes, ostensibly following backlash from staff and a concerned public. This is an instructive episode, because it illustrates one of the key features of military AI development, and most significantly, why it is probably not something we can hope to prevent.

For what is probably the first time in modern history, the development of a technology of huge potential military significance is being led not by governments, or even a closely guarded defence sector, but by the private sector. Google, Amazon, Facebook, and a few non-household names such as OpenAI and DeepMind lead the field, themselves garnering public contracts and funding, rather than governments attempting to duplicate their research on what would undoubtedly be a much smaller scale. This makes any kind of international agreement on the development or deployment of AI extremely difficult. Regulation is no longer a matter of governments simply shutting down their own operations or enacting policy change; there are serious questions about how to regulate large international companies, and anything short of unanimous agreement (typically a fleeting illusion in international politics) is unlikely to be adequate. Governments are, generally speaking, extremely wary of regulating this relatively young and up-and-coming sector (or indeed, any sector in many cases) for fear of stifling innovation, or reducing economic growth and international competitiveness; as we have noted, the potential benefits of AI correctly handled are not disputed. No government wants to shut itself out of the next big innovation, with all the prestige and economic and political advantages it could entail.

Moreover, the dual-use nature of much of the technological development going on in this sector means that even if overtly military uses (whatever that means) of AI were banned, off-the-shelf solutions could probably be purchased by governments and modified to whatever ends they desire later. After all, the technology allowing a drone to follow a person while filming them is not a million miles away from what is needed to make it follow a target individual and detonate its hypothetical payload. Indeed, we already see this repurposing of commercial technology happening with drones today. This highlights another issue: because much of this technology sits in the public domain, it is far more likely to proliferate, especially given the open-source development that characterises a lot of work on AI. One need only look at the expanding number of companies and groups involved in AI and automation research, and the geographies in which they are based.

Looking back at the development of the space law regime in the 1960s and 70s, it was difficult but ultimately possible to reach international agreements on the use of outer space when the only significant actors were the two superpowers. Amending and updating that body of law in the wake of new political realities and technological developments has proven almost impossible, given the explosion of new governmental and commercial actors and that same desire not to handicap oneself with regulation when the potential benefits are so great.

Then we have to face the fact that there is a clear military rationale for such technology. Primarily, of course, the ability to deploy autonomous machines may mean we reach a point where human beings no longer need to be placed in harm’s way, freeing them up for other tasks or removing them entirely. In this sense autonomous systems can act as force multipliers, could mean financial savings compared with expensive human personnel, and could relieve the armed forces’ manpower problems more generally. Autonomous systems can be designed to handle the “dull, dirty, or dangerous” missions that any armed force must face more effectively than humans can, and can be deployed to hostile arenas where human troops cannot. Most fundamentally, autonomous systems can already process information faster than the human mind. This theoretically makes them capable of reacting and making decisions faster than a human soldier, which could provide a decisive edge in combat.

At this point it is important to look at the oft-made distinction between autonomous and fully-autonomous systems. Autonomous systems are those that operate with a human “in/on the loop”, to use the jargon. This means that while a system may recognise certain patterns or events and give suggestions to a human operator, it is ultimately always a human that makes the decision to use force. Fully-autonomous systems, as the name implies, are capable of operating fully independently, including target acquisition and the use of force. It is these fully-autonomous systems which are generally being referred to in discussions of a ban on “killer robots”. It is important to recognise, however, that fully-autonomous systems already exist and are operational, though they are entirely defensive in nature, such as the Phalanx anti-missile system notably deployed by the US Navy. These systems are designed to operate against incoming missile fire, where a human operator may have as little as eight seconds to recognise a threat and respond, and they generally perform much better under those conditions.

Western nations have adopted the stance that they will never deploy fully-autonomous weapons systems, but often argue that a ban is misplaced, again fearful of shutting themselves out of a possible revolution and concerned that actors less constrained by legal instruments could ignore a ban anyway. As with many things of this nature, it is always expedient to have the option of using fully-autonomous systems, even if one doesn’t plan on deploying them first. While at the moment the main arguments for always keeping humans involved in decision-making are ethical and moral in nature, this calculation could quickly change if an opponent were to deploy fully-autonomous systems against us, especially if they proved more effective and efficient than our own semi-autonomous systems. It may even become an ethical obligation to deploy fully-autonomous systems if they reach a point where they can be proven to make better decisions than human soldiers on average, especially in situations where civilians are involved.
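To make the in/on-the-loop distinction concrete, here is a minimal sketch in Python. The function names and data are entirely hypothetical and do not describe any real system; the point is only that the structural difference between the two modes is a single step, namely whether a human confirmation sits between detection and engagement.

# Illustrative sketch only: hypothetical names, not any real weapons system's logic.
def detect_threats(sensor_data):
    # Shared pattern-recognition step: flag contacts classified as threats.
    return [contact for contact in sensor_data if contact.get("is_threat")]

def human_in_the_loop(sensor_data, operator_confirms, engage):
    # "In/on the loop": the system only recommends; a human makes the call.
    for threat in detect_threats(sensor_data):
        if operator_confirms(threat):
            engage(threat)

def fully_autonomous(sensor_data, engage):
    # Fully autonomous: target acquisition and use of force with no human decision point.
    for threat in detect_threats(sensor_data):
        engage(threat)

The arguments about reaction speed above are, in effect, arguments about whether that one human confirmation step can be afforded.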

Given all this, it seems inevitable that AI and automation will continue to proliferate, and will continue to be examined and developed as military tools. Likewise, the prospect of a ban on their development or deployment seems small, and such an instrument would be unlikely to be effective anyway. Technology now develops so rapidly that legislating for it is difficult enough even in the domestic environment, and the international nuclear non-proliferation regime, while it has contributed to a slow-down in nuclear development, is evidently not perfect. Is there, then, any way we can attempt to control the development of this technology, at least so as not to create another arms race, or to bring some predictability and consensus to the fore?
