May 31, 2023

14 mins read

What the military needs to learn from the commercial AI sector’s mistakes

Regulating AI is necessary sooner rather than later. Major actors worldwide are taking action, but one domain remains something of a figurative minefield: military AI.

To ban or not to ban?

A popular position in the West, maintained by many NGOs, academics and civilians alike, is that the development of military AI is alarming and threatens a safe, democratic and moral society. In particular, though not exclusively, much of the discussion concerns Lethal Autonomous Weapons (LAWs) - weapons that can operate and “engage” with humans with little or no human input. Many are calling for outright bans, some even on researching the technology, out of fear of potential misuse and its consequences. Other military AI systems are not exempt from this outcry either: weapons becoming ever “smarter”, fully autonomous or not, has many worried that weapons of mass destruction might soon become commonplace in war and easily obtainable by any despot, terrorist or other actor with nefarious intent. Thus, the outcry for bans or limits is rising (Russell, 2023).

Another perspective is that the consequences of not developing military AI would be cataclysmic. Because of the great advantage that such systems offer to those who possess them, and the inherent threat to those who do not, it is imperative to develop military AI at the very least as a deterrent, if not as an active defensive tool. Russia’s ongoing invasion of Ukraine has given new wind to this stance, as it has once again brought war to the forefront of Western minds. The central argument goes that, since we cannot truly expect that no one will ever develop dangerous military AI, one must stay ahead of the curve as much as possible. This often goes hand in hand with statements about the threat of a particular nation’s supremacy, threats to the world order, or similar concerns. Two opposing sides have thus materialised in this debate: those who would rather never open Pandora’s box, and those who would open it as fast as possible to prepare for what’s to come.

Both have received their fair share of criticism. We indeed cannot expect all actors to agree to stop the development of military AI, especially since, unlike nuclear weapons, lethal autonomous systems are achievable to some extent by anyone with AI knowledge and a drone. On the other hand, an AI arms race to “stay ahead of the curve” would have dire consequences for human lives and continuously raise the stakes of warfare. While there is a persistent and popular claim that LAWs could distinguish soldiers from civilians and thus help save innocent lives, the veracity of such claims is dubious, all the more so since the technology remains in many ways uncertain and untrustworthy. Furthermore, an arms race risks only widening the gap between developed and developing nations further, and in general, AI systems deployed prematurely and irresponsibly will ultimately be to the detriment of citizens and innocents worldwide.

Current state of negotiations

These lines of argument have been going back and forth for years. Negotiations on autonomous weapons under the UN’s Convention on Certain Conventional Weapons (CCW) have been taking place in Geneva but are, for now, at a standstill. The recent summit on Responsible AI in the Military Domain (REAIM) marked an important step towards a global discussion about how to regulate military AI, but it also served to highlight the divide between the two stances mentioned above. Beyond international humanitarian law, the world currently lacks any framework for addressing AI-assisted military tools or LAWs.

There is still a long way to go, and there is a real concern about time running out. We run the risk of failing to properly address these issues while arguing from opposite ends of the spectrum (e.g. ban military AI entirely, or develop it as much as possible). The longer the stalemate lasts, the likelier it becomes that a conflict arises and that, during such a conflict, AI systems are deployed prematurely out of necessity and for lack of progress towards global agreements. The consequences could be addressed, mitigated or perhaps entirely avoided if we resolve these discussions sooner rather than later. We need to set norms now and decide on an approach to military AI as soon as possible.

As I see it, despite diplomacy being the ideal method of conflict resolution, it seems unlikely that militaries around the world will back down from maintaining and improving their capabilities - AI is too great a tool not to be used in military applications. While many countries are ardently in favour of banning LAWs at the very least, leaving other AI applications to the discretion of the state in question, others have opted for interesting approaches to tiptoe around the issue of LAWs:

  • Germany, for instance, has said it will only consider a weapon to be a LAW if it has “the ability to learn and develop self-awareness” - the first part is already a given with some AI systems, while self-awareness is so frustratingly vague that there is enough wiggle room to do nearly anything.
  • China, on the other hand, has stated that as soon as a system is able to distinguish between civilian and military targets, it no longer counts as autonomous and should thus not be banned. Many systems may seem on the surface to be capable of this already, but it is highly unlikely that they would ever live up to such moral intentions in practice.

(As a short tangent: AI systems like this might, for example, target individuals in uniform or carrying military equipment, but this creates easy workarounds, such as wearing civilian clothes to escape detection. Alternatively, soldiers could wear so-called adversarial patches: printed patterns that confuse object recognition systems into detecting, say, a giraffe instead of a person. Such vulnerabilities in turn spur changes to the AI system that circumvent the “do not fire at civilians” rule, which defeats the purpose entirely. In practice, these patched-in changes would not be tested as thoroughly as they should be, so a rule like “fire at civilians if they have a weapon” might end up classifying anything from a stick to a backpack as a weapon.)
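To make this brittleness concrete, below is a minimal, purely illustrative sketch of the simplest relative of such attacks: a fast gradient-sign (FGSM) perturbation that nudges every pixel of an image just enough to flip a standard image classifier’s prediction. The model (a torchvision ResNet-18), the input file name and the perturbation budget are assumptions made for the example; real adversarial patches instead optimise a localised, printable pattern over many images and viewpoints, but they exploit the same underlying weakness.

```python
# Illustrative sketch only: a one-step gradient-sign (FGSM) perturbation that
# flips the prediction of an off-the-shelf classifier. The model, file name
# and epsilon are assumptions for the example, not part of any real system.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
to_tensor = transforms.Compose([transforms.Resize(256),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])

# Hypothetical input image.
x = to_tensor(Image.open("person.jpg").convert("RGB")).unsqueeze(0)
x.requires_grad_(True)

logits = model(normalize(x))
pred = logits.argmax(dim=1)            # class the model currently assigns

# One gradient-sign step away from the current prediction.
loss = F.cross_entropy(logits, pred)
loss.backward()
epsilon = 0.03                         # barely visible pixel-space budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0)

adv_pred = model(normalize(x_adv)).argmax(dim=1)
print(pred.item(), "->", adv_pred.item())   # often a completely different class
```

Defences against such attacks exist, but they tend to trade off accuracy and remain an active research area - which is precisely why claims that a system can reliably tell civilians from combatants should be treated with caution.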

Commercial sector: bad habits and excuses

Both of these stances on LAWs reflect, at best, confusion about AI. At worst, they are a sign of what philosophers have called “ethics washing”: making public statements that sound as if the actor in question is responsible and trustworthy, while never actually intending or needing to comply with them, or while compliance is in some other way redundant. This has been widely observed in the commercial AI sector, where the term was first coined. Google, Facebook and other tech giants have for years been publishing and applauding their own ethical guidelines for responsible use of AI while simultaneously disregarding ethical and moral concerns. For example, several big tech companies established ethics boards to inform their way of working, only to quietly disband them shortly afterwards or fire the employees working on them for dubious reasons (see the scandal of Google firing ethics researchers Timnit Gebru and Margaret Mitchell).

The main danger, in my opinion, is that in the event of conflict, AI will see much the same pattern of application in the military domain as it has in many commercial domains: irresponsible over-application without sufficient consideration of the consequences. Given the hype around AI and the fact that it can, at least theoretically, be deployed to solve such a wide range of problems, I fear that militaries will, just as commercial actors have done, take the bait and join the “race to the bottom” with AI.

This is already happening to some extent. Germany and France have proposed that the CCW agree to a non-legally binding political declaration rather than a legally binding instrument - exactly what many tech giants have argued, and continue to argue, for instead of binding AI regulation, despite the sorely apparent need for it. Russia, the United States and South Korea have opposed proposals to negotiate on LAWs, calling such a move “premature”, much in line with how AI tech companies first reacted to proposals for AI regulation. Interestingly, China has opted for a different approach, calling for a ban on the use, but not the production, of autonomous weapons.

The fact that governments are falling into the same traps as companies is worrisome, and the lack of regulation of AI in the commercial sector has already enabled some truly questionable algorithms with disastrous results for the people they affect. A notable recent example:

  • The Allegheny Family Screening Tool is an algorithm meant to flag families in the US for investigation by social workers for potential child abuse or neglect. It has instead separated families for questionable and unclear reasons. For instance, one family brought their daughter to a hospital because she was refusing to eat; they were subsequently flagged as “potentially abusive” by the tool and their daughter was taken away from them, with no path for the parents to contest the algorithm’s decision or receive a clear explanation of why they were flagged (the most likely explanation seems to be that one parent had had a stroke and the other had been diagnosed with ADHD, and the tool has been known to discriminate against people with disabilities). The algorithm has displayed worrisome bias not only against disability, but also along lines of race and financial situation. If AI can go so wrong in the civilian sector, it bodes ill for the military sector as well.

Regulation as the solution

Just as companies have preferred free rein on AI and sought to avoid regulation, militaries will most likely do the same, despite the disastrous consequences we have already seen in the commercial sector. Setting the issue of banning LAWs aside for a moment, irresponsible development of any military AI system is likely to have dire consequences from an ethical perspective. For the more practically minded, however, I would also argue that it is harmful to the actual usefulness of the systems themselves.

While “all is fair in (love and) war” has endured as a quote for good reason, developing military AI systems with the same mindset as Facebook’s “move fast and break things” would be disastrous for citizens, but notably also for the military itself. Why? Quite simply, trustworthy and ethical AI is the better choice for the very usefulness of the systems. The following are all aspects of AI tightly linked to trustworthiness that are usually neglected (especially in the commercial sector) but definitely should not be in the military sector, if not for ethical reasons, then for practical ones:

  • Explainability and Transparency: explainability is one of those things academics in AI talk about often, but that practitioners could in many ways not care less about, unless their clients demand it. Fortunately for us, military AI would be of little practical use were it not explainable.
    • Transparency makes it much easier to improve a system, to trust it, or to know when not to trust it - for instance, if a system used for spotting potential human movement over a surveilled area is known not to work well under some conditions (say, poor weather, for a simple example), it is much better if a human can decide to bring in additional observers until the weather clears up (a minimal sketch of such a fallback follows after this list).
    • In a military context, if a system is not understood, it becomes increasingly unlikely to be obeyed, even when it is correct. Say a system recommends a driving route through difficult terrain to avoid obstacles or other dangers. If the route planner suddenly instructs a big detour around an area that seems completely safe to the humans using it, it is likely that, at some point, the operators will wrongly disregard its instructions.
    • Lack of explanations makes a system seem unreliable and unpredictable - if that is a bad characteristic in a soldier, it is just as bad in a tool.
    • The EU’s AI Act cites documentation, risk management and transparency as requirements for certain high-risk systems - these would be a highly useful resource for drafting concrete transparency and explainability requirements for military AI.
  • Frugality: this involves lowering emissions and hardware usage. The need for frugality is inherent to military contexts, which demand lightweight, easy-to-deploy and resource-efficient solutions. It can only be beneficial for military AI to need little power and little hardware to operate, as this makes it faster and more versatile to deploy. Interestingly, lower resource consumption and hardware requirements also place an environmentally minded demand on military AI that is not currently made of commercial AI.
  • Independence: independence involves sovereignty over AI systems, in the sense that the vendors or AI technicians cannot be present alongside each deployed system, either to interpret its results (notice that this ties back to the explainability requirement) or to fix it in case of malfunction. Something like a “subscription service” to military AI would be ludicrous and would put tech companies in unacceptable positions of power - it is crucial that military AI systems, once in action, can operate independently of their creators. Losing access to a tool or a project because the original creator ends its activity or support would be disastrous. Thus, having systems that can be independently deployed (and understood) by their users (in many cases soldiers) will be of crucial importance for any military that wants to deploy AI tools widely.
  • Ethical considerations: over- and under-trust of AI products influences how they are used. Soldiers who do not trust that the AI acts “well” or reliably will not use it. It is crucial to ensure that the systems do not operate with a large moral gap from their users. Even from a purely commercial perspective, building systems that consider the people at the other end of the field ensures that the people building the solutions do a better job, improves staff retention (Google, for example, has been repeatedly hit by resignations over controversial projects with the Pentagon, and AI godfather Geoffrey Hinton also resigned from Google over his concerns about AI), and improves collaboration for all involved, ultimately enhancing the military AI solution.
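To illustrate the kind of human fallback described in the transparency bullet above, here is a minimal, purely hypothetical sketch: a wrapper that refuses to report autonomously when a fictional movement detector is operating outside the weather conditions it was validated for, or when its confidence is low, and defers to human observers instead. Every name and threshold is invented for the example.

```python
# Illustrative sketch only: defer to human observers when a hypothetical
# movement detector is outside its validated operating envelope or unsure.
# All names and thresholds are invented for this example.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

@dataclass
class Conditions:
    visibility_km: float   # e.g. from a weather feed
    precipitation: bool

VALIDATED_MIN_VISIBILITY_KM = 5.0   # assumed limit of the validated envelope
MIN_CONFIDENCE = 0.9                # assumed decision threshold

def review(detection: Detection, conditions: Conditions) -> str:
    """Return an auditable decision string instead of acting silently."""
    if conditions.precipitation or conditions.visibility_km < VALIDATED_MIN_VISIBILITY_KM:
        return "DEFER_TO_HUMAN: outside validated weather envelope"
    if detection.confidence < MIN_CONFIDENCE:
        return "DEFER_TO_HUMAN: low confidence"
    return f"REPORT: {detection.label} ({detection.confidence:.2f})"

# Example: a confident detection made in heavy rain still gets escalated.
print(review(Detection("movement", 0.95), Conditions(visibility_km=2.0, precipitation=True)))
```

The point is not the specific thresholds but that the system exposes an auditable reason for every decision, which is exactly what makes it possible to trust it, distrust it, or improve it.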

Such technical requirements would in any case be necessary for AI systems to work as intended, and in a trustworthy manner, in the military. Regulation will only help improve the quality of these systems: just as Facebook’s slogan of “move fast and break things” did not stay appropriate forever, the same goes for countries’ approach to military AI. It is therefore imperative to develop at least some form of regulation to avoid falling into the same trap the commercial field struggled with until recently.

For the commercial sector, there is luckily a light at the end of what was starting to become a very dark tunnel. Thanks to international efforts and calls for regulation of AI development, the need for safeguards has become increasingly clear. The European Commission’s Artificial Intelligence Act is currently the first proposal for serious regulation of commercial AI systems (though it does not cover military applications, unless they are dual-use and have commercial uses as well, for example drones). It sets up a tier system based on the risks an AI system poses and regulates each tier differently, setting requirements or bans depending on that system’s impact. The AI Act contains many requirements that I would argue similar military conventions could adopt (cybersecurity and robustness, to name a few), expanding on those I outlined above. I have already written a piece detailing this elsewhere, so if interested, it can be found here.
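For readers unfamiliar with the Act’s structure, the sketch below is a rough, simplified schematic of that tiered logic expressed as a small data structure; the tier names are paraphrased and the obligation lists are illustrative rather than a restatement of the legal text.

```python
# Rough, simplified schematic of the AI Act's risk tiers; paraphrased and
# illustrative, not the legal text.
AI_ACT_TIERS = {
    "unacceptable_risk": {
        "treatment": "prohibited",
        "examples": ["social scoring by public authorities"],
    },
    "high_risk": {
        "treatment": "allowed, subject to requirements",
        "obligations": ["risk management", "technical documentation",
                        "transparency", "human oversight",
                        "accuracy, robustness and cybersecurity"],
    },
    "limited_risk": {
        "treatment": "transparency obligations",
        "obligations": ["disclose that the user is interacting with an AI system"],
    },
    "minimal_risk": {
        "treatment": "no additional obligations",
    },
}

def obligations_for(tier: str) -> list[str]:
    """Look up the illustrative obligation list for a given tier."""
    return AI_ACT_TIERS.get(tier, {}).get("obligations", [])

print(obligations_for("high_risk"))
```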

In short, the way AI development and use has progressed in the commercial field can serve as guidance on what to avoid and what to strive for in the military. It currently seems that countries, just as companies were long able to do, are content to settle for non-binding commitments. Yet promises without consequences are not really promises at all, and it is important that states, just like companies, are held accountable and do not shirk their responsibilities, however difficult they may be. I have furthermore argued that regulation, while something some actors want to avoid, would actually be beneficial in numerous ways in a military context. War is, by its nature, messy, and AI systems traditionally do not deal well with unpredictable environments. Developing AI systems that must uphold certain requirements will make them more reliable and thus more trustworthy, and could, down the line, perhaps also help avoid unnecessary deaths and breaches of international humanitarian law. Thus, it is imperative to strike the balance between forgoing AI development entirely and an all-out AI arms race: and that balance is responsible regulation.

 

DecodeTech publishes opinions from a wide range of perspectives in hopes of promoting constructive debate about important topics.


Categories:

Opinions, Explained

Picture credits:

Dose Media