April 27, 2022

8 mins read

AI's impacts on societies and international relations

What are the economic and societal changes brought about by the use of AI? And how could these technologies disrupt the global balance of power?

From dream to reality

Hardly a week goes by without some new application of artificial intelligence (AI) making the headlines. Whereas a couple of years ago these articles tended to describe exciting uses of this family of technologies, they now focus on poor predictions, built-in biases and the misuse of systems deployed in the private and public sectors alike. Growing attention is being paid to the potential drawbacks of AI-based systems and their causes. The questions being raised are no longer so much about what we can do with AI, but rather about what we are doing with it and for what purpose.

At the same time, a global competition is underway as international actors strive to position themselves at the forefront of AI innovation. This “race to AI” is all the more intense as these technologies have the potential to disrupt the distribution of power and control, affecting states, private companies, and individuals in very different ways. Their development and deployment bring unprecedented challenges within and between countries, resulting in potential upheaval on the international scene.

Transforming economies

AI is presented by many as having the power to revolutionize entire sectors, transform economic models, and change the way economic actors interact with each other. Yet potential economic gains and losses remain difficult to predict, as each country's economy has its own specificities and AI is not affecting all industries at the same pace. If AI proves to be as transformative as the industrial revolution, however, failing to adopt these technologies will have serious economic repercussions.

First, domestic labour markets could be affected by the adoption of AI-based systems. While the use of these technologies could lead to significant productivity gains (in sectors such as manufacturing and transport), they would most likely be deployed to replace manual and cognitive work, thereby diminishing the economic value of low- and mid-skilled workers. Given the speed of technological change, it is unclear how fast education systems could adapt to close the resulting skills gap between those who benefit (mostly high-skilled IT workers) and the rest of the workforce, which could lead to increased unemployment and inequality.

Second, companies that develop AI tools early on could enjoy a first-mover advantage: once productivity gains have been achieved, front runners (mainly located in developed countries) would be able to capture a growing share of their industry's profit pool and use these profits to hold off their competitors (through price competition, acquisitions, etc.). This could have significant international consequences. Among other things, less developed countries could face increasing difficulties in maintaining their global competitiveness. Moreover, AI-based technologies could make it economical for some manufacturers to repatriate production from poorer countries, widening international inequality even further.

Challenging democracies

AI can also threaten the foundational principles of democracy itself. Its apparent pervasiveness in both private and public hands is apt to breed confusion and concern. Whether used for the benefit of the people or against them, these technologies have the potential to influence what we do, and how and why we do it.

For starters, by collecting and cross-referencing data, private and public actors are able to segment people into groups and target them with different messages and content based on their personal attributes. These AI-enabled profiling systems steer us all towards specific products, services, views, and beliefs. They raise questions about our right to self-determination and could easily be used to deceptively manipulate populations.

Second, because of their self-learning capabilities, AI technologies (such as facial recognition and social scoring) are often portrayed as a solution to security problems in the public sphere. Nonetheless, and in addition to privacy concerns and numerous ethical, moral, and political questions, merely deploying infrastructure designed to single out individuals and track them across time and space increases the likelihood that these systems will be exploited, domestically or from abroad, for detrimental purposes.

Reshaping the public-private relationship

Another distinctive aspect of AI systems is that they are, for the most part, developed by private actors. The fact that the large majority of AI expertise lies within companies significantly shifts the power relationship between public and private entities. This evolution is pushing new kinds of public-private partnerships forward. The degree to which this development benefits all citizens depends on a range of factors that vary widely from country to country.

Indeed, big tech companies (mostly American) have collected enormous amounts of data over the past 20 years. As already mentioned, this new type of raw material can now be used by these private actors to train AI systems and gain a serious edge over competitors (and states). Likewise, thanks to direct and indirect network effects and to their massive profits, these companies attract and finance a large proportion of the talent capable of mastering the most advanced AI technologies. As a result, we are witnessing a centralization of power in which a few private actors control the entire value chain, from the production of semiconductors to the supply of digital products and services.

Simultaneously, the use of standards to support digital legislation is expanding, and standards bodies are expected to play an important role in the international regulation of AI. Originally conceived as forums for public and private stakeholders to set common standards and overcome technical barriers to international trade, these harmonisation bodies are increasingly expanding their scope beyond the implementation of mere technicalities. Given the “all-encompassing” nature of AI and the relative opacity of the standardisation process, this new kind of public-private partnership raises serious questions of democratic accountability.

Disrupting warfare

In the defence realm, the functions that could be supported by AI-based systems are numerous and could bring about tremendous changes in military affairs. AI is depicted by some as having the potential to alter the supposedly immutable nature of war. Others acknowledge its potential but primarily highlight the psychological effects these technologies will have on strategic affairs. What is certain is that AI-based military applications have the potential to “reshuffle the deck” and allow middle powers to compete with bigger military players.

The first thing that comes to mind when discussing AI and armaments is probably lethal autonomous weapon systems. The use of these AI-based machines, which have the power and discretion to take lives without human involvement, raises important political, legal and moral questions. As early as 2019, UN Secretary-General António Guterres was urging countries to take action to ban such weapons. Yet last year a UN report suggested that autonomous Turkish drones had been used to kill enemy troops in Libya.

In addition to fully fledged weaponry, AI is increasingly being integrated into cyber operations. Whether through direct cyber attacks on critical national assets or through the dissemination of fake news, cyber warfare and AI are intimately linked, each influencing and fuelling the other. These new threats are all the more dangerous as they are relatively easy to carry out (by public, private, legal, or illegal actors alike) and very difficult to detect.

Where to go from here?

Winning the “race to AI” is often portrayed as an existential necessity, a way to protect ourselves from digital authoritarianism. As French President Emmanuel Macron stressed in a 2018 interview,

“If you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution. That’s the condition of having a say in designing and defining the rules of AI.”

It is true that mastering AI systems could translate into global power. That being said, by promoting these technologies and relying on them ever more heavily, we are not making an innocuous choice. We are embracing a distinctive political and philosophical stance. Taking part in the “AI revolution” has strong societal and economic consequences that will affect states, private companies, and individuals very differently.

AI is not a “neutral tool”. Questions of morality, fairness, accountability, competitiveness and sovereignty will continue to arise as AI-based technologies develop and are further deployed. The disruptive potential of these technologies is real and could disturb (for better or worse) the global balance of power. Discussions on what the use of AI-based technologies entails at the national and international levels have only just begun; let's make sure they keep developing!

 

DecodeTech publishes opinions from a wide range of perspectives in hopes of promoting constructive debate about important topics.


The author works for the European Commission. The opinions presented in this article reflect the personal opinion of the author only and do not constitute an official position of the European Commission.

