Technical standards: from track gauge to the ethics of AI
Technical standards and certification are increasingly used as regulatory instruments, and they are now being seriously considered in the field of AI ethics.
The EU Single Market: relying on standards
Nowadays there is hardly a line of business without regulatory technical standards: standards specifying the sizes of writing paper, for energy management, for scaffold width, for power cables, for aviation and space flight, for toys, for animal feed… the list is endless. The practice is hardly new: the Roman Empire already imposed its standards on conquered territories, notably for roads, construction and administration. A technical standard aims to harmonise a sector of activity and make universal use possible. The most telling example is track gauge: for trains to run across the widest possible network, the spacing between the rails had to be standardised.
The European Single Market ensures the free movement of goods, services, capital and persons or, in other words, the possibility to trade and do business freely. It is therefore logical that technical standards have become an integral part of its functioning. European standards are adopted by the three European standardisation organisations (ESOs): the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC) and the European Telecommunications Standards Institute (ETSI).
The “New Approach”
In order to ensure the free movement of products across all member states, the European Union (EU) adopted the so-called “New Approach” in the 1980s, which consists of setting essential safety requirements for products placed on the market. Products must meet these essential requirements in order to benefit from free circulation. The requirements are then translated into technical language by European harmonised standards. Consequently, a product that conforms to the relevant harmonised standard is presumed to comply with the mandatory essential safety requirements and can therefore be sold throughout the EU without trade barriers.
European standards have thus become the stars of the New Approach and of the Single Market. The EU touts the efficiency of its standardisation system in an ever more competitive business environment. To preserve Europe’s competitiveness and technological sovereignty, it is relying even more heavily on standards, especially in the digital field. The European Commission even intends to set up an “EU excellence hub on standards” in order to step up the development of standards.
EU Digital Agendas: more standards wanted!
In parallel, the EU adopted the first Digital Agenda for Europe in 2010 (which led, among other things, to the adoption of the GDPR), the Digital Single Market Strategy for Europe in 2015, whose goals were to develop digital goods and services and maximise the growth potential of the digital economy across Europe, and the second Digital Agenda for Europe in 2020. The latter addresses many issues, including human-centric and trustworthy artificial intelligence (AI). Here the EU faces the challenge of reconciling two objectives: boosting the EU’s industrial capacity on the one hand, and ensuring an ethical and legal framework on the other. In other words: staying competitive with the Chinese and American superpowers while safeguarding fundamental rights.
In this framework, a significant place is once again given to technical standards. The Proposal for an AI Act (AIA) illustrates this. The text takes a risk-based approach: the risk an AI system poses to health, safety or fundamental rights is classified as unacceptable (the system is prohibited), high (the system is subject to a compliance regime) or low. The proposal, an important part of the EU Digital Single Market Strategy, follows the same logic as the New Approach: the mandatory requirements for high-risk AI systems will be implemented in practice through harmonised technical standards.
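To make that tiered logic concrete, here is a minimal sketch in Python of how the risk classification maps onto market-access consequences. The tier names and consequences merely paraphrase the proposal, and the CV-screening example is hypothetical; this is an illustration, not legal advice.

```python
# Conceptual sketch of the AIA's risk-based logic. Tier names and
# consequences paraphrase the proposal; purely illustrative.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed, subject to a compliance regime via harmonised standards"
    LOW = "allowed, with minimal obligations"

def market_access(tier: RiskTier) -> str:
    """Map a risk tier to its regulatory consequence under the proposal."""
    return f"{tier.name}: {tier.value}"

# Hypothetical example: a CV-screening system classified as high-risk
print(market_access(RiskTier.HIGH))
```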
What solutions for the ethical challenges of AI?
But let us return to trustworthy and ethical AI. AI systems can be biased and lead to discrimination. For example, an automated decision can be skewed by criteria that discriminate against a category of the population (such as people of colour, women, or people with disabilities), or by the failure to take into account a contextual element specific to the person concerned. The challenge is then to ensure that AI systems respect human rights, especially the so-called “high-risk” systems. How exactly can a technical standard ensure that AI systems respect human rights?
The European Commission started to dig into this question by mapping the existing international standardisation initiatives on AI that are relevant to high-risk AI systems, with the goals of analysing how they relate to the requirements of the AIA proposal and, in the long run, of providing a roadmap for implementing the AIA.
This EU study considered the AI standards of the International Organization for Standardization (ISO) and of the Institute of Electrical and Electronics Engineers (IEEE). The IEEE is the world’s largest technical professional organisation dedicated to the advancement of technology. Among other activities, it develops global standards in a wide range of sectors, including energy, the Internet of Things, robotics, AI systems and nanotechnology. Well-known IEEE standards include those for WiFi and for the software development life cycle. The IEEE has set itself the goal of finding global ethical solutions despite the diversity of existing ethical traditions, placing human well-being as the metric for progress in the algorithmic age. To this end, it wrote the “Ethically Aligned Design” document and is currently creating the P7000 series of standards, entitled “IEEE Ethics in Action in Autonomous and Intelligent Systems”.
For example, a standard in the making is called “Algorithmic Bias Considerations” and proposes “benchmarking procedures and criteria for the selection of validation data sets for bias quality control; guidelines on establishing and communicating the application boundaries for which the algorithm has been designed and validated to guard against unintended consequences arising from out-of-bound application of algorithms; suggestions for user expectation management to mitigate bias due to incorrect interpretation of systems outputs by users (e.g. correlation vs. causation)”.
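To give a flavour of what such “bias quality control” on a validation data set could look like in practice, here is a minimal sketch. The data, group labels and tolerance threshold are hypothetical illustrations, not values prescribed by the draft standard; the metric shown is a simple demographic parity gap, one of many possible bias measures.

```python
# Minimal sketch of a bias quality-control check on a validation set.
# All names, data and the 0.10 threshold are hypothetical illustrations,
# not values prescribed by the IEEE draft standard.
from collections import defaultdict

def selection_rates(records):
    """Rate of positive decisions per demographic group."""
    positives, totals = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(records):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values())

# Hypothetical validation set: (group, automated decision: 1=granted, 0=denied)
validation = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]

gap = demographic_parity_gap(validation)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # illustrative tolerance, to be set per application
    print("Bias check failed: selection rates diverge across groups.")
```

In a real compliance setting, the choice of metric, the protected attributes and the acceptable tolerance would of course be fixed by the standard and the application context, not hard-coded as here.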
Operationalising ethics through technical standards?
Are we heading towards the operationalisation of ethical choices and human rights through technical standards developed by professional engineering associations? Can a technical standard, for example, set out a methodology for taking certain ethical values into account when designing an AI system? This is what the IEEE produced with its 7000-2021 Standard Model Process for Addressing Ethical Concerns during System Design, whose goal is to “enable organizations to design systems with explicit consideration of individual and societal ethical values, such as transparency, sustainability, privacy, fairness, and accountability, as well as values typically considered in system engineering, such as efficiency and effectiveness.”

How will the European standardisation organisations take these international standards on AI ethics into account? While the standardisation system is understandable for the safety of products such as toys, it seems less legitimate for the complex issue of AI ethics and fundamental rights. The development of human rights rules must not be disguised as a purely technical matter, and it must remain free from competitive pressures. If this path is nonetheless followed, a minimum level of transparency in stakeholder involvement must be guaranteed. To be continued!
Recommended reading:
- Arnaud Van Waeyenberge, La normalisation technique en Europe. L’empire (du droit) contre-attaque, Revue Internationale de Droit Économique, 2018, pp. 305-318.
- Yves Poullet, Éthique et droits de l’homme dans notre société du numérique, 2020.
- Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, 2016.