The EU AI Act entered into force on 1 August 2024, and the provisions for high-risk AI systems will start to apply after a transition period of two or three years.1 In line with other pieces of product legislation, technical standards will define concrete approaches to meeting legal requirements in practice. Compliance with European harmonised standards, once they are published in the Official Journal of the European Union, provides a legal presumption of conformity with the regulation. In this way, standards support the establishment of a level playing field for the design and development of AI systems, in particular for small and medium-sized enterprises that develop AI solutions.
In May 2023, more than a year before the AI Act entered into force, the European Commission adopted a formal standardisation request, which was accepted by CEN and CENELEC. The drafting of these standards is under way, built on the consensus of a wide range of stakeholders. The standardisation request explicitly required measures to facilitate the participation of representatives from a wide range of sectors and organisations. Reaching consensus on key topics has often proved challenging, and the process has been slower than anticipated, even by standardisation stakeholders.
In this new Science for Policy brief, JRC researchers explain some of the factors unique to the standards needed for the AI Act. While the harmonised standards can draw on international standardisation activities from organisations such as ISO and IEC, there are elements in the context of the EU AI Act that are not captured in any of these activities.
One of the key differences is that standards for the AI Act must specifically address and prioritise the risks that AI could pose to the health, safety and fundamental rights of individuals. This can be considered a novel aspect for AI standardisation, as published and ongoing international standardisation work focuses more on protecting the objectives of organisations using AI.
This in turn also shapes the standards’ requirements for data governance: they should prescribe data governance and quality measures specifically tailored to these risks, rather than measures aimed only at broader organisational objectives.
Another important requirement for harmonised AI standards is that oversight of AI systems needs to lead to verifiable outcomes. This means that the effectiveness of measures implemented to prevent and minimise any relevant risks is ultimately demonstrated through testing, involving natural persons when needed.
As consensus on fundamental topics starts to emerge, steady progress is required to complete drafting on time. To understand the range of characteristics needed for the future standards for the AI Act, read the full policy brief here.
1 A transition period of three years is defined for systems embedded in products already subject to third-party conformity assessment identified in Annex I to the Regulation, and two years for other high-risk systems identified in Annex III to the Regulation.
Details
- Publication date: 25 October 2024
- Author: Joint Research Centre