Policy Prototyping: an assessment of Articles 5 & 6 of the EU AI Act

Contributors:

  • Thomas Gils (CiTiP KU Leuven)
  • Frederic Heymans (CiTiP KU Leuven)
  • Jan De Bruyne (CiTiP KU Leuven)
  • Rob Heyman (CiTiP KU Leuven)

Published by: Knowledge Centre Data & Society

Context

The Knowledge Centre Data & Society monitors relevant EU policy developments and initiatives related to data and AI. An important example is the proposed AI Act (AIA), published in April 2021. This is a very comprehensive policy proposal that may have far-reaching impacts on actors in the AI ecosystem. As a Knowledge Centre, we want to be able to better assess that impact. We are therefore collecting feedback from different stakeholders by means of a policy prototyping exercise. We plan to use the results of this exercise to enrich the Flemish, Belgian and European debate on the AIA.

What is policy prototyping?

Policy prototyping (PP) is an innovative approach to policy making, comparable to product or beta testing. In PP, there is a phase in which draft or prototype rules are tested in practice before they are finalised and enacted.

Why did we do a PP of the scope of the proposed AI Act?

Flanders invests in AI, and the AIA draws some red lines that AI applications have to respect. For instance, an AI application can be classified as a “prohibited” or “high-risk” application, which has consequences in terms of the obligations to be respected. If the scope and definitions of the AIA are too broad or too strict, now is the time to adjust them by providing timely feedback.

The Knowledge Centre therefore uses a PP exercise to review the definition of ‘AI system’ and Articles 5 (prohibited AI practices) and 6 (high-risk AI systems) of the AIA. In the context of this exercise, the AIA in its current wording is thus the prototype. To test these provisions and simulate how organisations would comply with them, we have prepared two surveys.

How did we proceed?

To test the scope of the AIA, we created two surveys. The first is a checklist survey that helps to evaluate whether your AI application would fall under the scope of the regulation. If it does, you can find out whether the application belongs to the prohibited or high-risk category, and let us know whether or not you agree with that outcome.

The second survey assesses the clarity and usefulness of the concepts of the AIA and can be used to provide feedback if, for example, the definitions are unclear or inconsistent.

Where possible, we will also conduct some in-depth interviews with respondents to get a more accurate picture of their feedback.

Results

The first striking finding is that a majority of the respondents indicated that the definition of an AI system, as proposed by the AI Act, is clear. This is surprising considering the ongoing policy debate and the input we received from stakeholders during discussions outside of this exercise. A majority also indicated that the definitions of AI techniques in Annex I of the AI Act are clear. This is rather interesting, however, considering that some very broad terms are included in this list (e.g. statistical approaches or search and optimisation methods). Moreover, no respondent seemed to have an issue with the alleged lack of ‘technological neutrality’ of this list (i.e. the fact that it refers only to currently known AI approaches).

The second noteworthy finding from the survey results is the high number of AI applications that fall within the scope of the high-risk category. More specifically, 58 percent of survey participants indicated that their AI system is considered high risk under the AI Act, which is a significant proportion. However, 55 percent of this group disagrees with the risk level attributed to their applications.

Among participants whose AI application does not belong to the prohibited or high-risk category, there is less disagreement: 63 percent agree with the result, and only a small percentage of participants think their AI applications should fall under a stricter regime.

Participants were also invited to provide feedback on the definitions and concepts used in Articles 5 and 6 of the draft AI Act. More specifically, they were asked to assess the understandability and clarity of the various concepts used. Below, we present some insights from the survey.

Participants were asked to review the definition of the high-risk category. There are two routes into the high-risk category. Either an AI system is a product, or a safety component of a product, covered by certain Union harmonisation legislation (listed in Annex II, section A of the AI Act), and that product is required to undergo a third-party conformity assessment. Or, the AI system is applied in a certain sector for a specific purpose (as listed in Annex III).
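
Purely as an illustration, and not as part of the AI Act or of our surveys, the following minimal sketch (in Python) models this two-pronged test as a simple decision function. The attribute names are illustrative assumptions, not terms taken from the Act.

from dataclasses import dataclass

@dataclass
class AISystem:
    # First route: Annex II, section A
    is_product_or_safety_component: bool   # product or safety component covered by listed harmonisation legislation
    needs_third_party_assessment: bool     # that product must undergo a third-party conformity assessment
    # Second route: Annex III
    listed_sector_and_purpose: bool        # used in a listed sector for a listed purpose

def is_high_risk(system: AISystem) -> bool:
    first_route = (system.is_product_or_safety_component
                   and system.needs_third_party_assessment)
    second_route = system.listed_sector_and_purpose
    return first_route or second_route

# Example: a remote biometric identification system (an Annex III purpose) would be high risk.
print(is_high_risk(AISystem(False, False, True)))  # True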

Regarding the first type of high-risk AI systems, participants found that the harmonisation legislation listed in Annex II is in many cases not specific enough.

Concerning the second type of high-risk AI systems, we assessed the description of the different sectors and purposes. Regarding “biometric identification and categorisation of natural persons”, the description of the purpose (i.e. AI systems intended to be used for “real-time” and “post” remote biometric identification of natural persons) still raises many questions.

For example, the use of the term ‘remote’ creates uncertainty: it is not clear to participants whether it refers to physical distance, and what that would mean for fingerprint-based identification. The description of the sector also leads to confusion, as it mentions both “identification” and “categorisation”, while the latter term does not appear in the actual purpose description: there is no mention of assigning a person to a category.

Concerns also emerge regarding the sector of “Management and operation of critical infrastructure”, where the use of AI systems as safety components in the management and operation of road traffic and in the supply of water, gas, heating and electricity is considered to be high risk.

The text defines a safety component as follows: “a component of a product or of a system which fulfils a safety function for that product or system or the failure or malfunctioning of which endangers the health and safety of persons or property”.

For some participants, the term “safety component” is too general and not specific enough. Participants suggested including concrete examples in the preamble or recitals. The definition is also broader than what the term suggests: many parts of a product or system can cause health damage or safety risks if they fail or malfunction, without being thought of as safety components. One of the participants gives the following example: “e.g. pressure relief valve of a high-pressure cooking pot is a safety component. But the lid shouldn’t be categorised as such. However, a sudden crack in the lid (system component) can lead to health risks in case of failure. This makes the lid also a safety component.”

The definition of a safety component is also incomplete, according to some. In addition to people and property, the scope of the potential danger should also include animals or even flora.

The fifth sector of high-risk applications concerns “Access to and enjoyment of essential private and public services and benefits”. The second specific purpose under that sector refers to “AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small-scale providers for their own use”.

According to several participants, the reference to “AI systems put into service by small-scale providers for their own use” creates ambiguity: who are small-scale providers? SMEs? When is something considered small-scale?

The term “own use” was also considered too vague by some participants. Finally, participants questioned why there should be any difference at all between large and small-scale providers when it comes to their own use.

This first policy prototyping exercise (or rather experiment) led to some interesting findings, but their importance should be nuanced. Firstly, only a limited number of stakeholders provided feedback, which limits the representativeness of the findings. Secondly, our survey approach did not yield findings that enabled us to propose improved wording for the AI Act. That should, however, be the goal of policy prototyping: to not only test, but also improve the proposed policy. We will therefore adopt a new policy prototyping approach in the future, with increased stakeholder involvement and more interaction.

Related: Blog post written by Thomas Gils and Frederic Heymans


Last update: 2022.11.2, v0.1