
EU AI Act: How risk is classified

The EU AI Act classifies AI systems into four risk levels: unacceptable, high, limited, and minimal risk. Each class carries different regulations and requirements for organizations developing or using AI systems. This article explains how AI systems and GPAI are classified and gives examples of high-risk use-cases.

We previously outlined the EU AI Act in a short article, where we showed that the EU is taking a risk-based approach to regulating AI systems. Different use-cases entail different levels of risk, which the EU co-legislators describe in their agreement. This article drills down into the risk classifications and the corresponding application areas to help you understand whether your AI use-case classifies as high-risk.

Current Status

After a long negotiation period, the European Commission, the Council, the Parliament, and the EU Member States reached a final agreement in early 2024. The EU AI Act is therefore expected to be implemented in mid-2024, with its first provisions enforced from late 2024.

Risk-Classifications according to the EU AI Act

The EU's Artificial Intelligence Act (AIA) sets out four risk levels for AI systems: unacceptable, high, limited, and minimal (or no) risk. There will be different regulations and requirements for each class.

Unacceptable risk is the highest level of risk. This tier covers eight AI application types (four in the original proposal, expanded during the negotiations) that are incompatible with EU values and fundamental rights. These are applications related to:

  1. Subliminal manipulation: changing a person's behavior without their awareness, in a way that causes them harm. An example would be a system that influences people to vote for a particular political party without their knowledge or consent.
  2. Exploitation of the vulnerabilities of persons, resulting in harmful behavior: vulnerabilities include a person's social or economic situation, age, and physical or mental ability. For instance, a voice-assisted toy that encourages children to do dangerous things.
  3. Biometric categorization of persons based on sensitive characteristics: this includes gender, ethnicity, political orientation, religion, sexual orientation and philosophical beliefs.
  4. General purpose social scoring: using AI systems to rate individuals based on their personal characteristics, social behavior and activities, such as online purchases or social media interactions. The concern is that, for example, someone could be denied a job or a loan simply because of their social score that was derived from their shopping behavior or social media interactions, which might be unjustified or unrelated.
  5. Real-time remote biometric identification (in public spaces): biometric identification systems will be completely banned, including ex-post identification. Exceptions can be made for law enforcement, subject to judicial approval and the Commission's supervision, and only for the pre-defined purposes of the targeted search for crime victims, terrorism prevention, and the targeted search for serious criminals or suspects (e.g. of trafficking, sexual exploitation, armed robbery, or environmental crime).
  6. Assessing the emotional state of a person: this applies to AI systems in the workplace and in education. Emotion recognition may be allowed as a high-risk application if it serves a safety purpose (e.g. detecting whether a driver is falling asleep).
  7. Predictive policing: assessing the risk that a person will commit a future crime, based on personal traits.
  8. Scraping facial images: creating or expanding databases with untargeted scraping of facial images available on the internet or from video surveillance footage.

AI systems related to these areas will be prohibited in the EU.
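
To make this list easier to operationalize, here is a minimal Python sketch of a first-pass screen against the eight prohibited practice areas. The shorthand keys are our own labels, not terms from the Act, and a simple lookup like this is of course no substitute for a legal assessment.

```python
# The eight prohibited ("unacceptable risk") practice areas, paraphrased.
# The keys are our own shorthand labels, not terms from the Act.
PROHIBITED_PRACTICES = {
    "subliminal_manipulation",
    "exploitation_of_vulnerabilities",
    "biometric_categorization_sensitive",
    "general_purpose_social_scoring",
    "realtime_remote_biometric_id",
    "emotion_recognition_work_education",
    "predictive_policing",
    "facial_image_scraping",
}

def is_prohibited(practice: str) -> bool:
    """First-pass screen against the prohibited-practice list (a sketch,
    not a legal assessment)."""
    return practice in PROHIBITED_PRACTICES

print(is_prohibited("general_purpose_social_scoring"))  # True
```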

High-risk AI systems will be the most regulated systems allowed in the EU market. In essence, this level includes safety components of already regulated products and stand-alone AI systems in specific areas (see below), which could negatively affect the health and safety of people, their fundamental rights or the environment. These AI systems can potentially cause significant harm if they fail or are misused. We will detail what classifies as high-risk in the next section.

The third level of risk is limited risk, which includes AI systems with a risk of manipulation or deceit. AI systems falling under this category must be transparent, meaning humans must be informed about their interaction with the AI (unless this is obvious), and any deep fakes must be labeled as such. For example, chatbots are classified as limited risk. This is especially relevant for generative AI systems and their content.

The lowest level of risk described by the EU AI Act is minimal risk. This level includes all other AI systems that do not fall under the above-mentioned categories, such as a spam filter. AI systems under minimal risk do not have any restrictions or mandatory obligations. However, it is suggested to follow general principles such as human oversight, non-discrimination, and fairness.

Note that these risk classifications are still subject to minor changes.
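
As a rough mental model of the tiering, the following Python sketch maps each risk class to the kind of regulatory consequence described above. The class names and obligation summaries are our own simplification, not official wording from the Act.

```python
from enum import Enum

class RiskClass(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited in the EU
    HIGH = "high"                  # allowed, but heavily regulated
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no mandatory obligations

# Simplified mapping of each class to its core regulatory consequence.
OBLIGATIONS = {
    RiskClass.UNACCEPTABLE: "banned from the EU market",
    RiskClass.HIGH: "conformity assessment, documentation, registration",
    RiskClass.LIMITED: "inform users they interact with AI; label deep fakes",
    RiskClass.MINIMAL: "voluntary codes of conduct only",
}

def obligations_for(risk: RiskClass) -> str:
    """Return the (simplified) obligations attached to a risk class."""
    return OBLIGATIONS[risk]

print(obligations_for(RiskClass.LIMITED))
```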

[Figure: The four risk classes of the EU AI Act (unacceptable, high, limited, and minimal risk), each subject to different rules.]

What counts as high-risk in the EU AI Act?

The high-risk classification of AI systems defined by the EU AI Act was one of the most controversial and intensely discussed areas, as it imposes a significant burden on organizations. As previously mentioned, it covers all AI applications that could negatively affect the health and safety of people, their fundamental rights, or the environment. To be placed on the market and operated in the EU, AI systems in this risk class must meet certain requirements.

One part that falls under this classification is AI systems related to the safety components of regulated products, i.e., products already subject to third-party assessments. These are, for example, AI applications integrated into medical devices, lifts, vehicles, or machinery.

Annex III of the AI Act identifies additional areas which would classify stand-alone AI systems as high-risk. These include the following (a minimal code sketch of this taxonomy follows the list):

(a) biometric and biometrics-based systems (such as biometric identification and categorization of persons),

(b) management and operation of critical infrastructure (such as road traffic and energy supply),

(c) education and vocational training (such as assessment of students in educational institutions),

(d) employment and workers management (such as recruitment, performance evaluation, or task-allocation),

(e) access to essential private and public services and benefits (such as credit-scoring and dispatching emergency services),

(f) law enforcement (such as evaluating the reliability of evidence or crime analytics),

(g) migration, asylum and border control management (such as assessing the security risk of a person or the examination of applications for asylum, visa, or residence permits),

(h) administration of justice and democratic processes (such as assisting in interpreting & researching facts, law, and the application of the law or in political campaigns).
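
Below is the minimal sketch referenced above, encoding the Annex III areas as a lookup table for a first-pass high-risk screen. The keys and descriptions paraphrase Annex III; an actual classification always requires a case-by-case legal assessment.

```python
# Simplified Annex III areas (paraphrased); keys are our own shorthand.
ANNEX_III_AREAS = {
    "biometrics": "biometric identification and categorization of persons",
    "critical_infrastructure": "road traffic, energy supply, etc.",
    "education": "assessment of students in educational institutions",
    "employment": "recruitment, performance evaluation, task allocation",
    "essential_services": "credit scoring, dispatch of emergency services",
    "law_enforcement": "evidence reliability evaluation, crime analytics",
    "migration": "security risk assessment, asylum/visa/residence applications",
    "justice": "assisting courts in researching and applying the law",
}

def is_annex_iii_area(area_key: str) -> bool:
    """Rough first-pass screen: does a use-case fall into an Annex III area?"""
    return area_key in ANNEX_III_AREAS

print(is_annex_iii_area("education"))  # True
```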

[Figure: High-risk AI systems under the EU AIA, categorized into the eight Annex III areas plus safety components of regulated products.]

Consult this article to learn how to meet the regulatory requirements if you develop or deploy a high-risk AI system. Note that law enforcement authorities may, with judicial authorisation, deploy high-risk systems related to public security without a prior conformity assessment.

The EU also plans to make an online register publicly accessible, listing all deployed high-risk AI systems and use-cases, as well as foundation models on the market (Article 60 of the EU AIA). Only law-enforcement agencies (police and migration control) may register their systems non-publicly, in a section accessible to an independent supervisory authority.

GPAI

While the original proposal of the EU AI Act did not mention general-purpose AI (GPAI) systems, such as those from OpenAI or Aleph Alpha, the EU updated its proposal in this regard during the negotiations. As we have seen above, the risk classification depends on the use-case of an AI system, which is difficult to pin down for a GPAI. The EU AI Act therefore differentiates between two risk classes for GPAI: non-systemic and systemic risk, depending on the computing power required to train the model. While all foundation models will need to meet transparency requirements, those posing a systemic risk face much stricter obligations. GPAI creators must also provide relevant information to downstream providers who use these models in high-risk applications.
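
The compute criterion is worth making explicit. Under the provisional agreement, a GPAI model is presumed to pose systemic risk when its cumulative training compute exceeds 10^25 floating-point operations (FLOPs); this figure may still change in the final text. The sketch below assumes that threshold.

```python
# Presumed systemic-risk threshold from the provisional agreement:
# total training compute above 1e25 FLOPs. Subject to change.
SYSTEMIC_RISK_FLOPS = 1e25

def gpai_risk_class(training_flops: float) -> str:
    """Classify a general-purpose AI model by training compute (sketch)."""
    if training_flops >= SYSTEMIC_RISK_FLOPS:
        return "systemic risk: stricter obligations (evaluations, reporting)"
    return "non-systemic: baseline transparency obligations"

print(gpai_risk_class(3e25))  # e.g. a frontier-scale model
```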

Publicly available open-source models can avoid the stricter requirements if their license allows for access, usage, modification, and distribution of the model and its parameters. This holds as long as the model is not tied to high-risk or prohibited applications and poses no risk of manipulation. Learn more about how the AI Act treats GPAI and GenAI systems in this article.
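
This open-source carve-out can be read as a simple conjunction of conditions, sketched below. The flag names are our own; whether a given license actually grants these freedoms is a legal question, not a boolean one.

```python
# Freedoms the license must grant for the carve-out (our own labels).
REQUIRED_FREEDOMS = {"access", "use", "modify", "distribute"}

def qualifies_for_open_source_exemption(granted: set[str],
                                        high_risk_or_prohibited: bool,
                                        manipulation_risk: bool) -> bool:
    """Sketch of the open-source carve-out described above."""
    return (REQUIRED_FREEDOMS <= granted
            and not high_risk_or_prohibited
            and not manipulation_risk)

print(qualifies_for_open_source_exemption(
    {"access", "use", "modify", "distribute"}, False, False))  # True
```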

Conclusion

The EU AI Act proposes a risk-based approach to regulating AI systems, with four levels of risk: unacceptable, high, limited, and minimal (or no) risk. Each level is subject to different degrees of regulations and requirements. Additionally, the AIA differentiates between non-systemic and systemic risk when it comes to GPAI.

Unacceptable risk is the highest level of risk and covers eight main types of AI applications incompatible with EU values and fundamental rights. These applications will be prohibited in the EU.

High-risk AI systems are the most regulated systems allowed in the EU market and include safety components of already regulated products and stand-alone AI systems in specific areas. This level imposes significant burdens on organizations and requires AI systems to meet certain requirements before they can be put on the market and operated in the EU.

Limited risk includes AI systems with a risk of manipulation or deceit. These AI systems must be transparent, and humans must be informed about their interaction with the AI.

Minimal risk includes all other AI systems not falling under the above categories. AI systems under minimal risk do not have any restrictions or mandatory obligations, but it is suggested to follow general principles such as human oversight, non-discrimination, and fairness.

GPAIs are subject to transparency obligations, which become stricter when a systemic risk exists, i.e. when the model is particularly powerful in terms of training compute.

If your use-case classifies as high-risk, you should start preparing today for the regulation and its extensive documentation requirements to make sure you stay competitive.

At trail, we help you fully understand your AI development process to mitigate possible risks early on and generate automated, audit-ready development documentation to minimize manual overhead. Contact us here to get started today, or learn here how we can help you cope with the EU AI Act.

[Last updated after the provisional agreement of the EU co-legislators on 08.12.2023]