The EU AI Act classifies AI systems into four risk levels: unacceptable, high, limited, and minimal risk. Each class carries different regulations and requirements for organizations developing or using AI systems. This article explains how AI systems and GPAI are classified and gives examples of high-risk cases.
We previously outlined the EU AI Act in a short article, where we showed that the EU wants to take a risk-based approach to regulating AI systems. Different use-cases entail different levels of risk, which the EU Commission describes in its proposal. This article drills down into the proposed risk classifications and the corresponding application areas to help you understand whether your AI use-case qualifies as high-risk.
While the EU AI Act slowly approaches its final phase, the debate about risk classification is still ongoing. The classification of general-purpose AI systems (GPAI) and the question of who is responsible for them have played a significant role in recent discussions, as have the definitions of and obligations for high-risk use-cases and AI systems. You can therefore expect changes to these parts of the EU AI Act, while the core structure of each risk classification probably won't change.
The EU's Artificial Intelligence Act (AIA) sets out four risk levels for AI systems: unacceptable, high, limited, and minimal (or no) risk. There will be different regulations and requirements for each class.
Unacceptable risk is the highest level of risk. This tier can be divided into eight (initially four) AI application types that are incompatible with EU values and fundamental rights. These are applications related to:
- subliminal or purposefully manipulative techniques that distort people's behavior,
- exploitation of the vulnerabilities of specific groups (e.g., due to age or disability),
- social scoring,
- real-time remote biometric identification in publicly accessible spaces,
- predictive policing based on profiling of individuals,
- emotion recognition in law enforcement, border management, the workplace, and education,
- biometric categorization using sensitive characteristics, and
- untargeted scraping of facial images to build facial recognition databases.
AI systems related to these areas will be prohibited in the EU.
High-risk AI systems will be the most regulated systems allowed in the EU market. In essence, this level includes safety components of already regulated products and stand-alone AI systems in specific areas (see below), which could negatively affect the health and safety of people, their fundamental rights or the environment. These AI systems can potentially cause significant harm if they fail or are misused. We will detail what classifies as high-risk in the next section.
The third level of risk is limited risk, which includes AI systems with a risk of manipulation or deceit. AI systems falling under this category must be transparent, meaning humans must be informed about their interaction with the AI (unless this is obvious), and any deep fakes must be labeled as such. For example, chatbots or emotion recognition systems are classified as limited risk.
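To illustrate, here is a minimal sketch of how a chatbot could satisfy the "inform the user" obligation. The disclosure wording and the `generate_reply` stub are our own illustrative assumptions, not something prescribed by the Act:

```python
AI_DISCLOSURE = "Note: you are chatting with an AI system, not a human."

def generate_reply(user_message: str) -> str:
    # Placeholder for the actual model call.
    return f"Echo: {user_message}"

def respond(user_message: str, is_first_message: bool) -> str:
    """Prepend an AI disclosure to the first reply so the user is
    informed about interacting with an AI."""
    reply = generate_reply(user_message)
    if is_first_message:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

print(respond("Hello!", is_first_message=True))
```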
The lowest level of risk described by the EU AI Act is minimal risk. This level includes all other AI systems that do not fall under the above-mentioned categories, such as a spam filter. AI systems under minimal risk do not have any restrictions or mandatory obligations. However, it is suggested to follow general principles such as human oversight, non-discrimination, and fairness.
Note that these risk classifications are subject to change, meaning that AI applications can be added to (or removed from), for instance, the high-risk section.
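To make the four-tier structure concrete, here is a minimal Python sketch of how the tiers and their consequences could be represented in an internal compliance tool. The tier names follow the Act, but the one-line obligation summaries are our own simplified shorthand, not legal text:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified shorthand for what each tier implies (not legal text).
TIER_CONSEQUENCES = {
    RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
    RiskTier.HIGH: "Allowed, but requires conformity assessment, risk "
                   "management, documentation, and human oversight.",
    RiskTier.LIMITED: "Transparency obligations: inform users, label deep fakes.",
    RiskTier.MINIMAL: "No mandatory obligations; voluntary codes of conduct suggested.",
}

for tier in RiskTier:
    print(f"{tier.value:>12}: {TIER_CONSEQUENCES[tier]}")
```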
The high-risk classification of AI systems defined by the EU AI Act is the most controversial and discussed area, as it imposes a significant burden on organizations. As previously mentioned, this includes all AI applications that could negatively affect the health and safety of people, their fundamental rights or the environment. To be put on the market and operated in the EU, AI systems in this risk-class must meet certain requirements.
One part that falls under this classification is AI systems related to the safety components of regulated products, i.e., products already subject to third-party assessments. These are, for example, AI applications integrated into medical devices, lifts, vehicles, or machinery.
Annex III of the AI Act identifies additional areas that classify stand-alone AI systems as high-risk (a simple programmatic sketch follows the list). These include:
(a) biometric and biometrics-based systems (such as biometric identification of persons),
(b) management and operation of critical infrastructure (such as road traffic and energy supply),
(c) education and vocational training (such as assessment of students in educational institutions),
(d) employment and workers management (such as recruitment, performance evaluation, or task-allocation),
(e) access to essential private and public services and benefits (such as credit-scoring and dispatching emergency services),
(f) law enforcement (such as evaluating the reliability of evidence or crime analytics),
(g) migration, asylum and border control management (such as assessing the security risk of a person or the examination of applications for asylum, visa, or residence permits),
(h) administration of justice and democratic processes (such as assisting in interpreting & researching facts, law, and the application of the law or in political campaigns), and
(i) recommender systems of social media platforms that are designated as very large online platforms under the Digital Services Act (such as the recommendation of user-generated content).
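For teams triaging many use-cases at once, these areas can be turned into a simple first-pass checklist. The sketch below is purely illustrative: the area labels and keywords are our own simplified paraphrases of Annex III, and a keyword match is a signal to investigate further, not a legal assessment:

```python
# Simplified paraphrases of the Annex III areas, mapped to example keywords.
ANNEX_III_AREAS = {
    "biometrics": ["biometric identification", "biometric categorization"],
    "critical infrastructure": ["road traffic", "energy supply"],
    "education": ["student assessment", "exam scoring"],
    "employment": ["recruitment", "performance evaluation", "task allocation"],
    "essential services": ["credit scoring", "emergency dispatch"],
    "law enforcement": ["evidence reliability", "crime analytics"],
    "migration and border control": ["visa application", "asylum application"],
    "justice and democracy": ["legal research assistance", "political campaign"],
    "social media recommenders": ["content recommendation"],
}

def flag_high_risk_areas(use_case_description: str) -> list[str]:
    """Return the Annex III areas whose keywords appear in the description."""
    text = use_case_description.lower()
    return [
        area
        for area, keywords in ANNEX_III_AREAS.items()
        if any(keyword in text for keyword in keywords)
    ]

print(flag_high_risk_areas("AI tool for recruitment and performance evaluation"))
# -> ['employment']
```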
While the original proposal of the EU AI Act didn't mention general-purpose AI systems, such as those from OpenAI or Aleph Alpha, the EU Commission recently updated its proposal in this regard. As we've seen above, the risk classification depends on the use-case of the AI system, which is difficult to pin down for a GPAI. Presently, the EU Commission suggests treating GPAI as high-risk systems, but allows exceptions if the provider of the GPAI explicitly excludes high-risk uses and makes a good-faith effort to prevent any "misuse" (see Title IA of the EU AIA).
The European Parliament also addressed GPAI in its latest amendments to the proposal: providers must guarantee the protection of fundamental rights, health, safety, the environment, democracy, and the rule of law. Generative models, such as ChatGPT, are subject to new transparency requirements, e.g. disclosing that content was AI-generated, designing the model to prevent it from generating illegal content, and publishing summaries of the copyrighted data used for training.
The EU also wants to make an online register publicly accessible, listing all deployed high-risk AI systems and use-cases, as well as foundation models on the market (Article 60 of the EU AIA).
The EU AI Act proposes a risk-based approach to regulating AI systems, with four levels of risk: unacceptable, high, limited, and minimal (or no) risk. Each level is subject to different degrees of regulations and requirements.
Unacceptable risk is the highest level of risk and covers eight main types of AI applications incompatible with EU values and fundamental rights. These applications will be prohibited in the EU.
High-risk AI systems are the most regulated systems allowed in the EU market and include safety components of already regulated products and stand-alone AI systems in specific areas. This level imposes significant burdens on organizations and requires AI systems to meet certain requirements before they can be put on the market and operated in the EU. GPAIs are in most cases also subject to high-risk obligations.
Limited risk includes AI systems with a risk of manipulation or deceit. These AI systems must be transparent, and humans must be informed about their interaction with the AI.
Minimal risk includes all other AI systems not falling under the above categories. AI systems under minimal risk do not have any restrictions or mandatory obligations, but it is suggested to follow general principles such as human oversight, non-discrimination, and fairness.
If your use-case qualifies as high-risk, you should start preparing today for the regulation and its extensive documentation requirements to make sure you stay competitive.
At trail, we help you fully understand your AI development process to mitigate possible risks early on and generate automated audit-ready development documentation to minimize manual overhead. Contact us here to get started today.
[Last updated after the European Parliament vote on 11/05/2023]