
EU AI Act: A brief overview

As part of its AI strategy, the European Union proposed the EU AI Act to ensure AI is developed and used in a safe, reliable, and transparent way. The regulation classifies AI systems by their risk and has significant implications for organizations that develop or use AI systems within the EU. This article briefly summarizes the EU AI Act and how you can prepare for it today.

What is the EU AI Act?

Introduced in April 2021 by the European Commission, the EU AI Act is a first-of-its-kind legal framework that regulates the use of AI systems across the EU to ensure safety, reliability, and transparency. The EU wants to make sure that AI is aligned with existing laws on fundamental rights and Union values.
Because different AI applications pose different risks, the proposal follows a risk-based approach, which leads to a horizontal regulation (i.e., one applicable across sectors):

  • Unacceptable risk (e.g., social scoring, remote biometric identification, or subliminal manipulation)
  • High risk (e.g., AI in recruitment, law enforcement, finance/insurance, biometric identification, or safety components in regulated systems, such as medical devices)
  • Medium risk (e.g., interaction with a chatbot, deep fakes)
  • Low risk (e.g., spam filter)
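The tiered structure above can be sketched as a simple lookup. This is a minimal illustration, not a legal classification tool: the use-case names and the mapping are hypothetical examples drawn from the tiers listed above, and classifying a real system requires a case-by-case legal assessment.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # allowed, subject to conformity assessment
    MEDIUM = "medium"              # transparency obligations
    LOW = "low"                    # no additional obligations

# Hypothetical mapping of example use cases to the Act's tiers,
# following the examples given in the list above.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "chatbot": RiskTier.MEDIUM,
    "spam_filter": RiskTier.LOW,
}

def classify(use_case: str) -> RiskTier:
    """Look up the risk tier for a known example use case."""
    return USE_CASE_TIERS[use_case]
```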

Depending on its risk classification, an AI system may be prohibited outright, subject to specific requirements, or obligated to notify users that they are interacting with an AI. This applies to any AI system affecting a natural person in the EU. Read more about how AI risk is classified in this article.

The EU’s definition of AI

Update [11.05.2023]: The European Parliament agreed on a new, broader definition to align with the OECD's definition. An AI system under the EU AIA is now defined as a machine-based system that operates with varying levels of autonomy and produces predictions, recommendations, or decisions that influence physical or virtual environments.

This new definition is heavily criticized within the AI ecosystem, as it might cover simpler, rule-based systems that the public does not regard as AI, and because its notion of autonomy is broad.

Previously, the EU defined an AI system as software developed with machine learning or logic- and knowledge-based approaches that produces content, predictions, recommendations, decisions, or similar outputs that influence the environment the AI interacts with. This also includes AI systems that are part of a hardware device.

What are the benefits for society? 

The EU wants to make sure that citizens are safe from any negative consequences of AI. Thus, the EU AI Act aims to ensure that organizations using AI to make decisions do not discriminate against people and that these systems are not biased against certain groups based on race, gender, religion, or any other attribute.

Organizations using AI systems must be able to explain every outcome as well as the decision-making process behind it. This will pose significant challenges, as current AI development processes are often scattered and lack transparency; solutions that enable structured and transparent processes will be needed.

The EU AI Act also gives affected individuals the right to challenge decisions made by algorithms and have them reviewed by data scientists at the responsible organizations.

Current status and timeline of the EU AI Act

As of early 2023, the proposed EU AI Act is still under development and not yet finalized. The EU AI Act is expected to become effective in late 2023 or early 2024. After that, harmonized standards have to be established and translated into national laws – a process estimated to take another two years.

Update [11.05.2023]: The European Parliament recently voted on their amendments to the initial proposal of the European Commission. This means that the trilogues on the final form of the EU AIA between the Commission, the Council and the Parliament are expected to take place this summer.

Timeline of the EU AI Act: proposed in April 2021, it is expected to enter into force in 2024 at the earliest.

High-Risk applications in the EU AI Act

AI systems classified as high-risk can still be developed and used, as long as they fulfill the proposed requirements. These include a "conformity assessment" (or audit) to ensure that the AI system complies with the EU AI Act, which must be repeated whenever significant modifications are made to the system.

It will also be mandatory to monitor the risk and quality of the system while it is in use, which includes:

  • Technical documentation and record-keeping
  • Human oversight
  • Accurate, robust, and secure models
  • Transparency regarding the output and the model
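The record-keeping obligation above can be illustrated with a minimal audit-logging sketch. The Act does not prescribe a concrete schema; the function name, fields, and hashing choice here are assumptions for illustration only.

```python
import hashlib
import json
import time

def log_inference(record_store: list, model_version: str,
                  inputs: dict, output) -> dict:
    """Append an audit record for one model decision.

    Hypothetical schema: the EU AI Act requires record-keeping for
    high-risk systems, but does not mandate this exact structure.
    """
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        # Store a hash of the inputs rather than raw personal data.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    record_store.append(record)
    return record

# Example: recording one decision of a hypothetical credit model.
store = []
log_inference(store, "credit-model-v1", {"income": 52000}, "approved")
```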

Non-compliance with the EU AI Act can result in penalties of up to €30 million or 6% of the organization's global annual revenue, whichever is higher.
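Under the common reading of the proposal, the cap is the higher of the two amounts, so the fixed €30 million floor binds for smaller companies while the 6% share dominates for large ones. A quick arithmetic sketch:

```python
def max_penalty(global_revenue_eur: float) -> float:
    """Upper bound of a fine under the proposed EU AI Act:
    the higher of EUR 30 million or 6% of global annual revenue
    (as commonly read from the proposal's penalty provisions)."""
    return max(30_000_000, 0.06 * global_revenue_eur)

# For a company with EUR 1 bn revenue, 6% (EUR 60 m) exceeds the floor;
# for EUR 100 m revenue, the EUR 30 m floor applies instead.
max_penalty(1_000_000_000)
max_penalty(100_000_000)
```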

How to prepare

While the impact of the EU AI Act will be drastic and costly for some organizations, the good news is that there is still time to prepare and to adapt your AI systems to the law. Nevertheless, starting as soon as possible is important to ensure that the AI systems you are currently using and developing are compliant once the EU AI Act becomes binding. Otherwise, you risk having to take them offline or pay substantial fines.

As pointed out earlier, the EU's AI strategy is to make AI trustworthy, and the key element of that is ensuring transparency throughout all development stages. This not only makes compliance easier, but also helps bring everyone involved in AI development onto the same page.

We at trail want you to fully understand the whole development process, regardless of your technical background. Check out how we can help you with documentation, audits, and understanding your models and data during experimentation.