
EU AI Act: 10 things high-risk companies need to know!

The EU AI Act establishes requirements and obligations for providers of high-risk systems. We summarized them in 10 points that companies in high-risk sectors should be aware of.


The EU has proposed the EU AI Act to ensure trustworthy and ethical AI across Europe. It introduces a range of requirements and obligations for companies providing or using AI, especially in so-called “high-risk areas”. Find out if your AI system qualifies as high-risk and what that means for you below!

Key Takeaways:

  • Companies that fail to comply with the AI Act may face significant penalties, including fines of up to 7% of the company’s annual global turnover or €35 million, whichever is higher, for violations involving prohibited systems.
  • If your AI system qualifies as high-risk, you must meet various requirements during development and post-market.
  • Key obligations for high-risk providers include setting up an extensive quality management system, keeping logs, preparing detailed technical reports that act as a basis for audits, undergoing a conformity assessment, and implementing human oversight, among others.

What is the EU AI Act?

The EU AI Act was proposed by the European Commission on 21 April 2021 and approved by the EU co-legislators and the EU Member States at the beginning of 2024. It aims to regulate AI across the EU to make it trustworthy and ethical. The Act introduces a risk-based approach, specifying four different levels of risk: unacceptable risk, high risk, limited risk, and minimal risk. It also regulates general-purpose AI (GPAI) extensively. The compliance obligations for companies vary according to these categories, with companies in the high-risk category having the most obligations to fulfill.

Does my AI system qualify as high-risk?

The EU defines high-risk AI systems as those that have the potential to cause significant harm to EU citizens or the environment. Examples of high-risk AI systems include those used in critical infrastructure, transportation, and healthcare. The Act also includes AI systems that are used to make decisions that have legal or similarly significant effects, such as credit scoring or hiring decisions.

Consult this article to find out whether your system qualifies as high-risk, or see Annexes I and III of the EU AI Act, which provide an extensive list of AI systems that qualify as high-risk.

Be cautious: failing to comply can cost you millions!

Companies that fail to comply with the AI Act and its extensive measures for high-risk AI systems may face significant penalties. The Act introduces fines of up to 7% of the company’s annual global turnover or €35 million, whichever is higher, for violations involving prohibited systems. All other violations can draw fines of up to €15 million or 3% of global annual turnover. Companies may also face reputational damage and legal action from affected individuals.

This is what’s coming: The 10 Requirements & Obligations for High-Risk Systems

Chapter III of the EU AI Act establishes requirements and obligations for providers of high-risk systems. We summarize them below to give a better overview of what lies ahead for companies in high-risk sectors. The AI Act has been criticized for not specifying how to implement these requirements, which shifts the focus to harmonized standards that have yet to be developed.

The EU AI Act currently attaches a single set of generic requirements to all high-risk AI systems, but the requirements should be adapted to different applications to minimize overhead and ensure the appropriateness of measures.

Overview of EU AI Act High-Risk Obligations

1. Set up a Quality Management System that includes the following:

  • Risk management system as a continuous & iterative process throughout the entire lifecycle to evaluate and mitigate all possible risks and ensure adequate design & development of the AI system.
  • Data management system to enable data governance. Split and document data in training, testing, and validation data sets. Formulate relevant assumptions before assessing the data sets and document quality and distribution, including bias detection tests.
  • Post-market monitoring processes & reporting of incidents.
  • Test processes for the AI system throughout the development process and before placing the system on the market. Define suitable test metrics and thresholds before starting development.
  • Accountability framework with responsibilities.
  • Reporting proportionate to the size of the organization.
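The data-governance point above, splitting data into documented training, testing, and validation sets, can be sketched in Python as follows. This is a minimal illustration under our own assumptions; the function name, split ratios, and manifest fields are hypothetical and not prescribed by the Act.

```python
# Minimal sketch: reproducible data split plus a documentation manifest.
# All names and ratios are illustrative assumptions, not requirements of the Act.
import json
import random

def split_and_document(records, seed=42, train=0.8, test=0.1):
    """Split records into train/test/validation sets and record the split."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train)
    n_test = int(n * test)
    splits = {
        "train": shuffled[:n_train],
        "test": shuffled[n_train:n_train + n_test],
        "validation": shuffled[n_train + n_test:],
    }
    # Documentation artifact: seed and set sizes, kept alongside the data
    # so the split can be reconstructed and audited later.
    manifest = {
        "seed": seed,
        "total": n,
        "sizes": {name: len(items) for name, items in splits.items()},
    }
    return splits, manifest

splits, manifest = split_and_document(list(range(100)))
print(json.dumps(manifest["sizes"]))
```

A real data-governance process would additionally document distributions and bias-detection results per set, as the bullet above describes; the manifest is merely the reproducibility backbone.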

2. Conduct a fundamental rights impact assessment considering aspects such as the potential negative impact on marginalized groups and the environment.

3. Provide contact information for users / other stakeholders.

4. Keep logs over the duration of the system’s life cycle to enable traceability and monitoring. The logs must include the time period of system usage, input data, and people involved in verifying results.
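As an illustration of what such a log record might contain, here is a minimal Python sketch covering the fields mentioned above: the period of use, a reference to the input data, and the people involved in verifying results. All field names are our own assumptions; the Act specifies what to capture, not a format.

```python
# Hypothetical traceability log record; field names are illustrative assumptions.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class UsageLogEntry:
    session_start: str   # ISO 8601 start of the period of use
    session_end: str     # ISO 8601 end of the period of use
    input_data_ref: str  # reference to the input data, e.g. a hash or URI
    verified_by: list    # persons involved in verifying the results

entry = UsageLogEntry(
    session_start=datetime(2025, 1, 15, 9, 0, tzinfo=timezone.utc).isoformat(),
    session_end=datetime(2025, 1, 15, 9, 30, tzinfo=timezone.utc).isoformat(),
    input_data_ref="sha256:placeholder",  # illustrative reference only
    verified_by=["reviewer_001"],
)
print(asdict(entry))
```

Storing entries like this in an append-only store over the system's lifecycle would support the traceability and monitoring purpose the obligation names.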

5. Implement transparency measures like providing user instructions for the system, information about characteristics, capabilities, and limitations of system performance, as well as output interpretation tools and a description of mechanisms included within the system.

6. Keep relevant documentation for ten years. That includes:

  • Detailed technical documentation (= audit documentation, according to Annex IV). The content may be streamlined for start-ups and SMEs in the future to decrease the administrative burden.
  • Quality management system documentation
  • Legislative interaction documentation
  • EU declaration of conformity (→ see Annex V)

7. Register your AI system and undergo a conformity assessment to obtain the declaration of conformity and the corresponding CE marking. This also applies to non-European providers placing AI on the EU market, and it implies full compliance with all other requirements. Take the necessary mitigation actions in case of non-conformity and inform the relevant authority.

8. Demonstrate conformity upon request in an audit. This includes detailed technical documentation and the latest logs of the AI system’s performance.

9. Implement human oversight during AI system use so that the people overseeing the system understand its capacities and limitations, can interpret its outputs correctly, and can intervene in its operation when necessary.

10. Implement cybersecurity measures to prevent attacks, ensure system robustness, and prevent failures. Uphold model accuracy and ensure that AI systems that continue to learn are designed to avoid biased outputs that could influence future operations.

If you are looking for an actionable AI governance framework that is aligned with the EU AI Act's obligations, request one here.

How can I start preparing already today?

Complying with the EU AI Act can sound daunting for high-risk companies.

We suggest setting up governance processes today. Organizations deploying high-risk applications will need to comply with the AI Act by 2026. Risk management, logging, and comprehensive documentation ensure that everything you develop today can still be used once the regulation is enforced. They also make your organization more transparent, which increases efficiency.

Minimize technical, reputational, and societal risks of high-risk AI systems and increase understanding within your organization and among your customers.

Find out how to take the first steps in AI governance in this article. We are happy to get in touch about helping you set up the right tools and processes to increase AI transparency and documentation quality in your organization. Contact us here.