
What is the EU AI Act?

The EU AI Act is the world's first legal framework designed to regulate Artificial Intelligence across the EU. It aims to ensure that AI systems are safe, respect existing laws on fundamental rights, and align with the EU's values.

The AI Act introduces a risk-based approach, distinguishing between four categories of AI systems (see the illustrative sketch after this list):

1. Unacceptable Risk

AI applications that are incompatible with EU values and fundamental rights. They will be prohibited.

2. High Risk

Highly regulated AI systems that could cause significant harm if they fail or are misused, or that serve as safety components.

3. Limited Risk

Applications that pose a risk of manipulation or deception. They are less heavily regulated but are subject to transparency obligations.

4. Minimal Risk

All remaining AI systems. While they face no mandatory requirements, transparency and ethical use are encouraged.
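For readers who think in code, here is a minimal sketch of the risk-based approach described above. The tier names and obligation summaries are paraphrased from the list; they are illustrative assumptions, not the Act's legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories of the EU AI Act, as summarized above."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative mapping of each tier to its regulatory treatment; the Act itself
# defines the exact obligations and which systems fall into which category.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
    RiskTier.HIGH: "Strict requirements: risk management, documentation, conformity assessment.",
    RiskTier.LIMITED: "Transparency obligations, e.g. disclosing that users interact with AI.",
    RiskTier.MINIMAL: "No mandatory requirements; transparency and ethical use encouraged.",
}

def obligations_for(tier: RiskTier) -> str:
    """Return the summarized regulatory treatment for a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```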

Learn how your application is classified

Unsure how you're impacted?

Learn how to navigate the EU AI Act with us in a 15-minute call.

Together, we will assess how your organization is affected by the EU AI Act and how your AI system is classified.

trail also offers an AI Governance copilot that helps you prepare for the AI Act.

trail is supported by leading organizations

What failing to comply with the EU AI Act means

Non-compliance with the requirements of the EU AI Act can result in substantial penalties (see the illustrative calculation after this list):

Up to €35 million or 7% of your global annual revenue for non-compliance with the rules on prohibited practices or with data requirements.

Up to €15 million or 3% of your global annual revenue for non-compliance with any other regulatory requirements, such as those for high-risk systems or general-purpose AI (GPAI) models.

Up to €7.5 million or 1.5% of your global annual revenue for providing incorrect, incomplete or misleading information about your AI systems.
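As a rough, purely illustrative calculation (assuming, as in comparable EU regulations, that the applicable maximum is the higher of the fixed amount and the revenue-based amount), the exposure per tier can be estimated as follows; the function and figures below are a sketch, not legal advice.

```python
# Maximum fines per violation tier: (fixed cap in EUR, share of global annual revenue).
# Assumption: the applicable maximum is the higher of the two amounts.
PENALTY_TIERS = {
    "prohibited_practices":  (35_000_000, 0.07),   # up to €35m or 7%
    "other_requirements":    (15_000_000, 0.03),   # up to €15m or 3%
    "incorrect_information": (7_500_000, 0.015),   # up to €7.5m or 1.5%
}

def max_penalty(tier: str, global_annual_revenue_eur: float) -> float:
    """Estimate the maximum possible fine for a violation tier (illustrative only)."""
    fixed_cap, revenue_share = PENALTY_TIERS[tier]
    return max(fixed_cap, revenue_share * global_annual_revenue_eur)

# Example: a company with €1bn in global annual revenue breaching the prohibited-practices rules.
print(f"€{max_penalty('prohibited_practices', 1_000_000_000):,.0f}")  # €70,000,000
```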

When does the EU AI Act become enforceable?

The AI Act is currently in its final stage and is expected to enter into force in early 2024. Six months after entry into force, organizations placing AI systems on the EU market must comply with the rules on prohibited practices. After 12 months, the obligations for general-purpose AI (GPAI) models become applicable. All other obligations, including those for high-risk AI systems, become applicable after 24 months.

Organizations are expected to meet the first compliance requirements of the EU AI Act at the end of 2024.

What high-risk system providers need to do

Providers of high-risk AI systems in particular, such as those in finance or medicine, need to fulfil strict requirements to demonstrate the trustworthiness of their systems.

Learn about all requirements
1. Quality & Risk Management

Includes risk mitigation and model testing along the lifecycle, data governance, maintaining detailed documentation, and keeping logs.

2. Conformity Assessment

Undergo self-assessments and third-party audits before placing the AI system on the market.

3. Registration in the EU Database

High-risk system providers need to register information about their applications in a publicly accessible database.

“[High-risk systems] must also be traceable and auditable, ensuring that appropriate documentation is kept, including of the data used to train the algorithm that would be key in ex post investigations.”

European Commission

Prepared for the EU AI Act

It takes time to meet these demanding requirements: traceable AI development and documentation, robust risk management, and technical audits. Gain a head start on the EU AI Act by preparing now.

With trail, you can avoid reputational damage and penalties of up to €35 million or 7% of your annual revenue.

This is how trail's AI Governance copilot helps you to comply with the EU AI Act:

Adapt the AI Act to your organization's workflows through documentation templates.

Start With AI Governance

Translate the regulatory requirements of the EU AI Act into actionable steps during AI development.

Assessment Call

Assess your risks and governance measures for each AI project before an official audit.

Sign Up Now