Understanding the EU AI Act

What you need to know as a business using AI

Like it or not, the EU AI Act is here. Starting in June 2024, Europe's landmark rules on artificial intelligence will potentially impact every AI use case in your organization if you're dealing with European customers or users.

Even though Europe hasn't released a single independent, competitive foundation model (no, Mistral doesn't count), it does have the world's first comprehensive AI regulation.

US innovates, EU regulates? Perhaps. But that’s a discussion for another day. For now, let’s dive into what this EU AI Act is, where it's coming from, and what you need to do.

(Disclaimer: This blog does not contain legal advice!)

Video: Impact of the AI Act

Here's a panel discussion I recently joined with Andrea Covic (Acting Head of the Representation of the European Commission in Croatia), moderated by Martina Silov (Executive Director at CroAI), on the impact of the EU AI Act at the 8th European AI Forum:

Understanding the AI Act

To understand the AI Act, we need to understand its origins.

What is AI anyway?

The definition of AI alone has sparked a great deal of debate. I witnessed this firsthand when we discussed our position in a task force within the German AI Association.

Finally, the EU settled on this:

'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;

If you think that this sounds like pretty much any computer right now, well then welcome to the exciting world of AI regulation!

But neglect can be costly. For non-compliance, the AI Act allows penalties of up to €35 million or 7% of a company's global annual revenue, whichever is higher.
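To make the "whichever is higher" mechanic concrete, here is a minimal sketch in Python (the function name and the revenue figure are made up for illustration):

    # Illustrative only: fines for the most serious violations are capped at
    # EUR 35 million or 7% of worldwide annual revenue, whichever is higher.
    def max_fine_eur(global_annual_revenue_eur: float) -> float:
        """Upper bound of a fine for the most serious violations."""
        return max(35_000_000, 0.07 * global_annual_revenue_eur)

    # For a company with EUR 2 billion revenue, 7% (EUR 140M) beats the flat cap:
    print(max_fine_eur(2_000_000_000))  # 140000000.0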

How it all started

In 2020, the European Commission aimed to protect EU citizens from harmful AI systems, similar to how GDPR protects against data misuse.

Just to recap: This was two years before ChatGPT. Visionary political leadership? Not really.

In fact, Generative AI caught regulators a bit by surprise and they had to "hack" it into the existing regulatory logic (which is a major source of confusion, as we'll see below).

The main idea was (and still is) to link how AI is used to the level of risk it poses to EU citizens, and then decide if it needs to be regulated.

The following four risk levels apply:

Key Risk Levels

1) Unacceptable risk / Prohibited AI systems

Certain AI systems are forbidden, particularly those that:

  • Manipulate individuals or exploit their vulnerabilities

  • Assign scores to individuals based on behavior (social scoring)

  • Use real-time biometric identification in public spaces for law enforcement

2) High-Risk AI Systems (HRAIS)

High-risk AI systems include those used in critical infrastructures, educational accreditation, product safety, hiring, migration, democratic processes, and law enforcement. These systems must comply with:

  • Risk Management: Implement thorough risk assessment and mitigation.

  • Data Integrity: Use high-quality datasets to prevent discriminatory outcomes.

  • Transparency and Traceability: Maintain detailed logs and documentation.

  • Oversight and Security: Provide clear information to users and ensure robust security measures.

3) Limited risk

Limited risk AI systems must ensure transparency, such as:

  • Human Notification: Inform people when they are interacting with an AI system (see the sketch after this list).

  • Content Identification: Clearly label AI-generated content, especially in significant matters.
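
To show what "Human Notification" can look like in practice, here is a minimal sketch of a chatbot wrapper that prepends an AI disclosure; the function and message are illustrative, not taken from any specific framework:

    AI_DISCLOSURE = "Note: You are chatting with an AI assistant, not a human."

    def respond(user_message: str, generate_reply) -> str:
        """Wrap any reply generator so users are told they talk to an AI."""
        reply = generate_reply(user_message)
        # Transparency obligation: make the AI nature of the chat explicit.
        return f"{AI_DISCLOSURE}\n\n{reply}"

    # Usage with a placeholder reply generator:
    print(respond("What are your opening hours?", lambda msg: "We're open 9 to 5."))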

4) Minimal or no risk

No regulation applies to minimal or no-risk applications, like video games or spam filters – likely the vast majority of all business use cases.

That’s it?

And that would have been the whole AI Act, if it weren't for a new little technology called Generative AI, which took off in late 2022 and prompted the EU to add another item to the "should we regulate this?" checklist:

General Purpose AI (aka “We didn’t see that coming”)

General Purpose AI models (GPAI) include AI technology like Generative AI that can’t be tied to a single use case. You can use ChatGPT to write a poem or build a bomb (if you hack it).

Providers of GPAI models must:

  • Publish technical documentation

  • Comply with EU copyright law

  • Provide a summary of the training data

Businesses using GPAI must map their use cases to the appropriate risk level.

Application

The AI Act applies to companies involved in the development, use, distribution, or import of AI systems in the EU, with some exceptions like scientific research. And military use. (I'll spare you my comments.)

Most rules apply after a two-year grace period, with staggered deadlines for specific areas (see the date sketch after this list):

  • 6 months for prohibited AI systems

  • 12 months for GPAI

  • 24 months for high-risk AI systems under Annex III; 36 months for those under Annex II
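
If you want to turn these grace periods into calendar dates, here is a small sketch; it assumes the Act entered into force on 1 August 2024, so treat the resulting dates as illustrative rather than authoritative:

    from datetime import date

    ENTRY_INTO_FORCE = date(2024, 8, 1)  # assumption; check the Official Journal

    def add_months(d: date, months: int) -> date:
        """Shift a date by whole months (safe here since the day is the 1st)."""
        total = d.month - 1 + months
        return date(d.year + total // 12, total % 12 + 1, d.day)

    GRACE_PERIODS_MONTHS = {
        "Prohibited AI systems": 6,
        "GPAI obligations": 12,
        "High-risk AI systems (Annex III)": 24,
        "High-risk AI systems (Annex II)": 36,
    }

    for rule, months in GRACE_PERIODS_MONTHS.items():
        print(f"{rule}: applies from {add_months(ENTRY_INTO_FORCE, months)}")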

What You Should Do Now

First, don't panic! If you're using AI in day-to-day business processes, it's likely minimal risk and not heavily regulated. Always be transparent about AI use in interactions.

Consider starting a quick audit. The appliedAI institute offers a practical flowchart to navigate the EU AI Act. You can download it here.

Let's do a quick walkthrough!

Step 1: Do I have to do something? (Applicability Check)

Determine if your AI system falls under the AI Act. Know what kind of AI you have and how you're using it.

Step 2: What do I have to do? (Obligations Mapping)

Identify the rules that apply based on the AI system's risk class and your organization's role (provider or deployer). Special rules apply to providers of GPAI models.
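As a thought aid (emphatically not a legal tool), the mapping in Step 2 can be pictured as a small decision flow. The categories and keywords below are simplified stand-ins for the Act's actual annexes:

    # Simplified, illustrative decision flow -- not legal advice.
    PROHIBITED = {"social scoring", "subliminal manipulation",
                  "real-time biometric ID in public spaces"}
    HIGH_RISK = {"hiring", "credit scoring", "critical infrastructure",
                 "education", "law enforcement", "migration"}
    LIMITED_RISK = {"customer chatbot", "ai-generated content"}

    def classify(use_case: str) -> str:
        """Map a use case to one of the AI Act's four risk tiers (simplified)."""
        if use_case in PROHIBITED:
            return "unacceptable risk: prohibited"
        if use_case in HIGH_RISK:
            return "high risk: full compliance obligations"
        if use_case in LIMITED_RISK:
            return "limited risk: transparency obligations"
        return "minimal risk: no specific obligations"

    print(classify("hiring"))           # high risk: full compliance obligations
    print(classify("customer chatbot")) # limited risk: transparency obligations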

Step 3: How do I do it? (Operationalizing Obligations)

Review the documentation requirements. These generally don't apply to limited- or minimal-risk use cases, but consider an "AI governance" approach if your AI use is maturing.

Step 4: Look, I did it! (Demonstrating Compliance)

Providers of high-risk AI systems must complete a conformity assessment, affix a CE marking (ha!), and register in an EU database before bringing their system to market. Deployers must also meet specific requirements before using high-risk AI systems.

Step 5: Still doing it (Maintaining Compliance)

Ensure every AI system in use continues to meet all required standards. Actions may be triggered by user-reported malfunctions or significant changes in system use.

Practical Examples

Here are some use cases* to illustrate what the EU AI Act regulates:

  • AI job interview video analysis → high risk → regulated

  • AI predicting staff turnover → high risk → regulated

  • AI job performance monitoring → high risk → regulated

  • AI search for candidates → high risk → regulated

  • AI quality control of manufacturing wires → minimal risk → not regulated

  • AI predictive maintenance of machinery → high risk → regulated

  • AI demand forecast → minimal risk → not regulated

  • AI customer support chatbot → limited risk → transparency obligations only

  • AI credit scoring for natural persons → high risk → regulated

Conclusion

This overview provides a high-level understanding of the EU AI Act. Dive deeper as needed using the links above.

Bottom line: If you're starting with AI, focus on minimal or limited risk use cases.

Turns out that's what I recommend all the time anyway.

If you need help on this journey, reach out!

See you next Friday!

Tobias
