Technology and law are becoming increasingly intertwined – there’s no escaping it. For those of us working in tech, reading 200+ pages of legal text can feel like a chore – something best left to the experts. But sometimes, those experts in governance, security, or legal set rules that don’t quite align with reality. That’s when it’s up to us as tech leaders to step in.
To start making sense of the EU AI Act, you need to know that it’s split into two main parts. First up, there are the Recitals – 180 of them! – which outline the reasons behind the law (the ‘why’ of it). Then come the Articles, which lay out the actual rules and their enforcement (the ‘what’ and ‘how’).
Let’s tackle the Recitals first. Here’s a condensed version to save you some time; I’ll cover the Articles in a follow-up post.
A one-paragraph summary for the truly impatient
There are three things to know about the EU AI Act if you’re short on time:
- The first is that the EU already has lots of laws, grounded in liberal, western democracy – everything flows from the idea of upholding human rights and dignity.
- It follows that this law has a lot of pointers to those laws. Ultimately, the AI Act takes a position that’s not very different from what you’d expect for any product that puts those core things at risk: some rules, some guidelines, a risk model, a certification process, and some bureaucracy.
- Once you’ve got that down, you’ll understand that your responsibility is to think hard about your product, what it does, and how it might materially affect – or be perceived to affect – the first point. At least at this stage of discovery, you are the expert.
If you’re a bit more curious, then:
AI: huge upsides, some risk (1–5)
The recitals kick off by saying AI is amazing—useful for health, environment, and economy—but it can also be harmful if misused.
Fundamental rights are king (6, 7, 27)
The Act’s foundation is the EU Charter of Fundamental Rights. AI must uphold human dignity, equality, democracy, and all those good things. It helps to know they’re thinking about seven key principles – human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability – even if you can’t recite them from memory.
Common EU rules & consistency with existing rules (7–11, 63, 64, 119)
The objective is a single EU-wide approach. Unity fosters trust and helps the internal market for AI thrive. Established rules like the GDPR’s privacy principles still apply, and common sense goes a long way where things overlap (like the DSA).
Defining AI (12)
There’s a broad definition covering machine learning, logic-based systems, and anything performing automated inference. A line is drawn between AI and simpler rule-based software.
Deployers, Providers & Reps (13)
This matters when it comes to obligations:
- Provider: The one who puts the AI on the market.
- Deployer: The one using it in real life.
- EU Rep: Non-EU providers need an authorized representative established in the EU.
… skipping (14–23)
Skipping these – mostly detailed definitions (biometrics, publicly accessible spaces, and the like) covered elsewhere in this summary.
Military & R&D (24–25)
Military/defence uses fall outside scope (unless you commercialise it). Pure R&D is fine unless you actually market or deploy the system.
Risk levels (26)
The recitals outline a risk-based approach:
- Unacceptable risk = outright prohibited
- High risk = strict requirements
- Lower risk = lighter rules
Prohibited AI (28–33, 42, 44)
Ban hammer: manipulative/exploitative AI, social scoring, mass real-time surveillance in public, and invasive emotion sensing in work/education (with narrow exceptions for urgent cases). Basically, no Minority Report stuff. Exemptions, of course – including anti-terrorism (wink wink).
… skip (34–39)
skipping these too (see Prohibited above and Enforcement below)
Ireland & Denmark (40, 41)
They’ve got EU treaty opt-outs for justice stuff, so parts of the AI Act don’t apply. Legal quirks. ¯\\_(ツ)_/¯
High-Risk AI (47–58)
AI systems that can seriously affect people—healthcare, hiring, education, credit, policing, immigration—get labelled “high-risk” with added safeguards. Also, think about it: if we’re training on historical data, and history has not been kind to some, then let’s make sure we don’t carry that forward.
Obligations for high-risk AI (65–96)
- risk management and bias mitigation
- high-quality datasets
- documentation & logs
- human oversight
- accuracy, robustness
- cybersecurity
- post-market monitoring to fix issues
- they don’t say it outright, but think ISO 31000, 27001, and 9001
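To make the “documentation & logs” and “human oversight” bullets above a bit more concrete, here’s a minimal sketch of what that could look like in code. Everything in it – the field names, the thresholds, the scoring framing – is my own illustrative assumption, not anything the Act prescribes: automated decisions get written to a structured audit log, and low-confidence cases get routed to a human reviewer.

```python
# A minimal, illustrative sketch (my own assumptions, not from the Act) of what
# "documentation & logs" plus "human oversight" could look like in practice:
# every automated decision is written to an audit log, and low-confidence
# cases are routed to a human reviewer.
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

# Illustrative threshold: below this score, a human must make the call.
REVIEW_THRESHOLD = 0.6


@dataclass
class Decision:
    subject_id: str      # pseudonymous identifier, never raw personal data
    model_version: str   # which model produced the output
    score: float         # the model output the decision was based on
    outcome: str         # "approved", "rejected", or "needs_human_review"
    timestamp: str       # when the decision was made (UTC)


def decide(subject_id: str, score: float, model_version: str = "risk-model-0.1") -> Decision:
    """Apply the model score, defer to a human when confidence is low, and log it."""
    if score < REVIEW_THRESHOLD:
        outcome = "needs_human_review"   # human oversight kicks in here
    elif score >= 0.8:
        outcome = "approved"
    else:
        outcome = "rejected"

    decision = Decision(
        subject_id=subject_id,
        model_version=model_version,
        score=score,
        outcome=outcome,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # Structured, append-only record: the kind of trace post-market monitoring needs.
    audit_log.info(json.dumps(asdict(decision)))
    return decision


if __name__ == "__main__":
    decide("applicant-123", score=0.91)   # logged and auto-approved
    decide("applicant-456", score=0.42)   # logged and routed to a human reviewer
```

The specifics don’t matter; what matters is that every decision leaves a structured trace you can hand to an auditor later, and that there’s a defined point where a human takes over.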
General-Purpose AI (97–118)
Think big, versatile models. Providers must ensure transparency, watch for misuse, and respect copyright. If it’s open-source and doesn’t pose a “systemic risk”, the obligations are lighter.
CE mark & conformity (122–131, 46)
High-risk systems need a formal check (conformity assessment) before they carry the EU’s “CE” compliance mark. See the Blue Guide.
Transparency for bots & deepfakes (132–137)
If AI mimics humans (like chatbots) or generates super-realistic images/video, it must be disclosed as AI. The recitals emphasize labelling for the public’s awareness.
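If you want a mental model of what that could mean for, say, a chatbot, here’s a toy sketch – with made-up function and field names, nothing prescribed by the Act – where the visible reply carries a disclosure prefix and a machine-readable marker rides along with it:

```python
# A toy sketch of the disclosure idea: the human-readable reply and a
# machine-readable "AI-generated" marker always travel together. The function
# name, field names, and label are all illustrative assumptions, not anything
# the Act prescribes.
import json
from datetime import datetime, timezone


def label_response(text: str, model_name: str) -> dict:
    """Wrap chatbot output with an explicit AI-generated disclosure."""
    return {
        "reply": f"[AI assistant] {text}",   # visible disclosure for the user
        "metadata": {
            "ai_generated": True,            # machine-readable marker
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }


if __name__ == "__main__":
    print(json.dumps(label_response("Your parcel ships tomorrow.", "support-bot-0.1"), indent=2))
```

The visible prefix handles the human-facing disclosure; the metadata is the sort of machine-readable marking the recitals gesture at for generated content.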
AI sandboxes (138–142)
Member States should set up regulatory sandboxes to help developers and small businesses experiment safely. This is more about supervised support than actual, physical test environments.
SMEs & startups (143–146)
Recitals highlight special support and simplified processes for smaller players. The EU wants them to innovate, not be deterred.
Enforcement & fines (147–157, 168–172)
National market surveillance authorities (plus an EU-level AI Office) check compliance. Violations can trigger hefty fines. Whistleblowers are protected.
Big finish: timeline & reviews (173–180)
Some bans kick in early 2025; most obligations start mid-2026. The Commission will review and adjust things as tech evolves.
In the name of the father, the son, and the EU legal formalities… Amen.
Reminder: These summaries refer exclusively to the recitals (the “whereas” statements) of the EU AI Act. The actual binding rules are set out in the main Articles of the Regulation.