First AI Kit: The Easy Way to Prepare for the EU AI Act
- What is the AI documentation duty?
- What SMEs should actually do
- The Risk Categories
- Minimal Risk (✅ allowed, no real obligations)
- Limited Risk (⚠️ transparency required)
- High Risk (🟠 strict requirements)
- Prohibited (⛔ banned outright)
⚡ TL;DR
The EU AI Act is the EU’s first comprehensive law regulating Artificial Intelligence. Finalized in 2024, its rules roll out in stages over 2025–2026.
- The EU AI Act applies to all businesses, even small ones.
- You don’t need heavy compliance paperwork, just a simple AI tool register.
- Keep track of: what tools you use, why, who’s responsible, how you oversee them.
- For most SMEs, tools like ChatGPT or Copilot = Limited Risk → only need transparency, human review, and a basic log.
- If you ever use AI in hiring, healthcare, finance, or education → it becomes High Risk → stricter rules apply.
- Stay safe: no sensitive data in AI, always keep a human in the loop, review your log regularly.
What is the AI documentation duty?
The new EU AI Act (Regulation (EU) 2024/1689) requires every business in the EU that uses AI to keep basic records of that use.
Think of it as a logbook for your AI tools:
- Which tools you use (e.g. ChatGPT, Copilot)
- What you use them for
- Who is responsible for them in your team
- How you make sure a human is still in control
For most SMEs, this isn’t a thick compliance binder. It’s a simple table you update now and then — just enough to show that you know what you’re using and that you’re using it responsibly.
(Reference: EU AI Act, Art. 13 “Transparency and provision of information to deployers”; Art. 50 “Transparency obligations for providers and deployers of certain AI systems”)
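If you’d rather keep the logbook somewhere your team can version and review, the same four fields can live in a tiny script. Here is a minimal sketch of what that could look like — the tool entries, field names, and the ai_register.csv filename are all illustrative; the Act does not prescribe any particular format.

```python
import csv

# Illustrative AI tool register for a small business.
# The four fields mirror the logbook items above; none of this
# layout is mandated by the AI Act itself.
REGISTER = [
    {
        "tool": "ChatGPT",
        "used_for": "Drafting emails, notes, and blog posts",
        "owner": "Marketing lead",
        "human_oversight": "Every output is reviewed before it goes out",
    },
    {
        "tool": "Copilot",
        "used_for": "Code suggestions in the IDE",
        "owner": "Engineering lead",
        "human_oversight": "Suggestions are checked in code review",
    },
]

# Export to CSV so the register can also live in a spreadsheet.
with open("ai_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(REGISTER[0].keys()))
    writer.writeheader()
    writer.writerows(REGISTER)
```

Four columns, updated now and then, really is all this needs to be.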
What SMEs should actually do
- Make an AI tool list.
  - Write down what AI tools your team is using (e.g. ChatGPT, Copilot).
- Classify them.
  - Most will be Minimal or Limited Risk (a rough screening sketch follows this list).
  - If you see anything that looks like High Risk, stop and get advice.
- Add basic rules.
  - Tell staff when they’re using AI, and when to double-check outputs.
  - Be clear with customers if an AI is answering them.
- Keep it simple.
  - A one-page AI use policy + a spreadsheet/Notion list of tools is enough for now.
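To make the “classify them” step concrete, here is a hedged first-pass screen. The keyword list and the looks_high_risk helper are simplifications invented for illustration — a match means “pause and get advice”, never a legal classification.

```python
# First-pass screen for the "classify them" step. The keywords are a
# crude stand-in for the high-risk domains described below (hiring,
# credit, healthcare, education); a hit is a prompt to seek advice,
# not a determination under the Act.
HIGH_RISK_HINTS = (
    "hiring", "recruit", "cv", "credit", "loan",
    "diagnos", "treatment", "grading", "student",
)

def looks_high_risk(use_description: str) -> bool:
    """Return True if a declared use mentions a high-risk domain."""
    text = use_description.lower()
    return any(hint in text for hint in HIGH_RISK_HINTS)

# Example run against two register entries ("AcmeScreener" is made up):
tools = [
    ("ChatGPT", "Drafting emails and blog posts"),
    ("AcmeScreener", "Ranking CVs for hiring decisions"),
]
for name, use in tools:
    verdict = "HIGH RISK? get advice" if looks_high_risk(use) else "likely minimal/limited"
    print(f"{name}: {verdict}")
```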
The Risk Categories
The Act sorts AI systems into four buckets:
Minimal Risk (✅ allowed, no real obligations)
AI that works in the background: you don’t even think about it.
- Spam filters in your email inbox.
- Excel autofill or grammar suggestions.
- Recommendation engines (“You may also like…”) on e-commerce platforms.
👉 You don’t need to do anything special here.
Limited Risk (⚠️ transparency required)
AI that creates content or interacts with people.
- ChatGPT or Copilot drafting emails, notes, or posts.
- Canva Magic Write or Jasper for social media captions.
- Customer-facing chatbot answering FAQs on your website.
- AI design/image tools like Midjourney or DALL-E.
👉 Safe to use, but you must tell people when they interact with AI, and keep a human reviewing outputs.
High Risk (🟠 strict requirements)
AI that can impact people’s rights, careers, or well-being.
- HR tools that screen or rank CVs.
- Loan/credit scoring software that decides if a customer gets financing.
- Healthcare tools that suggest diagnoses or treatment.
- AI in education that grades or assesses students.
👉 If you’re in these areas, you need real governance: documentation, oversight, testing. Most small businesses won’t touch this category.
Prohibited (⛔ banned outright)
Uses of AI that the EU considers too harmful to allow.
- Social scoring (ranking citizens or employees by behaviour).
- Real-time remote biometric identification (e.g. live face recognition) in public spaces.
- Manipulative AI that exploits vulnerabilities (e.g. AI toys pushing kids to buy things).
👉 These are simply not allowed — for anyone.