📖 EU AI Act – What SMEs Need to Know
The EU AI Act is Europe’s new law regulating Artificial Intelligence. It entered into force in 2024, and its obligations phase in gradually between 2025 and 2027. Its goal is to make AI safe, transparent, and trustworthy.
🔹 Why it matters for your company
- It applies to any company that provides or uses AI systems in the EU, including providers based outside the EU whose systems reach the EU market.
- Even if you only use AI tools (rather than develop them), you count as a “deployer” and must follow certain rules.
- Fines for non-compliance can be steep (up to €35 million or 7% of global annual turnover for the most serious violations), so it’s worth preparing early.
🔹 The Risk Categories
The AI Act groups systems by risk level:
- Minimal Risk (✅ allowed, no obligations)
  - Examples: spam filters, AI in video games.
  - No obligations under the Act; common-sense transparency is still good practice.
- Limited Risk (⚠️ transparency rules)
  - Examples: chatbots (including FAQ bots), content generators (like ChatGPT).
  - Users must be told they are dealing with AI. Human oversight is recommended.
- High Risk (🟠 strict requirements)
  - Examples: AI in recruitment, credit scoring, healthcare, education.
  - Requires a risk management system, human oversight, data governance and documentation, logging, and bias checks.
- Prohibited (⛔ banned)
  - Examples: social scoring, real-time remote biometric identification in public spaces, manipulative AI that exploits vulnerabilities.
🔹 What SMEs should do now
- List your AI tools and use cases (→ use the Notion template provided).
- Classify each tool by risk level (→ Minimal / Limited / High / Prohibited).
- Check the obligations for Limited & High Risk tools:
  - Transparency: disclose AI use to employees, customers, or partners.
  - Oversight: keep a human in the loop.
  - Documentation: log what tools are used and why.
- Update your policies:
  - Add AI rules to your Acceptable Use Policy.
  - Keep a Tool Register to track approvals (a lightweight code-based sketch follows this list).
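If you want something even lighter than the Notion template, the Tool Register can live in a short script. The sketch below is illustrative only, assuming a Python dataclass approach; the field names (disclosure_note, human_oversight) and the sample entry are our own assumptions, not terms prescribed by the AI Act.

```python
# Illustrative AI Tool Register sketch; fields and sample data are
# assumptions for demonstration, not requirements of the AI Act.
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"


@dataclass
class ToolRecord:
    name: str
    use_case: str
    risk_level: RiskLevel
    approved: bool
    disclosure_note: str   # how/where AI use is disclosed
    human_oversight: str   # who reviews the tool's output


register = [
    ToolRecord(
        name="ChatGPT",
        use_case="Drafting marketing copy",
        risk_level=RiskLevel.LIMITED,
        approved=True,
        disclosure_note="AI-assisted content labeled per internal policy",
        human_oversight="Marketing lead reviews output before publication",
    ),
]

# Flag anything that needs attention before use.
for record in register:
    if record.risk_level is RiskLevel.PROHIBITED:
        print(f"⛔ {record.name}: prohibited use, remove immediately")
    elif record.risk_level is RiskLevel.HIGH and not record.approved:
        print(f"🟠 {record.name}: high risk, needs compliance review")
```

A spreadsheet works just as well; the point is that every tool has a named risk level, an approval status, and a human owner.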
🔹 Quick takeaway
👉 Most SMEs will use “Limited Risk” tools (like ChatGPT or Copilot).
You mainly need transparency, human oversight, and safe data handling.
👉 If you plan to use AI in HR, healthcare, or decision-making about people, treat it as High Risk and prepare for stricter obligations.
👉 Avoid any Prohibited uses altogether.