The EU AI Act will apply to every company in the EU that uses AI — not just big tech. This checklist helps you take the first steps toward compliance. Duplicate it, fill it in, and start building clarity around your AI use.
Not sure about the terms used? Check out the Glossary:
Step 1: List Your AI Tools
Inventory all AI tools in use (ChatGPT, Copilot, HR AI, chatbots, etc.).
Note which teams or departments use them.
Step 2: Classify by Risk Category
Minimal Risk → spam filters, games, simple tools.
Limited Risk → chatbots, content generators, translation tools.
High Risk → HR, healthcare, credit scoring, education.
Prohibited → social scoring, biometric surveillance, manipulative AI.
Step 3: Check Transparency
Do employees, customers, or partners know AI is being used?
Are there clear disclaimers or onboarding notes?
Step 4: Assign Human Oversight
Does each tool have a person responsible for reviewing/approving outputs?
Is the oversight process documented?
Step 5: Review Data Handling
Is sensitive or personal data use identified?
Are GDPR/data protection policies applied?
Step 6: Track Compliance Actions
Status: Approved / Under Review / Not Allowed.
Last review date recorded.
Next review scheduled.
This checklist should get you started. For a full compliance system with database, risk categorization, examples, and tracking, check out my paid template:
EU AI Act: Tool, Risk & Compliance Tracker (Notion Template + Guideline):