EU AI Act High-Risk AI: 8 Key Compliance Requirements
Checklist: Key Requirements for Risk Management, Data Governance, Transparency and Incident Reporting
The European AI Act, Regulation (EU) 2024/1689, introduces the first comprehensive legal framework for artificial intelligence in the EU. It takes a risk‑based approach and defines four categories of AI systems:
- Prohibited AI (e.g., manipulative techniques, social scoring, biometric categorisation using sensitive data)
- High‑risk AI: the most strictly regulated category, covering systems listed in Annex I and Annex III
- Limited‑risk AI: requires transparency towards users
- Minimal‑risk AI: no specific obligations
High‑risk AI systems must meet strict technical, organisational, and documentation requirements before being placed on the market and throughout their entire lifecycle.
Penalties for non‑compliance can reach up to €35 million or 7% of global annual turnover, whichever is higher (the top tier applies to prohibited practices; lower tiers apply to other violations).
High-Risk AI Obligations Under the EU AI Act
High-Risk Classification
First, determine classification: an AI system is high-risk if it is a safety component of a regulated product (Annex I, e.g., medical devices, vehicles) or falls under a listed use case affecting fundamental rights (Annex III, e.g., hiring, credit scoring, biometrics).
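A machine-readable AI inventory makes the Annex I/III screening repeatable. The sketch below is purely illustrative (all names are hypothetical, and the screening logic simplifies the legal test; it does not replace legal review):

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One row of an AI inventory used for Annex I / Annex III screening."""
    name: str
    purpose: str
    annex_i_product: bool = False      # safety component of a regulated product
    annex_iii_use_case: bool = False   # listed use case (hiring, credit, biometrics, ...)
    user_facing: bool = False          # interacts directly with natural persons

    def classify(self) -> RiskCategory:
        # Simplified screening logic; a lawyer still signs off on edge cases.
        if self.annex_i_product or self.annex_iii_use_case:
            return RiskCategory.HIGH
        if self.user_facing:
            return RiskCategory.LIMITED
        return RiskCategory.MINIMAL

record = AISystemRecord(name="CV screener", purpose="rank job applicants",
                        annex_iii_use_case=True)
print(record.classify())  # RiskCategory.HIGH
```

Keeping the inventory as data (rather than a slide deck) lets you re-run classification whenever a system's purpose or deployment context changes.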
1. Risk Management System (Article 9)
A continuous, iterative process: risk identification before development, assessment during development, performance testing, fundamental-rights impact evaluation, post-deployment monitoring. Goal: safety in real-world conditions.
2. Data Quality (Article 10)
High-quality, representative, unbiased data: discrimination checks, bias mitigation procedures, accuracy, and relevance. Critical for hiring, healthcare.
3. Technical Documentation (Article 11)
Detailed records: functioning, training, data, limitations, maintenance, changes, tests, incidents. Enables regulatory review.
4. Transparency (Article 13)
Inform users: AI interaction, decision-making, limitations, data used, instructions. Builds trust and prevents misinterpretation.
5. Human Oversight (Article 14)
Design for intervention: qualified person can override decisions, clear protocols. Prevents harmful autonomous outcomes.
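A common design pattern for such intervention is a human-in-the-loop gate: decisions below a confidence threshold (or above an impact threshold) are escalated to a qualified reviewer instead of executing automatically. A hypothetical sketch (function names and threshold are illustrative):

```python
# Human-in-the-loop gate: low-confidence decisions are routed to a
# reviewer who can override the model instead of auto-executing.
def decide(model_score: float, threshold: float = 0.9):
    """Return an action label plus the score that produced it."""
    if model_score >= threshold:
        return ("auto_approve", model_score)
    return ("escalate_to_human", model_score)

print(decide(0.95))  # ('auto_approve', 0.95)
print(decide(0.40))  # ('escalate_to_human', 0.4)
```

The override protocol itself (who reviews, how fast, how overrides are logged) belongs in the Article 11 documentation.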
6. Robustness and Accuracy (Article 15)
Technically robust, error-resistant, secure: tests, stress tests, evaluation, continuous monitoring.
7. Monitoring and Incidents (Article 73)
Continuous performance/anomaly monitoring. Serious incidents (death, harm to health or safety, fundamental-rights breaches): report to authorities immediately, and no later than 15 days after awareness (10 days for a death, 2 days for widespread infringement).
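Because the reporting deadlines differ by incident type, it helps to compute them mechanically from the moment of awareness. A minimal sketch (the deadline table reflects Article 73's 15/10/2-day limits; the incident-type labels are invented for illustration):

```python
from datetime import datetime, timedelta

# Article 73 reporting deadlines, counted from awareness of the incident:
# 15 days in general, 10 days for a death, 2 days for widespread infringement.
DEADLINES = {
    "general": timedelta(days=15),
    "death": timedelta(days=10),
    "widespread_infringement": timedelta(days=2),
}

def report_deadline(awareness: datetime, incident_type: str = "general") -> datetime:
    """Latest date by which the incident report must reach the authority."""
    return awareness + DEADLINES[incident_type]

aware = datetime(2025, 3, 1, 9, 0)
print(report_deadline(aware))           # 2025-03-16 09:00:00
print(report_deadline(aware, "death"))  # 2025-03-11 09:00:00
```

Wiring this into the monitoring dashboard turns a legal deadline into an automated alert rather than a manual calendar entry.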
8. Pre-Market Conformity (Article 43)
Before market release: conformity assessment (Article 43), EU declaration of conformity and CE marking (Articles 47-48), registration in the EU database (Article 49).
Checklist: High-Risk AI Compliance
1. Classification
[ ] AI inventory + Annex I/III classification
[ ] Documented users/impacts
2. Risks
[ ] Risk framework + review meetings
[ ] Documented assessments
3. Data
[ ] Quality/bias audit
[ ] Fairness metrics + sources
4. Transparency/Oversight
[ ] User info/instructions
[ ] Human-in-the-loop + explainability
5. Monitoring/Documentation
[ ] Dashboard + incident plan (Art. 73 deadlines: 15/10/2 days)
[ ] Technical docs archive + tests
Implementation Steps
1. Classify systems (Annex I/III).
2. RMS (Art. 9): framework + assessments.
3. Data (Art. 10): audit + mitigation.
4. Oversight (Art. 13-14): instructions + intervention.
5. Monitoring (Arts. 15, 72-73): dashboard + incident reporting.
Access the full, legally binding text of Regulation (EU) 2024/1689 directly from the Publications Office of the European Union:
👉 Download PDF: https://op.europa.eu/en/publication-detail/-/publication/d79f3e5d-41bc-11f0-b9f2-01aa75ed71a1/language-en
Explore related pillars
👉Digital Safety Guide - A framework for privacy and safe digital ecosystems.
👉Neurotechnology Guide - A framework for neural data, BCI technologies, and neuromorphic computing.

