Ethics

EU AI Act High-Risk AI: 8 Key Compliance Requirements

Mary, NexSynaptic Founder

Checklist: Key Requirements for Risk Management, Data Governance, Transparency and Incident Reporting

 

The European AI Act, Regulation (EU) 2024/1689, introduces the first comprehensive legal framework for artificial intelligence in the EU. It takes a risk-based approach and defines four categories of AI systems:

  • Prohibited AI: e.g., manipulative techniques, social scoring, biometric categorisation using sensitive data
  • High-risk AI: the most strictly regulated category, covering systems listed in Annex I and Annex III
  • Limited-risk AI: requires transparency towards users
  • Minimal-risk AI: no specific obligations

High‑risk AI systems must meet strict technical, organisational, and documentation requirements before being placed on the market and throughout their entire lifecycle.

Penalties for non‑compliance can reach up to €35 million or 7% of global annual turnover!

The EU AI Act applies gradually from 2 February 2025; the key obligations for high-risk AI systems begin to apply on 2 August 2026, with full application of the Regulation expected by 2 August 2027.
 
 
 

 High-Risk AI Obligations Under the EU AI Act 

 

The European AI Act (Regulation 2024/1689) introduces the strictest regulatory framework for high-risk AI systems: those impacting fundamental rights, safety, employment, education, healthcare, or critical infrastructure (Annex I and III).
Non-compliance carries fines of up to €35 million or 7% of global annual turnover.
 
Below are the 8 key obligations (primarily Articles 9-15), a practical checklist, and implementation steps.



High-Risk Classification

 

First, identify your systems: an AI system is high-risk if it is part of a regulated product (Annex I, e.g., medical devices, autonomous vehicles) or affects fundamental rights (Annex III, e.g., hiring, credit scoring, biometrics).
Create an internal inventory and document the classification criteria.

1. Risk Management System (Article 9)
Continuous, iterative process: risk identification before development, assessment during development, performance testing, impact on rights, post-deployment monitoring. Goal: safety in real-world conditions.

2. Data Quality (Article 10)
High-quality, representative, unbiased data: discrimination checks, bias mitigation procedures, accuracy, and relevance. Critical for hiring, healthcare.

3. Technical Documentation (Article 11)
Detailed records: functioning, training, data, limitations, maintenance, changes, tests, incidents. Enables regulatory review.

4. Transparency (Article 13)
Inform users: AI interaction, decision-making, limitations, data used, instructions. Builds trust and prevents misinterpretation.

5. Human Oversight (Article 14)
Design for intervention: qualified person can override decisions, clear protocols. Prevents harmful autonomous outcomes.

6. Robustness and Accuracy (Article 15)
Technically robust, error-resistant, secure: tests, stress tests, evaluation, continuous monitoring.

7. Monitoring and Incidents (Article 73)
Continuous performance and anomaly monitoring. Serious incidents (harm to health, safety, or fundamental rights) must be reported to the market surveillance authority immediately, and no later than 15 days after becoming aware, with shorter deadlines (2 or 10 days) for the most serious cases, followed by a complete report.

8. Pre-Market Conformity (Article 43)
Before market release: conformity assessment, technical verification, registration in the EU database.



Checklist: High-Risk AI Compliance



1. Classification
[ ] AI inventory + Annex I/III classification  
[ ] Documented users/impacts  

2. Risks
[ ] Risk framework + review meetings  
[ ] Documented assessments  

3. Data
[ ] Quality/bias audit  
[ ] Fairness metrics + sources  

4. Transparency/Oversight  
[ ] User info/instructions  
[ ] Human-in-the-loop + explainability  

5. Monitoring/Documentation 
[ ] Dashboard + incident plan (Art. 73 deadlines)  
[ ] Technical docs archive + tests  



Implementation Steps

 

1. Classify systems (Annex I/III).  
2. RMS (Art. 9): framework + assessments.  
3. Data (Art. 10): audit + mitigation.  
4. Oversight (Art. 13-14): instructions + intervention.  
5. Monitoring (Art. 73/15): dashboard + reporting. 
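For the monitoring and reporting step, a small sketch of deadline tracking can help an incident plan. The day counts below are assumptions based on the tiered Article 73 deadlines; verify them against the Regulation's text before relying on them:

```python
from datetime import date, timedelta

# Assumed Art. 73 reporting tiers (days after becoming aware of the incident);
# the legal text governs, these values are illustrative.
REPORTING_DAYS = {
    "widespread_infringement": 2,
    "death": 10,
    "serious_incident": 15,
}

def reporting_deadline(aware_on: date, incident_type: str) -> date:
    """Latest date to notify the market surveillance authority."""
    return aware_on + timedelta(days=REPORTING_DAYS[incident_type])

# Example: a serious incident becomes known on 3 August 2026
print(reporting_deadline(date(2026, 8, 3), "serious_incident"))
```

Wiring such a helper into the monitoring dashboard makes the notification window visible the moment an incident is logged.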
 
 
I am convinced that these requirements, although demanding, are a necessary correction to the pace at which AI has been deployed in recent years. They put people back in the spotlight and will ultimately enhance the safety of AI solutions in Europe.
They place humans, not business, at the center of AI development. The Mythos incident is a clear example of why high-risk AI systems require strict oversight and regulatory safeguards. The biggest challenge, apart from technical compliance, will be a cultural shift: moving from rapid development to responsible development.
 
📄 Download the Official EU AI Act (PDF)

Access the full, legally binding text of Regulation (EU) 2024/1689 directly from the Publications Office of the European Union:
👉 Download PDF: https://op.europa.eu/en/publication-detail/-/publication/d79f3e5d-41bc-11f0-b9f2-01aa75ed71a1/language-en

Explore related pillars
👉 Digital Safety Guide - A framework for privacy and safe digital ecosystems.
👉 Neurotechnology Guide - A framework for neural data, BCI technologies, and neuromorphic computing.

👉 Browse all Ethics articles: ethics

For a full overview of AI ethics, visit the main Ethics Guide: https://www.nexsynaptic.com/ethics
 
 
  
