EU AI Act: Delay of AI Rules and New Obligations 2026–2027
Why the delay of the EU AI Act matters for the European AI ecosystem
The European Union is entering a new phase of artificial intelligence regulation after the European Parliament voted to delay the implementation of key parts of the EU AI Act, while final approval from the Council of the EU is still pending.
This delay, which shifts the application of rules for high‑risk AI systems to December 2027, marks a significant shift in the regulatory timeline. At the same time, the EU is introducing new safety and transparency measures, including mandatory labeling of AI‑generated content and new rules targeting deepfake technologies.
With this approach, the EU aims to maintain its ambition to be a global leader in safe and responsible AI, while adopting a more realistic implementation pace.
What is behind the EU’s decision to delay AI rules and soften regulation
The delay in the EU AI Act is the result of a combination of industry pressure, regulatory complexity, and the need to harmonize overlapping digital laws. European companies, from tech startups to large industrial players, have warned that overlapping regulations such as the GDPR, DSA, DMA, and the AI Act place them at a competitive disadvantage compared to companies in the United States and Asia.
The European Commission recognized that rapid implementation could slow innovation, increase compliance costs, and push technological development out of Europe. As a result, the timeline is being adjusted, but the regulatory ambition remains intact.
How the delay of high‑risk AI rules affects the market and industry
The most significant change is the postponement of the high‑risk AI rules from the original deadline of August 2026 to December 2027, a delay of roughly sixteen months; no specific day has been set.
This gives industry additional time to comply with complex requirements involving strict standards for safety, transparency, data governance, human oversight, and technical documentation.
Companies now have more room to build AI governance structures, prepare risk assessments, and implement technical controls required for compliance with the EU AI Act.
Why the question of exempting industrial machinery remains unclear
Earlier interpretations suggested that industrial machinery might be exempt from the EU AI Act, but available sources do not confirm such an exemption. While some industrial systems may fall under other regulations such as the Machinery Regulation, there is no explicit confirmation in preliminary communications. Therefore, this claim cannot be considered official until the consolidated legal text is published.
How the EU AI Act tightens rules for generative AI and deepfake content
Despite delaying some obligations, the EU is introducing new restrictions targeting generative artificial intelligence.
Deepfake regulation is a central focus, but details on the ban of sexualized deepfakes are not yet finalized. According to available information, the ban will primarily target applications that generate sexualized images without consent, including so‑called “nudify” tools.
However, systems with built‑in safety mechanisms may not fall under the ban, and the exact date of enforcement has not been officially confirmed.
Why watermarking AI content is essential for transparency and combating disinformation
Mandatory labeling of AI‑generated content using watermarking will take effect in November 2026, well before the postponed high‑risk obligations apply. This measure aims to increase transparency and reduce the risk of disinformation, manipulation, and fake news.
It is particularly important in political campaigns, media environments, and social networks, where generative AI is increasingly used to create convincing but false content. Watermarking will help users, regulators, and platforms more easily identify AI‑generated material.
How the delay of the EU AI Act fits into the broader digital regulatory strategy and the Omnibus VII initiative
The delay is part of the European Commission’s broader initiative to simplify digital regulation, known as Omnibus VII. The goal is to reduce regulatory burdens, especially for small and medium‑sized enterprises, and to strengthen the competitiveness of the European digital sector.
Companies have long warned that overlapping regulations create a complex and costly compliance environment. The Commission recognized the need to adjust the implementation pace to avoid negative impacts on innovation and investment.
Is the delay of the EU AI Act a concession to Big Tech
Although some critics argue that the EU is yielding to pressure from major technology companies, the situation is more nuanced. The delay and administrative simplification do benefit industry, but the EU is simultaneously introducing stricter measures for generative AI, particularly in protecting fundamental rights and preventing misuse. This demonstrates that the EU is not abandoning its ambition to lead in safe and ethical AI regulation, but is instead seeking a balance between protecting citizens and supporting innovation.
How the delay of the EU AI Act will affect companies, developers, and the AI market
For companies and developers, the delay provides short‑term relief, but long‑term obligations remain equally demanding. Organizations will still need to conduct risk assessments, ensure data quality, maintain technical documentation, implement human oversight, and prepare for the registration of high‑risk systems in the EU database.
Generative AI will face additional obligations, including watermarking, deepfake detection, and preventing the generation of non‑consensual sexual content. All of this requires investment in AI governance, safety mechanisms, and technical infrastructure.
How the delay of AI rules affects safety, fundamental rights, and user trust
The changes have both positive and negative implications. On the positive side, the EU is introducing clear measures to combat deepfakes and sexualized content, protecting women, children, and vulnerable groups. Transparency of AI‑generated content increases user trust and reduces manipulation risks. However, delaying high‑risk AI rules means that areas such as biometrics, law enforcement AI, healthcare technologies, and critical infrastructure will remain less regulated until 2027, raising concerns among human rights organizations.
The delay changes the timeline, not the direction of EU AI regulation
The preliminary agreement to soften parts of the EU AI Act is a political compromise, not a retreat. Europe is trying to balance two goals: protecting fundamental rights and fostering innovation. The AI Act remains the strictest AI regulatory framework in the world, but its implementation timeline is being adjusted to ensure it is practical and sustainable. This approach gives industry time to prepare, regulators time to finalize technical standards, and society a path toward a safer and more transparent AI ecosystem.
AI Transparency: This article was written by the author. AI tools were used to support editing and grammar refinement. This article contains AI‑generated images. The final version was reviewed by a human.

