The European Union has decided to slow down the implementation of its Artificial Intelligence Law (the AI Act) before the regulation fully takes effect. The provisional agreement reached between the Council and Parliament delays some obligations for high-risk systems, reduces overlaps with other regulations, and includes a specific ban on tools that generate intimate or sexual images without consent. The takeaway is clear: Brussels aims to maintain its role as a global regulator without turning the AI Law into a burden for its own industry.
The move comes amid a geopolitical race for AI leadership. The United States advances with a more industry-friendly approach, supported by big tech companies, private investment, cloud services, chips, and public contracts. China regulates content, security, and political oversight of models aggressively, while pushing its national ecosystem at a rapid pace. Europe, in contrast, is trying to forge a third path: strict standards, fundamental rights, and a single market, but with companies that still lack the scale of OpenAI, Google, Anthropic, Microsoft, Meta, Baidu, Alibaba, Tencent, or DeepSeek.
The EU Gains Time to Avoid Slowing Its Own Market
The main change affects high-risk AI systems. Obligations for standalone high-risk systems, such as those used in employment, education, biometrics, critical infrastructure, justice, migration, or access to essential services, are delayed until December 2, 2027. For AI systems integrated into regulated products, the date moves to August 2, 2028. The official argument is that companies need standards, technical guidelines, and compliance tools before the regulation can be fully enforced.
The agreement also pushes back until August 2, 2027, the requirement for national authorities to establish AI regulatory sandboxes—controlled environments for testing systems under supervision. Additionally, it shortens to three months the timeframe for implementing transparency measures for artificially generated content, setting a new deadline of December 2, 2026. In other words, the EU eases the burden on high-risk systems but accelerates the obligation to identify synthetic content.
Another important point concerns machinery. The deal prevents duplication between the AI Law and sector-specific regulations for industrial products. This correction matters to manufacturers, integrators, and automation providers because it reduces the risk of having to meet similar obligations twice. Critics, however, argue that it could open a pathway to weaker controls in sectors where AI decisions have physical impacts.
The EU also introduces a specific ban on AI applications that create non-consensual sexual or intimate images, including so-called “nudifier apps,” as well as on the generation of material involving child sexual abuse. This measure would come into force on December 2, 2026, in response to a visible problem: tools capable of creating fake nude images, sexual deepfakes, and abusive content that cause significant harm to women, teenagers, and minors.
The United States Regulates Less but Concentrates More Power
The comparison with the U.S. is unavoidable. Washington does not have a federal AI law equivalent to the European one. Its model combines executive orders, sectoral regulation, agency actions, public contracts, litigation, and state laws. The White House has pursued a 2025–2026 approach focused on removing barriers to innovation, avoiding regulatory fragmentation among states, and solidifying U.S. AI leadership.
This approach grants more flexibility to companies. OpenAI, Microsoft, Google, Anthropic, Meta, Amazon, NVIDIA, and other players can deploy models, infrastructure, and services with fewer horizontal obligations than in Europe. This accelerates product development, investment, and enterprise adoption but also concentrates power within a small group of companies controlling models, cloud services, chips, data, distribution, and talent.
Moreover, the U.S. regulatory debate remains open. Some states have passed or proposed their own AI laws, especially concerning automated decisions, minors, employment, health, or consumer issues. Colorado, California, New York, and Texas are examples of state-level regulation activity. The tension lies in whether Washington will allow this patchwork or impose a federal policy that limits states’ ability to tighten rules.
The U.S. is also rapidly integrating AI into defense. The Pentagon has made agreements with major tech companies to use AI systems in classified environments, with applications ranging from logistics and predictive maintenance to information analysis and decision support. This direct relationship between the state, defense, and big tech contrasts with Europe’s slower, more cautious approach.
China Regulates Strictly but Acts Rapidly
China presents a different picture. It lacks a single comprehensive law like Europe’s but has built a layered regulatory framework: rules on algorithmic recommendation, deep synthesis, generative AI services, data security, personal information protection, and synthetic content labeling. Providers of generative models for the public must pass evaluations, register algorithms, and adhere to content, security, and political alignment requirements.
The key difference is that China combines regulatory control with swift industrial implementation. The government sets clear red lines, especially regarding censorship, social stability, national security, and data control, while also pushing its companies to compete in models, chips, applications, robotics, and domestic cloud services. This model isn’t directly transferable to Europe for political and legal reasons, but it demonstrates an operational advantage: when prioritizing a sector, China coordinates funding, permits, domestic markets, and deployment swiftly.
China has also shown the ability to counter U.S. leadership with competitive, more efficient models. The emergence of DeepSeek and other Chinese players has reinforced the idea that regulation and investment alone are not enough—execution, cost reduction, developer ecosystems, and deploying products to real companies matter significantly.
That’s the European dilemma. The EU has created the world’s most influential regulatory framework but has yet to build an industrial AI base comparable to that of the U.S. or China. It has talent, research centers, software companies, advanced industry, and regulatory capacity. What it still needs are more chips, sovereign cloud infrastructure, patient capital, public procurement at scale, and less fragmentation among member states.
The adjustment to the AI Law attempts to address part of this issue but doesn’t solve it entirely. Delaying obligations can give breathing room to firms and governments, but Europe needs more than just flexible deadlines. It requires its regulation to be supported by computing capacity, high-quality data, proprietary models, cloud and edge infrastructure, funding, and public demand. Without these, the AI Law may end up as a highly sophisticated norm for a market that mainly imports technology designed elsewhere.
The comparison with the U.S. and China leads to an uncomfortable conclusion. The U.S. is willing to tolerate more risk to maintain its leadership. China accepts more state control to speed up technological autonomy. Europe aims to protect rights, competition, and security, but now recognizes that overly burdensome regulation could slow down those still trying to compete.
The good news is that Brussels seems to understand that regulating AI can’t just mean adding obligations. The bad news is that this correction comes when the race is already far advanced. The AI Law remains a global reference, but its success will depend not only on how strict it is but also on whether Europe manages to turn it into a trust-building advantage rather than an entry barrier for its own companies.
Frequently Asked Questions
What has changed in the European AI Law?
The provisional agreement delays some obligations for high-risk systems, avoids overlaps with machinery regulation, adjusts sandbox timelines, and accelerates transparency rules for AI-generated content.
Is Europe softening its regulation compared to the U.S. and China?
Partially. The EU doesn’t eliminate the AI Law but eases and postpones certain elements to reduce administrative burden and prevent compliance from hindering European technological development.
How does the U.S. regulate AI?
The U.S. lacks a comprehensive federal law similar to Europe’s. It relies on executive orders, agencies, public contracts, sectoral regulations, and state laws, with a focus on maintaining industrial leadership.
How does China regulate AI?
China applies a layered model, requiring algorithm registration, security evaluations, content control, and rules on generative AI, deep synthesis, and synthetic content labeling within a broader strategy of national technological autonomy.