The company rejects the framework proposed by the European Commission, citing legal uncertainty and over-regulation, amid growing tension between Brussels and Silicon Valley.
In a new sign of friction between the U.S. tech industry and European authorities, Meta has confirmed it will not sign the Code of Practice for General-Purpose AI (GPAI) published by the European Commission on July 10. Although the code is voluntary and declining it breaks no law, the refusal amounts to a direct rejection of an instrument designed to ease compliance with the AI Act, the ambitious European regulation aiming to govern artificial intelligence across the continent.
Joel Kaplan, Meta's Chief Global Affairs Officer, stated, "Europe is heading down the wrong path on AI." He argued that the code introduces "legal uncertainties for developers" and includes measures that "go beyond the scope of the AI Act itself." This is not Meta's first public criticism of European AI legislation, but this time the company has chosen to walk away from a framework that other industry players see as useful for avoiding sanctions and streamlining regulatory compliance.
What does the GPAI Code of Practice entail?
The Code of Practice for General-Purpose AI models is structured into three chapters:
– Transparency: Includes a standardized form to document how a model is trained and functions, in line with Article 53 of the AI Act.
– Copyright: Provides practical guidelines for complying with European copyright law, especially relevant in an era of large-scale model training on protected digital material.
– Safety and Security: Aimed at providers of advanced models posing systemic risk, under Article 55 of the AI Act, and defines best practices to mitigate the potential negative impacts of these systems.
Although non-binding, the code lets signatories demonstrate compliance with the AI Act more quickly and with a lighter bureaucratic burden. Signing offers greater legal predictability and reduces the risk of sanctions, which for the most serious violations can reach 7% of global annual turnover.
Meta, innovation, and rejection of the European model
Meta's stance fits a broader strategy of minimizing regulatory obligations outside the U.S. The company has previously labeled the AI Act "excessive" and "counterproductive," claiming it could delay product releases and hinder innovation.
At the same time, Meta is investing heavily in its "Superintelligence" division to compete with OpenAI and Google DeepMind, including high-profile hires and the development of increasingly powerful foundation models. Without adherence to the European framework, however, the company faces greater exposure to regulatory investigations, lawsuits, or restrictive measures within the EU, particularly if any of its models are deemed to pose systemic risk.
A geopolitical tug-of-war
The move also carries political implications. The Trump administration, which took office in January, has expressed discontent with the AI Act, calling it a "form of fiscal imposition" on innovation. In April, it actively pressured Brussels to soften or abandon the regulation, aligning itself with major U.S. tech firms.
Conclusion: Cooperation or confrontation?
While companies like Mistral, Aleph Alpha, and even Microsoft are cautiously weighing the code and considering signing as a pragmatic step, Meta appears to be betting on calculated confrontation with European authorities. As global AI governance enters a critical phase, the refusal of one of the sector's most influential players could deepen the transatlantic regulatory divide.
The lingering question is: how far can Meta ignore Brussels' rules without repercussions? And, conversely, can Europe set global AI standards without the buy-in of U.S. tech giants? As with AI itself, the answer is still evolving.
via: EU AI Law and AI News