The European AI Regulation's sanctions regime enters into force: fines of up to 35 million euros for violations

This Friday, August 2nd, marks a turning point in technology regulation in Spain and across the European Union. From this date, the sanctions regime of the European AI Regulation (EU AI Act) applies, allowing significant financial penalties to be imposed on companies that breach the rules governing the development, deployment, and use of artificial intelligence systems. Fines can reach 35 million euros or 7% of a company's global annual turnover, whichever is higher.

This new sanctions framework arrives amid public skepticism. According to a survey by Entelgy, The Business Tech Consultancy, only 8.8% of citizens believe the regulations currently governing AI are sufficiently strict. This is compounded by a marked distrust of institutions: 88.6% of respondents feel that the authorities do not provide sufficient guarantees regarding the oversight and supervision of this technology.

Furthermore, the survey highlights low general awareness of the existing rules. Only 11.4% of people say they are informed about current AI regulations, a figure that rises to 19.3% among those aged 18 to 29. These numbers underscore the urgent need for awareness campaigns and education efforts to bring the legislation closer to the public and build trust in the ethical and safe development of these technologies.

Privacy is also among the top concerns. Eight out of ten people fear that AI systems may collect personal information without adequate protections. This concern is especially high among those aged 30 to 49 (81.4%) and over 50 (81%).

### Practices Penalized Under the New Regulations

Now that the sanctions regime is in force, penalties target practices considered to pose an “unacceptable risk” to fundamental rights and freedoms. These include:

– Subliminal or deceptive manipulation
– Exploiting vulnerabilities in specific groups
– Social scoring or social credit systems
– Mass facial recognition in public spaces
– Emotion analysis in workplaces or educational environments
– Biometric categorization of individuals
– Predictive systems that anticipate criminal behavior

Companies using or developing such technologies must exercise extreme caution to avoid violations. Transparency, data traceability during model training, human oversight when necessary, and comprehensive technical documentation will be key requirements.

Additionally, companies will be required to inform users when they are interacting with an AI system. Active collaboration with the Spanish Agency for AI Supervision (AESIA), responsible for ensuring compliance with the regulation domestically, will also be mandatory.

### Moving Toward Safer Development

Although the European regulation was already approved, the entry into force of its sanctions framework now provides a concrete tool for ensuring the responsible development and use of AI. Organizations will need to carefully review the general-purpose models integrated into their services and implement appropriate safeguards to minimize legal risk.

The challenge extends beyond technical compliance; it also involves restoring public trust, which demands greater protection and transparency in the increasingly prevalent use of AI technologies. The enforcement of the AI regulation’s sanctions regime is a decisive step toward achieving that goal.
