On August 2, 2025, the sanctions regime of the EU Artificial Intelligence Act (AI Act) comes into effect. Despite this regulatory milestone, most people still perceive a lack of control and transparency in how these technologies are used.
The European Union opens a new chapter in tech governance with the enforcement of the AI Act, a legal framework that is a global first. From August 2 onward, violating the rules set out in the AI Act can result in fines of up to €35 million or 7% of the company’s global annual turnover, whichever is higher.
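To see what that cap means in practice, here is a minimal sketch of the “whichever is higher” rule from the Act’s penalty provisions. The turnover figure is an invented example, not data about any real company.

```python
# Illustrative sketch of the AI Act's fine cap for prohibited practices:
# EUR 35 million or 7% of global annual turnover, whichever is higher.

def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for 'unacceptable risk' violations."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in global annual turnover:
print(f"{max_fine_eur(2_000_000_000):,.0f} EUR")  # -> 140,000,000 EUR
```

For any company with more than €500 million in global turnover, the 7% figure is the binding one; below that threshold, the €35 million floor applies.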
The stated goal is to ensure the safe, ethical, and responsible use of AI systems, especially those classified as high-risk or with a significant impact on fundamental rights. This regulatory step, however, exposes a substantial gap between regulators’ intentions and public perception.
88.6% of Spaniards Do Not Trust Institutional Oversight of AI
According to a survey by the tech consultancy Entelgy, only 8.8% of Spanish citizens believe AI is currently subject to strict regulation. Moreover, a striking 88.6% feel that institutions do not provide sufficient assurance about how these systems are developed and overseen.
Privacy concerns are also prominent: 80% of Spaniards fear that AI systems may collect personal data without adequate safeguards, a concern most acute among those aged 30-49 (81.4%) and those over 50 (81%).
What Does the New European Regulation Prohibit?
The AI Act classifies AI systems by risk level and explicitly bans practices deemed to pose an “unacceptable risk,” including:
- Subliminal or psychological manipulation
- Exploiting physical or mental vulnerabilities
- Mass facial recognition in public spaces
- Emotion recognition in workplaces or educational settings
- Social scoring systems
- Crime prediction based on profiling
- Biometric categorization without clear legal basis
These activities are strictly prohibited and subject to direct penalties.
Technical Implications: Transparency, Traceability, and Supervision
From a technical standpoint, the AI Act requires companies that develop or integrate AI to:
- Document how their models operate internally
- Ensure human oversight of sensitive decisions
- Clearly inform users when they are interacting with AI
- Assess and manage the risks of foundation and general-purpose models
They must also cooperate actively with Spain’s AI Supervisory Agency (AESIA), in line with national strategies for digital transformation and technological sovereignty. The sketch below illustrates how two of these duties might look in code.
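The Act does not prescribe any particular implementation, so what follows is only a hypothetical sketch of the transparency and traceability duties: disclosing to users that they are talking to an AI system, and keeping an append-only log of automated decisions flagged for human review. All names here (AIDisclosureChatbot, decision_log.jsonl) are invented for illustration.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical sketch of AI Act-style transparency and traceability.
# Nothing here is an official template or a mandated format.

@dataclass
class DecisionRecord:
    timestamp: str            # when the decision was made (UTC)
    model_version: str        # which model produced it
    input_summary: str        # what the decision was based on
    output: str               # the automated decision itself
    needs_human_review: bool  # flag for human oversight of sensitive cases

class AIDisclosureChatbot:
    DISCLOSURE = "You are interacting with an AI system, not a human."

    def __init__(self, model_version: str, log_path: str = "decision_log.jsonl"):
        self.model_version = model_version
        self.log_path = log_path

    def reply(self, user_message: str, sensitive: bool = False) -> str:
        answer = self._generate(user_message)  # placeholder for the real model call
        self._log(user_message, answer, needs_human_review=sensitive)
        # Transparency duty: the user is always told an AI is answering.
        return f"{self.DISCLOSURE}\n{answer}"

    def _generate(self, user_message: str) -> str:
        return f"(model output for: {user_message!r})"

    def _log(self, user_message: str, answer: str, needs_human_review: bool) -> None:
        record = DecisionRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            model_version=self.model_version,
            input_summary=user_message[:200],
            output=answer,
            needs_human_review=needs_human_review,
        )
        # Append-only JSON Lines log gives auditors a simple decision trail.
        with open(self.log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(record), ensure_ascii=False) + "\n")

bot = AIDisclosureChatbot(model_version="demo-0.1")
print(bot.reply("Am I eligible for this loan?", sensitive=True))
```

An append-only JSON Lines file is used here simply because it is easy to audit; a real deployment would need tamper-evident storage and retention rules matching its documentation obligations.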
Low Legal Literacy Among Citizens
The Entelgy report also highlights low legal literacy: only 11.4% of respondents say they are familiar with current AI legislation, a figure that rises to just 19.3% among those aged 18-29. This lack of knowledge hampers both the defense of digital rights and responsible technology adoption.
Entelgy warns that this limited understanding reduces trust in institutions to protect citizens’ rights and underscores the need for increased transparency and education.
AI and Perception of Risk: A Challenge for the Tech Ecosystem
The AI Act is central to Europe’s strategy for trustworthy AI, setting it apart from opaque or poorly auditable approaches. Yet widespread distrust shows that legislation alone is not enough. A comprehensive approach should also include:
- Digital and legal literacy
- Open audits and technical certifications
- Promotion of ethical, transparent, and explainable AI (XAI)
- Citizen participation in AI policy design
What About Tech Companies?
For tech firms, the new framework is both a challenge and an opportunity. Compliance may carry initial costs for audits, system redesign, or legal validation, but it also offers a chance to stand out through transparency, security, and ethics by design.
With the sanctions regime in place, Europe sends a clear message: innovation must be paired with responsibility. For citizens, it’s now time to demand transparency, exercise digital rights, and participate in shaping the AI systems we want to build.
In Summary
| Indicator | Key value |
| --- | --- |
| Maximum fine under the AI Act | Up to €35 million or 7% of global annual turnover, whichever is higher |
| Citizens who believe AI regulation is strict | 8.8% |
| Citizens who distrust institutional oversight | 88.6% |
| Citizens familiar with current AI laws | 11.4% (19.3% among those aged 18-29) |
| Citizens worried about privacy | 80% |
Source: Noticias Inteligencia Artificial