EU AI Regulation Takes Effect: A Milestone in Tech Regulation

Early February 2025 marks a turning point in the history of artificial intelligence (AI) in Europe. On 2 February 2025, the first key provisions of the European Union's Artificial Intelligence Act come into effect, a groundbreaking framework aimed at establishing limits and guarantees for the development and use of this technology. Approved in March 2024, the regulation is the most ambitious legal framework to date for governing AI, seeking to balance innovation with the protection of fundamental rights.

Artificial intelligence has transitioned from a futuristic concept to an omnipresent tool in sectors such as healthcare, transportation, education, and justice. However, its rapid advancement has posed ethical and legal challenges that require a coordinated response. The EU has taken the lead in this matter, designing a risk-based approach that classifies AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal or no risk.

Prohibitions and Fines: A Commitment to Safety

Among the most notable measures is the ban on AI systems deemed to pose unacceptable risks. These include systems that violate fundamental rights, infringe on privacy, or present a significant threat to citizen safety. Companies operating such systems could face fines of up to 35 million euros or 7% of their worldwide annual turnover, whichever is higher, reflecting the seriousness with which the EU addresses this issue.

The regulation also imposes strict obligations on high-risk systems, including those used in critical sectors such as healthcare, transportation, or justice. These systems must comply with standards of transparency, human oversight, and cybersecurity. Additionally, providers are required to document how their systems operate and to register them in a public EU database, ensuring greater traceability.

Transparency and Citizen Rights

One of the pillars of the regulation is the protection of citizens in their interactions with AI. When an automated decision affects an individual, they have the right to be informed that AI was used and to request an explanation of the basis for the decision. This transparency requirement is particularly relevant in sensitive areas such as recruitment, credit granting, or public administration.

Furthermore, the regulation strictly governs remote biometric identification systems, allowing their use only in very specific situations and under conditions of rigorous oversight. This measure aims to prevent the indiscriminate use of technologies that could violate privacy or foster discrimination.

Innovation in a Safe Framework

To avoid stifling innovation, the regulation provides for regulatory sandboxes: controlled environments where companies can test high-risk AI systems under real-world conditions but with strict monitoring. These settings allow the impact of a technology to be evaluated and potential risks to be mitigated before market launch.

In Spain, oversight of the regulation will fall to the Spanish Agency for Artificial Intelligence Supervision (AESIA), a key entity for ensuring compliance with the new rules.

A Global Precedent

With the implementation of these measures, the EU positions itself as a leader in artificial intelligence regulation, adopting an approach that combines innovation and security. This framework may serve as a model for other regions of the world seeking to regulate AI responsibly. However, it remains to be seen how this regulation will affect the development of the sector and whether it will achieve the necessary balance between technological advancement and rights protection.

In a world increasingly reliant on technology, the EU Artificial Intelligence Regulation is not just a response to current challenges but also a commitment to a future in which AI serves humanity and does not pose a threat to its fundamental values.

Source: Artificial Intelligence News; Regulation (EU) 2024/1689 of the European Parliament and of the Council (DOUE-L-2024-81079)