On August 1, the European Union’s Artificial Intelligence Regulation came into effect, aiming to govern the use of this technology in the European market. Its main objective is to promote the development and safe use of AI without undermining the fundamental rights of individuals.
However, do we really understand what this regulation implies? According to a survey conducted by Entelgy, 51% of Spaniards claim to be aware of its existence, but only 7.3% understand its impact.
In recent years, artificial intelligence has advanced by leaps and bounds, transforming society and creating great opportunities for businesses and citizens. However, these advancements have also raised concerns about privacy, security, and ethics. The proliferation of AI applications without proper oversight has exposed vulnerabilities such as algorithmic bias, discrimination, and lack of transparency.
The same survey indicates that 93.1% of Spanish citizens believe that regulation of the use of Artificial Intelligence is needed for both businesses and individuals.
In this scenario, Entelgy reviews the main aspects of the new European regulation on Artificial Intelligence and how it affects both companies and individuals.
Main objectives of the regulation
The main objective of the AI Regulation is to protect the safety and fundamental rights of EU citizens against the potential risks of AI. To achieve this, the regulation classifies AI applications into four levels of risk: unacceptable, high, limited, and minimal. Systems that manipulate human behavior or perform real-time facial recognition in public spaces fall into the highest-risk, unacceptable category, while limited- and minimal-risk applications are subject to transparency obligations so that users know when they are interacting with an AI system.
Scope of application
AI providers in the EU must adapt their processes to comply with the new regulation. This involves conducting comprehensive risk assessments, meeting technical and transparency requirements, preparing detailed technical documentation, undergoing compliance audits, and incorporating ethical and security principles at every stage of AI system development.
Global implications
This new regulation has a significant and positive impact on EU citizens, as it ensures protection of their fundamental rights, improves transparency and security of AI applications, and builds trust in emerging technologies.
Moreover, the influence of this regulation could extend beyond Europe, setting an international standard for AI regulation. Companies operating globally will have to adapt to these new requirements to access the European market, potentially incentivizing the adoption of responsible practices worldwide.
“This regulation represents a crucial step in ensuring that AI applications are developed and used safely and ethically. This will not only strengthen consumer confidence in AI technologies but also promote an environment of innovation. In light of this, companies must facilitate the adoption of the proposed measures and practices,” stated Entelgy.