17 March 2025 · Lucas Charnet

Regulation of Artificial Intelligence in Europe and Spain: Legal Framework and Risk Levels

Data Protection

![](https://www.mesadvocats.com/blog/wp-content/uploads/pexels-cottonbro-6153354-scaled.jpg)

Artificial Intelligence (AI) has evolved rapidly in recent years, transforming multiple sectors and posing significant legal and ethical challenges. Its ability to automate processes, analyze large volumes of data, and make autonomous decisions has sparked a debate on the need for a regulatory framework that balances innovation with the protection of fundamental rights. In response, the European Union has taken a pioneering role by approving the Artificial Intelligence Regulation (AI Act), a key law that establishes obligations and restrictions based on the risk levels of each AI system. In Spain, authorities have begun aligning national legislation with these European provisions, implementing monitoring and control measures.

In this article, we analyze everything you need to know about the new AI regulation in Europe and Spain, the risk levels it establishes, and the challenges it presents for businesses and professionals.

The EU Artificial Intelligence Regulation: A Risk-Based Approach

The AI Act, approved in 2024, establishes a regulatory framework based on classifying AI systems according to their risk level. Four main categories are identified, each with specific implications and restrictions:

Unacceptable Risk

AI systems that pose a clear threat to security, fundamental rights, or democratic values are prohibited. Examples include:

  • Systems that subliminally manipulate human behavior, such as algorithms designed to influence voting decisions without the user’s awareness.
  • Real-time biometric surveillance tools in public spaces without clear legal justification, such as mass facial recognition without consent.
  • Social scoring models similar to China’s system, where citizens receive advantages or restrictions based on their digital or financial behavior.
High Risk

This includes AI applications used in critical sectors that affect security or fundamental rights. Some examples are:

  • Healthcare: AI systems that diagnose diseases or recommend medical treatments must ensure accuracy and transparency.
  • Education: Algorithms used in admission or student evaluation processes, where biased data could impact access to opportunities.
  • Justice and security: Crime prediction tools that could discriminate against certain groups if not properly designed.
  • Human resources: AI systems that filter résumés or evaluate job performance, ensuring they do not generate unfair discrimination.
These systems must comply with strict controls, including bias audits, human supervision, and explainability of their decisions.

Limited Risk

This category includes tools that require transparency obligations to inform users of their interaction with AI. Examples include:

  • Chatbots and virtual assistants: such as Siri or Alexa, which must clearly indicate they are AI-based programs and not humans.
  • AI-generated content: Images, videos, or texts created automatically (deepfakes, AI-generated digital art), which must specify their artificial origin.
Minimal Risk

This includes everyday AI systems that do not have a significant impact on rights or security. Examples include:

  • Spam filters in emails, which automatically classify messages.
  • Recommendation systems on platforms like Netflix or Spotify, suggesting content based on user habits.
  • AI in video games, used to enhance player experience without affecting fundamental rights.
The regulation imposes significant penalties for non-compliance. Fines can reach up to €35 million or 7% of the infringing company’s global annual turnover, whichever is higher, similar to GDPR sanctions, highlighting the importance of compliance.
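As a simple illustration of how the two thresholds interact, the maximum exposure can be sketched as the greater of the fixed amount and the turnover-based amount (a minimal sketch, assuming the "whichever is higher" reading of the AI Act's penalty caps; the function name and integer-euro representation are illustrative, not part of the regulation):

```python
def max_ai_act_fine(global_annual_turnover_eur: int) -> int:
    """Upper bound of an AI Act fine for the most serious infringements:
    the higher of EUR 35 million or 7% of worldwide annual turnover.
    Amounts are in whole euros to keep the arithmetic exact."""
    return max(35_000_000, global_annual_turnover_eur * 7 // 100)

# A company with EUR 1 billion in global turnover: 7% (EUR 70M) exceeds EUR 35M.
print(max_ai_act_fine(1_000_000_000))  # 70000000
# A smaller company with EUR 100 million in turnover: the EUR 35M floor applies.
print(max_ai_act_fine(100_000_000))  # 35000000
```

In practice, the turnover-based cap dominates only for companies whose worldwide annual turnover exceeds €500 million; below that, the fixed €35 million ceiling is the relevant figure.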

The Impact in Spain: Adaptation and Regulatory Strategies

In Spain, the Spanish Agency for AI Supervision (AESIA) will play a key role in enforcing the AI Act and ensuring compliance with the regulations. AESIA, created to guarantee the ethical and responsible use of AI, will have control and sanctioning functions to ensure that AI systems implemented in the country meet European security and fundamental rights standards.

The Spanish government has launched the National Artificial Intelligence Strategy (ENIA), a plan aimed at positioning Spain as a leader in developing safe and ethical AI. This strategy includes measures such as investment in R&D, training AI professionals, and promoting public-private collaboration to develop AI tools that boost the economy and improve public services.

Additionally, specific requirements have been established regarding data protection and cybersecurity. Companies using AI must ensure transparency in their algorithms, prevent discriminatory biases, and comply with the General Data Protection Regulation (GDPR). New obligations have been introduced for impact assessments on fundamental rights in high-risk systems to prevent discrimination or harmful decisions for citizens.

Challenges and Opportunities for Businesses and Professionals

AI regulation presents a significant challenge for companies and professionals developing or implementing these systems, as it involves:

  • Strict audits and controls for high-risk systems, requiring companies to establish internal monitoring mechanisms and review algorithmic biases.
  • Transparency obligations for limited-risk tools, ensuring that users are informed when interacting with AI and allowing external audits of its operation.
  • Impact assessments on fundamental rights, particularly in sectors such as finance, healthcare, and human resources, where automated decisions can significantly affect individuals.
However, this regulation also brings important opportunities. Standardizing requirements allows companies to develop innovative solutions with greater legal certainty and increased user trust. Additionally, the regulatory framework promotes differentiation for companies that prioritize ethics and transparency in AI, which can become a competitive advantage in the market.

Investment in regulatory compliance can also become a growth tool, ensuring that AI products and services can expand within the European market without legal risks. Furthermore, collaboration with regulatory bodies and the implementation of best practices can open doors to incentives and public funding for the responsible development of AI.

At MES Advocats, we offer specialized advisory services on AI regulatory compliance, impact assessments, and legal audits to ensure your company adapts to the new regulatory requirements. Contact us to analyze how this regulation affects your business and ensure effective compliance.



Regulatory Compliance · Fundamental Rights · Artificial Intelligence · Legal Regulation
