
19 March 2026 · mesadvocats

Does Your Company Comply with Artificial Intelligence Regulations in 2026? The Legal Guide You Need Before Using AI (and How to Pass a Compliance Audit)


Artificial intelligence / Intellectual property

Lucas Charnet

*Image: a company using artificial intelligence (legal guide to AI compliance and regulatory auditing, 2026)*

There is a kind of legal risk that slips in quietly. It doesn’t arrive with a formal notice, a lawsuit, or a takedown request. It arrives earlier—when a company starts using artificial intelligence “like installing a new app”: to draft copy, automate replies, screen candidates, design visuals for campaigns, generate videos, summarize contracts, or even create music and voice-overs. Everything works… until someone asks the uncomfortable question: “Are we compliant?”

In 2026, that question is no longer theoretical. Europe has launched the most ambitious regulatory framework in the world for artificial intelligence. Spain, in addition, has activated supervisory bodies and is translating the European model into real, enforceable compliance. And for the creative sector the impact is twofold: we are not only talking about “AI rules”, but also about intellectual property, copyright, licensing, transparency and traceability. At MES Advocats we see this every week: most companies are not trying to “do something wrong”, but many are using AI without a map—and that is exactly what creates problems.

This article is written so that any business owner, marketing lead, legal counsel, HR manager or cultural/creative producer can understand the landscape and, above all, quickly take stock: where am I, what risks do I have, and when do I need an AI legal compliance audit? If you prefer an audio version, we also cover this in our podcast Autores con Derechos, episode #58.

As we usually do on our blog, the approach is practical and narrative—designed to prevent nasty surprises and turn compliance into a competitive advantage.

AI regulation is not a single law: it is a layered system

One of the most common misconceptions is believing there is a single “AI law” that covers everything. In reality, the framework is layered. First come international principles that set a minimum baseline. Then comes the major European block that harmonizes rules across the internal market. And finally, each State—Spain included—organizes authorities, procedures and supervision mechanisms. The practical result is clear: using AI stops being an “IT issue” and becomes a matter of corporate governance and compliance. The key question is not whether your company “develops” AI, but whether it uses AI in activities that affect people, consumers, data or creative content.

In other words: just as driving is not only about knowing how to accelerate, brake and steer, using AI is no longer only about “knowing how to prompt the tool”. Regulators focus on four concrete points: what you use it for, what data you use, what impact it has, and what controls you have put in place.

International level: AI cannot be a “lawless zone”

At the international level, the most relevant movement is not an endless list of rules, but a consolidating idea: AI must respect human rights, democracy and the rule of law. In Europe, an important milestone has been the Council of Europe's Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, opened for signature on 5 September 2024, precisely to set that shared "baseline". The goal is not to stop innovation, but to prevent technological deployment from normalizing practices that would be unacceptable in any other context: manipulation, discrimination, opacity, impersonation, or automated decisions without safeguards.

And if we look at the United States, the contrast helps explain why Europe talks so much about “compliance”. There, the major issue tends to be regulatory fragmentation, with approaches that vary across states, sectors and agencies. This matters enormously for any company operating in multiple territories: in Europe, compliance tends to be more uniform thanks to a common regulation; in the U.S., the challenge is often managing differences and adapting policies in a more complex way.

Europe: the AI Act is the new “highway code” for AI

The heart of the European system is the European Artificial Intelligence Regulation, known as the AI Act. And it is worth saying plainly: it is the rulebook for placing, offering and using AI systems in Europe. The European Commission explains it clearly on its official portal, including the timeline and the risk-based approach.

The most important thing about the AI Act is that it does not attempt to regulate AI as one homogeneous block. Instead, it classifies uses by risk level. That avoids the abstract "AI yes / AI no" debate and turns it into concrete questions: Are you using AI in a sensitive area? Does it affect rights? Could it cause discrimination or harm? Are you creating content that may mislead the public?

That classification translates into three major practical consequences. First, there are practices the EU considers so dangerous that it bans them outright. Second, there are high-risk uses, where AI can seriously affect people—access to jobs, services, opportunities or safety—and the law requires a package of measures: risk management, documentation, traceability, data quality, human oversight, security controls, and the obligation to be able to explain what the system does. Third, there is a block where the key requirement is transparency, especially when it comes to interaction with automated systems or synthetic content capable of simulating reality.

This is where many companies get surprised. Because “transparency” is not theoretical: in marketing, audiovisual work or social media it can mean that, in certain cases, you must inform users that content is generated or manipulated with AI, or that they are interacting with an automated system. And this does more than avoid penalties: it reduces reputational risk, protects consumers and builds brand trust.

The implementation timeline: what already applies and what is coming soon

Another essential point for any AI audit is the timeline. The AI Act entered into force on 1 August 2024 and its application is phased. The European Commission’s schedule makes this clear: as of 2 February 2025 the bans and the AI literacy obligation started to apply; on 2 August 2025 governance rules and obligations for general-purpose AI models (GPAI) entered into force; and on 2 August 2026 the Act becomes generally applicable, with a longer transition until 2027 for certain high-risk systems integrated into regulated products.

This timeline matters because it dismantles a common myth: “we’ll deal with it later.” No. A company using AI today should already have addressed, at minimum, basic internal training on responsible use, identification of sensitive cases and tool traceability. And if it works with general-purpose models, it should also be aware of European guidance and the “code of practice” designed to facilitate compliance.

General-purpose models and the code of practice: why it affects your company even if you’re not Big Tech

In 2025, the European Commission published a code of practice for general-purpose AI models (GPAI) as a voluntary tool to help the industry meet safety, transparency and copyright-related obligations within the AI Act framework.

Why should a "normal" company care? Because most companies do not train large models, but they do use them: they integrate third-party systems into workflows, generate content with tools built on foundation models, or deploy assistants in customer support. When your provider tells you "we comply", the sensible follow-up is: How is the model documented? What transparency measures do you provide? How do you manage copyright issues? These are exactly the questions that get formalized in an AI legal audit.

Spain: direct application of the AI Act and supervision through AESIA

In Spain, the starting point is simple: the AI Act applies directly because it is a European Regulation. What Spain must organize is the “how”: authorities, coordination, supervision and operational enforcement. In this institutional rollout, a key milestone is the creation of the Spanish Agency for the Supervision of Artificial Intelligence (AESIA) through Royal Decree 729/2023, published in the Official State Gazette.

For a company, the effect is straightforward: compliance stops being a PDF in a drawer and becomes a reality with interlocutors, criteria and oversight. And, moreover, in Spain the regulatory and social debate around synthetic content, impersonation and deepfakes is intensifying, especially when it impacts personality rights such as honor, privacy and image. Without getting into specific reforms here, the trend is clear: using AI for image and voice increasingly requires diligence and protocols.

Intellectual property: where AI really “touches” the creative business

For production companies, labels, publishers, agencies and content-driven businesses, AI is not only about “complying with the AI Act”. It is a business question: what happens to my works, catalogs and licenses when AI is trained on creative content or generates similar outputs?

Here it helps to distinguish two moments. The first is training. For a system to generate high-quality music, scripts, images or voices, it has usually learned from large volumes of pre-existing content. In Europe, the key legal framework is Directive (EU) 2019/790 (the DSM Directive), which regulates, among other things, text and data mining (TDM) and provides that rightsholders may reserve rights (opt-out) in certain circumstances.

This is not theory. The current discussion focuses on how that opt-out should be expressed effectively and in a “machine-readable” way, and how that impacts model training and content management across creative sectors. Specialized firms and authors are closely analyzing the “opt-out machine” and its growing role in the European ecosystem.
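To make the "machine-readable opt-out" idea concrete: one common way site owners currently express a reservation against AI training crawlers is through robots.txt directives aimed at publicly documented training bots. This is an illustrative sketch only, not legal advice, and it is debated whether robots.txt alone satisfies the DSM Directive's requirements for an effective reservation:

```
# Illustrative robots.txt fragment: signal a reservation against
# AI training crawlers while leaving ordinary search indexing open.
# The user-agent names below are publicly documented by their operators.

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# All other crawlers (including regular search engines) remain allowed
User-agent: *
Allow: /
```

More formal mechanisms are also emerging, such as the W3C community group's TDM Reservation Protocol (TDMRep), which proposes expressing the reservation in a dedicated machine-readable file or in HTTP headers. Which of these signals courts and regulators will treat as a valid Article 4 opt-out is precisely the open question the sector is watching.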

The second moment is the output. An AI-generated output may look original, but it can still incorporate recognizable elements, reproduce patterns that are too close, or mislead the public about authorship, source or identity. And when we talk about voices and faces, the risk is not only copyright: image rights, reputation and, in some cases, impersonation also come into play. That is why, for creative businesses, real compliance is not limited to “using a tool”—it requires governance, contracts and controls.

Conclusion: in 2026 AI is no longer “just technology”—it is compliance (and opportunity)

The right question is no longer whether your company uses AI. The reality is that almost all companies do—sometimes without even calling it AI. The right question is whether it is used with legal certainty: respecting the AI Act’s risk-based approach, applying transparency where required, organizing internal supervision, and paying particular attention to the impact on intellectual property, image and content.

At MES Advocats, we provide comprehensive advice and support in the registration, protection and management of creative works, offering a personalized and efficient service for our clients. Our experience allows us to deliver a fast, tailored service with excellent results. If you need more information or would like a no-obligation quote, please do not hesitate to contact us.




---