This month, European Union legislators formally passed the EU AI Act, a landmark set of requirements aimed at regulating some of the riskiest applications of artificial intelligence. With the move, the EU has leapfrogged the U.S. and the UK to become the first governmental body to develop such a comprehensive set of AI laws. The act’s provisions include:

Risk-based rules

The law regulates AI applications according to their potential harms to society. High-risk applications, including those in elections, healthcare, law enforcement, and critical infrastructure, will face the most scrutiny. In contrast, low-risk applications, such as spam filters and content moderation, will be subject to lighter rules.

Transparency mandates for general-purpose models

To comply with EU copyright law, companies will have to share detailed summaries of the data used to train their models. Companies will also have to report their models’ energy use and disclose any malfunctions that cause harm to health or property.

Bans on certain applications of AI

Certain uses of AI are outright forbidden under the EU AI Act’s provisions. These include “emotion recognition systems” in schools and applications in predictive policing. The risks of failing to comply can be severe: the European Commission’s AI Office can enforce penalties of as much as 7% of annual global revenue on those it deems in violation of the rules. The EU will also give citizens the right to file complaints against AI developers.

GDPR REDUX

While the EU AI Act must clear a few formalities before it reaches final approval, it’s already clear that the law’s impact will be significant and extend far beyond the borders of the EU. That’s because the law applies not just to companies headquartered within the EU, but also to those whose services are used within the region.

In that sense, the requirements resemble another recent EU law aimed at curbing technology overreach: 2018’s General Data Protection Regulation (GDPR), which applied a similar approach to oversight and enforcement. The GDPR set a high bar for personal data protection laws worldwide. As a result, many organizations found it more efficient to apply GDPR standards across their global operations rather than take a piecemeal, region-by-region approach to compliance.

The scope of the EU AI Act and the speed of its development mean we’ll likely see other regulatory bodies take a similar approach in their own AI efforts going forward.

REGULATING A MOVING TARGET

Still, while the EU has moved quickly, AI is moving even faster. Indeed, one of the biggest criticisms of the EU AI Act and similar regulations is that they can’t keep up with the pace of AI development and adoption. Even the timeliest regulation can come to feel long in the tooth before long.

While regulations such as the EU AI Act are critical to curtailing some of the excesses of unfettered AI development, critics have pushed back against the regulation’s impact on AI investment and innovation. Marianne Tordeux Bitker, public affairs chief at France Digitale, for example, told Euronews that the act “has a bittersweet taste” and would create new operational challenges for technology companies, particularly EU-based startups.

“We fear that the text will simply create additional regulatory barriers that will benefit American and Chinese competition and reduce our opportunities for European AI champions to emerge,” she said.

IMPLICATIONS BEYOND BIG TECH

While many of the new rules largely focus on the technology companies developing foundation models, it’s clear that every organization using or building on top of these platforms must also take note.

For example, under the law’s provisions, deepfakes and other AI-generated media — including video and audio — must be labeled. In addition, users must be told when they are interacting with AI-powered systems such as chatbots. This rule has implications for any organization considering how it integrates AI tools into its creative processes.

WHAT BUSINESSES CAN DO

While companies have some time before the entirety of the EU AI Act goes into effect in mid-2026, it’s clear that now is the time to prepare. There are a few steps that every organization should take. Many of these should feel familiar to any leader who navigated the early days of the GDPR era:

Launch an AI compliance task force

Staffed with a mix of legal, technical, and operations experts, this team would be charged with ensuring that your organization adheres to all existing and emerging frameworks.

Evaluate your AI risks, threats, and opportunities

These include not just internal and external AI risks, but also the reputational and brand risks resulting from noncompliance with regulation.

Develop an internal AI compliance framework

Such a framework will help guide how your organization develops and applies AI.

Visit Edelman.AI to learn how our AI experts can help you get started.

Josh Turbill is Head of Digital, EMEA.