EU Artificial Intelligence Act: Everything You Need to Know [2026]

AI has fundamentally changed how we think about data protection. Read our in-depth guide to the EU's first-of-its-kind AI legislation.

What Is the EU Artificial Intelligence Act? 

The EU Artificial Intelligence Act (AI Act) is a legislative framework to regulate the development, deployment, and usage of artificial intelligence across the European Union. It aims to establish clear requirements and obligations to ensure that AI systems are used safely, transparently, and in alignment with fundamental rights. 

The regulation covers a range of AI applications and classifies them based on their potential risk to individuals and society, introducing varying levels of compliance obligations accordingly. First proposed by the European Commission in 2021 and formally adopted in 2024, the Act is the first broad, legally binding AI regulatory framework of its kind, with jurisdiction covering both EU-based organizations and external entities offering AI products and services within the EU. 

The AI Act takes a risk-based approach, demanding stricter requirements for high-risk applications, while imposing lighter measures on systems considered to pose limited or minimal risk.

Who Does It Apply To? 

The EU AI Act applies not only to organizations based within the European Union, but also to companies outside the EU that provide AI systems or services within the EU market. This extraterritorial scope ensures that all AI systems affecting EU citizens are subject to the same regulatory standards, regardless of the provider’s location.

Specifically, the regulation covers three categories of actors: 

  1. Providers who develop or place AI systems on the EU market.
  2. Deployers (users) of AI systems operating within the EU.
  3. Importers and distributors of AI technologies entering the EU.

These categories include both commercial and public sector entities.

For example, a U.S.-based company offering AI-powered recruitment tools to clients in France must comply with the AI Act’s provisions. Similarly, an AI service embedded in a product sold in the EU must meet regulatory requirements, even if the system is developed and hosted outside the region. This global reach is designed to ensure consistent protections for individuals across the EU and to prevent regulatory arbitrage.

Why the EU AI Act Was Created and What Problems It Addresses 

The proliferation of artificial intelligence systems has raised significant concerns about safety, ethics, human rights, and the possibility of AI-driven harms. Practices such as biased decision-making, lack of transparency, misuse in surveillance, and the opaque nature of complex models have contributed to heightened scrutiny. The EU AI Act was created in response to these challenges, aiming to mitigate the risks AI poses to individuals and society.

Beyond risk mitigation, the Act seeks to harmonize the regulatory landscape for AI across the EU, addressing a fragmented regulatory environment that hinders cross-border innovation and market access. It establishes clear legal guidelines on what is permissible, what is required, and what is outright banned, providing concrete, enforceable standards for organizations.

To further support innovation, especially among smaller players, the EU AI Act introduces regulatory sandboxes: controlled environments where AI developers, particularly startups and SMEs, can test their systems under the supervision of competent authorities. These sandboxes allow for real-world experimentation while ensuring compliance with legal and ethical standards.

This approach lowers entry barriers by offering legal clarity early in the development process and helps innovators align with regulatory expectations before deploying their products at scale. By supporting safe and structured testing, the EU aims to encourage responsible innovation without stifling technological progress.

EU AI Act: Risk-Based Classification of AI Systems 

Unacceptable Risk

AI systems in this category are explicitly prohibited under the AI Act due to their potential to cause significant harm or violate fundamental rights. These include:

  • Manipulative or deceptive AI that distorts user behavior using subliminal techniques, impairing decision-making in a way that causes significant harm.
  • AI that exploits vulnerabilities of specific groups, such as children, people with disabilities, or those in socio-economically disadvantaged positions, for manipulative purposes.
  • Social scoring systems that classify people based on behavior or personal traits in ways that lead to unfavorable or unjustified treatment.
  • Biometric categorization systems that infer sensitive attributes like race, religion, or sexual orientation, unless used for lawful dataset filtering or by law enforcement under strict limits.
  • Predictive policing tools that assess the risk of criminal behavior solely based on personality traits or profiling.
  • Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.
  • Emotion recognition systems in workplaces or schools, unless used for medical or safety reasons.
  • Real-time remote biometric identification (RBI) by law enforcement in publicly accessible spaces, except in narrowly defined cases (e.g., searching for missing persons or preventing imminent terrorist threats).

Even in allowed law enforcement cases, strict safeguards apply: a fundamental rights impact assessment, prior authorization, and registration in an EU database are required, with exceptions only for urgent situations subject to follow-up requirements.

High Risk

High-risk AI systems are the primary focus of the AI Act and are subject to detailed regulatory requirements. These systems are either:

  1. Used as a product, or as a safety component of a product, covered by EU product safety legislation listed in Annex I and required to undergo a third-party conformity assessment, or
  2. Used in high-impact sectors and applications listed in Annex III of the AI Act.

Annex III includes use cases such as:

  • Biometrics: remote biometric identification (other than identity verification), biometric categorization, and emotion recognition.
  • Critical infrastructure: AI managing traffic, energy, or digital infrastructure.
  • Education: systems used for admission, grading, or behavior monitoring.
  • Employment: recruitment, promotion, monitoring, or performance evaluation systems.
  • Essential services: systems assessing eligibility for benefits, healthcare triage, emergency call classification, or insurance pricing.
  • Law enforcement: systems for criminal risk assessment, evidence evaluation, or polygraphs.
  • Migration and border control: systems used in visa processing or health assessments.
  • Judicial and democratic processes: tools interpreting legal facts or influencing voting behavior.

An AI system listed in Annex III may avoid the high-risk classification if it does not materially influence decisions or pose a significant risk to health, safety, or fundamental rights; however, any Annex III system that performs profiling of natural persons is always considered high risk.
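
To make this two-route logic easier to follow, here is a minimal, illustrative sketch of how an internal triage tool might encode it. The profile fields, category labels, and simplified carve-out check are assumptions for illustration; they are not a substitute for a legal assessment.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Simplified profile of an AI system for internal risk triage (illustrative only)."""
    is_annex_i_safety_component: bool   # safety component in a product under Annex I legislation
    needs_third_party_assessment: bool  # the product requires third-party conformity assessment
    annex_iii_use_case: str | None      # e.g. "employment", "education", or None
    performs_profiling: bool            # profiling of natural persons
    materially_influences_decisions: bool

def triage_risk_category(s: AISystemProfile) -> str:
    """Rough triage into an AI Act risk bucket, mirroring the two high-risk
    routes described above plus the Annex III carve-out. Real classification
    requires legal review."""
    if s.is_annex_i_safety_component and s.needs_third_party_assessment:
        return "high-risk (Annex I route)"
    if s.annex_iii_use_case is not None:
        # Profiling of natural persons keeps an Annex III system high risk.
        if s.performs_profiling or s.materially_influences_decisions:
            return "high-risk (Annex III route)"
        return "candidate for Annex III carve-out - document the assessment"
    return "not high-risk under these routes - check limited/minimal-risk duties"

# Example: an AI recruitment screening tool
recruiting_tool = AISystemProfile(
    is_annex_i_safety_component=False,
    needs_third_party_assessment=False,
    annex_iii_use_case="employment",
    performs_profiling=True,
    materially_influences_decisions=True,
)
print(triage_risk_category(recruiting_tool))  # high-risk (Annex III route)
```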

Provider obligations include:

  • A risk management system across the lifecycle
  • Governance of training, validation, and testing data to ensure representativeness and quality
  • Technical documentation to demonstrate compliance
  • Systems for automatic logging of events
  • Clear instructions for deployers
  • Design features for human oversight
  • Adequate accuracy, robustness, and cybersecurity
  • A documented quality management system

High-risk rules apply to EU-based and non-EU providers whose AI systems or outputs are used within the EU. Deployers (users) have fewer obligations, but still must ensure appropriate use and monitoring.

Limited Risk

Limited-risk systems are allowed but are subject to transparency obligations. This includes systems that interact directly with humans, such as:

  • Chatbots
  • Deepfakes or other synthetic media generators

The key requirement is that users must be informed when they are interacting with an AI system. No further compliance measures are required under this category.

Minimal or No Risk

AI systems in this category are considered to present negligible risks and are not subject to any specific regulation under the AI Act. Examples include:

  • Spam filters
  • AI in video games
  • Recommendation engines for entertainment

At the time of the Act’s proposal (2021), the majority of AI applications fell into this category. However, some use cases may shift into higher risk categories as AI capabilities evolve, especially generative AI models, which may produce outputs with broader societal impacts.

General-Purpose AI (GPAI)

General-purpose AI refers to models that are trained on large datasets using self-supervised methods and are capable of performing a range of tasks. These models can be deployed directly or integrated into other AI systems, including high-risk applications.

All GPAI providers must:

  • Produce technical documentation, including training and evaluation details
  • Provide downstream providers with clear usage information and limitations
  • Respect the EU Copyright Directive
  • Publish a summary of the datasets used for training

If the GPAI model is released under a free and open license (with accessible weights, architecture, and usage rights), the provider is only required to comply with copyright and publish the dataset summary unless the model presents systemic risk.

A model is presumed to present systemic risk if its training used more than 10²⁵ floating point operations (FLOPs). These providers must notify the European Commission and may argue that the model does not present systemic risks. The Commission or its expert panel may confirm systemic risk based on potential high-impact capabilities.
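
Because the systemic-risk presumption hinges on a single compute figure, a rough back-of-the-envelope check is often the first step. The sketch below uses the common 6 x parameters x training-tokens approximation for dense-model training compute; that heuristic is an assumption for illustration and is not part of the Act.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # AI Act threshold for presumed systemic risk

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate using the common ~6 * N * D heuristic
    for dense transformer training (an approximation, not a regulatory formula)."""
    return 6.0 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    return estimated_training_flops(n_parameters, n_training_tokens) > SYSTEMIC_RISK_FLOPS

# Example: a hypothetical 70B-parameter model trained on 15 trillion tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"~{flops:.2e} FLOPs, presumed systemic risk: {flops > SYSTEMIC_RISK_FLOPS}")
# ~6.3e24 FLOPs, below the 1e25 threshold in this example
```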

GPAI models with systemic risk must also:

  • Conduct adversarial testing and model evaluations
  • Identify and mitigate systemic risks
  • Track, document, and report serious incidents
  • Ensure cybersecurity protections

Compliance can be demonstrated by voluntarily following a code of practice, which will eventually be replaced by harmonized EU standards. Providers not adhering to these codes must propose alternative compliance mechanisms for approval.

EU AI Act Obligations and Compliance Requirements 

Obligations For Providers/Deployers of High-Risk AI

Providers of high-risk AI systems (those developing or placing the system on the market) are subject to the most extensive obligations under the Act. These include both EU-based and non-EU providers whose systems are used in the EU.

Key obligations for providers (Articles 8–17):

  • Risk management system: Maintain a documented process to identify, assess, and mitigate risks throughout the system’s lifecycle.
  • Data governance: Use training, validation, and testing datasets that are relevant, representative, and as complete and error-free as possible for the intended purpose.
  • Technical documentation: Provide comprehensive documentation to demonstrate compliance and support oversight by authorities.
  • Logging and record-keeping: Enable automated recording of events to track system behavior and modifications.
  • Instructions for use: Provide clear, actionable information to deployers to help them use the system in a compliant and safe manner.
  • Human oversight: Design systems to allow appropriate human monitoring and intervention.
  • Accuracy, robustness, and cybersecurity: Ensure performance reliability under normal and foreseeable conditions.
  • Quality management system: Implement procedures and controls to maintain compliance across development and deployment.

Deployers (users) of high-risk AI (those operating the systems in a professional capacity) also have obligations, though less extensive than providers. These apply both to deployers within the EU and to third-country deployers when the output is used in the EU.

Deployers must:

  • Use the system according to the provider’s instructions
  • Monitor system performance and intervene where needed
  • Ensure that human oversight mechanisms are in place and used appropriately

Obligations For Limited-Risk Systems

Limited-risk AI systems such as chatbots or deepfake generators are allowed under the Act but must meet specified transparency obligations.

Providers and deployers of limited-risk systems must clearly inform users that they are interacting with an AI system. For example:

  • Disclosing when a chatbot is not a human
  • Identifying synthetic or manipulated content (e.g., labeling deepfakes)

No further technical or compliance measures are required beyond this transparency requirement.
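
In practice, the disclosure duty for a chatbot can be as simple as a clear notice at the start of the conversation. The snippet below is a minimal sketch of such a wrapper; the wording, function name, and session logic are assumptions, since the Act requires that users be informed but does not prescribe specific text.

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def wrap_with_disclosure(reply: str, first_message: bool) -> str:
    """Prepend an AI-interaction notice to the first chatbot reply in a session.

    Illustrative only: the Act requires informing users that they are
    interacting with an AI system; it does not mandate specific wording.
    """
    return f"{AI_DISCLOSURE}\n\n{reply}" if first_message else reply

print(wrap_with_disclosure("Hi! How can I help with your order?", first_message=True))
```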

Obligations For GPAI Providers

General-purpose AI (GPAI) providers must meet different obligations based on the licensing model (open vs. commercial) and whether the system meets the threshold for systemic risk.

All GPAI providers must:

  • Prepare and maintain technical documentation, including the training and testing processes and evaluation results
  • Provide usage information to downstream providers to enable their compliance
  • Respect EU copyright law
  • Publish a summary describing the datasets used to train the model

Free and open license GPAI providers (e.g., those releasing model weights, architecture, and usage rights publicly) only need to:

  • Comply with copyright requirements
  • Publish the training data summary

unless their model is classified as systemic.

Systemic risk GPAI models (those trained with over 10²⁵ FLOPs or deemed high-impact by the Commission) face additional requirements:

  • Conduct model evaluations, including adversarial testing
  • Assess and mitigate systemic risks
  • Track, document, and report serious incidents
  • Implement cybersecurity protections

All GPAI providers may demonstrate compliance by adhering to a code of practice, which will eventually be replaced by harmonized standards. Those not using the code must submit alternative compliance mechanisms for approval by the Commission.

Exceptions

There are explicit exceptions in the EU AI Act for AI systems developed solely for research, military, and national security purposes. These categories are exempted from many standard requirements to support innovation and operational effectiveness in strategic domains. Similarly, AI tools developed and used exclusively for personal, non-commercial activities may have lighter or no regulatory obligations.

Despite these exceptions, ethical principles and existing legal requirements on data protection, anti-discrimination, or product safety may still apply. While exemptions are clearly defined, organizations must ensure they do not misuse these categories to inappropriately circumvent regulations. Regulatory authorities have the power to investigate questionable uses and reclassify systems where necessary.

EU AI Act Obligations Summary Table

The following table summarizes the obligations of different entities under the EU AI Act.

| Category | Entity | Key Obligations |
| --- | --- | --- |
| High-Risk AI | Provider | Maintain a risk management system; ensure data governance; create technical documentation; enable logging; provide instructions for use; design human oversight mechanisms; ensure accuracy, robustness, and cybersecurity; implement a quality management system |
| High-Risk AI | Deployer | Follow provider instructions; monitor system performance; ensure and apply human oversight mechanisms |
| Limited-Risk AI | Provider & Deployer | Inform users they are interacting with AI; label synthetic content (e.g., deepfakes) |
| General-Purpose AI | All providers | Maintain technical documentation; share usage information with downstream providers; comply with EU copyright; publish dataset summaries |
| General-Purpose AI | Free/open-license GPAI providers | Publish dataset summary; comply with copyright (if not systemic) |
| General-Purpose AI | Systemic-risk GPAI providers | Conduct model evaluations; mitigate systemic risks; report serious incidents; implement cybersecurity protections |
| Exempt categories | N/A | Research, military, national security, and personal use may be exempt, but must still follow other applicable laws and ethical norms |

EU AI Act: Interaction with Other Regulations 

The EU Artificial Intelligence Act intersects with several existing and emerging EU laws and regulatory frameworks, requiring organizations to coordinate compliance across multiple regimes.

EU AI Act and data protection (GDPR)

The AI Act explicitly complements and operates alongside the General Data Protection Regulation (GDPR). Both frameworks aim to protect fundamental rights, but they address different concerns: the GDPR focuses on personal data processing and privacy, while the AI Act focuses on the safety, transparency, and risk management of AI systems. 

When an AI system processes personal data, entities must comply with both the GDPR and the AI Act, and overlapping requirements such as risk assessments, documentation, and transparency can often be coordinated. 

The AI Act even refers to GDPR requirements: for example, high-risk AI systems’ conformity assessments must take data protection into account where relevant. Regulatory authorities for data protection in many Member States also serve as market surveillance authorities for the AI Act, amplifying enforcement synergies.

Relation to digital services regulation

The EU’s broader digital rulebook includes laws like the Digital Services Act (DSA), which governs online platform transparency, content moderation, and user protections in digital environments. While the DSA does not regulate AI per se, its transparency and accountability obligations can overlap with AI systems deployed on platforms subject to DSA rules, especially when AI influences content delivery or user interaction.

Intellectual property and copyright

Although the AI Act’s primary focus is on safety and rights protection, it interacts with existing intellectual property frameworks. Providers of general-purpose AI models must respect EU copyright law and publish summaries of their training datasets, balancing innovation and rights protection. Voluntary codes of practice under the AI Act also emphasize compliance with copyright and safety obligations to guide firms on practical implementation.

Product safety and sectoral legislation

The AI Act builds on established EU product safety legislation. High-risk AI systems that function as safety components in regulated products (such as machinery or medical devices) are subject to conformity assessments under both the AI Act and relevant sectoral laws. This layered approach means that organizations must integrate AI risk management with existing product compliance frameworks.

EU AI Act Fines and Penalties 

The EU AI Act introduces a tiered system of administrative fines, with penalties scaled according to the severity of the violation and the size of the organization.

  • Prohibited AI practices: The most serious violations (such as the use of banned systems involving manipulative techniques, social scoring, or unlawful biometric surveillance) can lead to fines of up to €35 million or 7% of the company’s total worldwide annual turnover, whichever is higher.
  • Noncompliance with high-risk AI requirements: For violations involving the use, development, or deployment of high-risk AI systems (such as failing to implement risk management, human oversight, or data quality safeguards) the maximum fine is €15 million or 3% of worldwide annual turnover, whichever is higher.
  • Supplying misleading information: Organizations that provide incorrect, incomplete, or misleading information to supervisory authorities may be fined up to €7.5 million or 1% of global turnover, whichever is higher.

Special provisions apply to start-ups and small or medium-sized enterprises (SMEs). For these organizations, fines are capped at the lower of the two amounts (fixed amount or turnover percentage) specified for each type of violation.
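
To make the "whichever is higher" rule (and the SME "whichever is lower" cap) concrete, here is a small sketch that computes the applicable maximum fine for each tier described above. The tier names are shorthand invented for the example; the amounts and percentages are those listed in this section.

```python
# (fixed cap in euros, percentage of worldwide annual turnover)
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_noncompliance": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum administrative fine for a violation tier.

    Larger companies face the higher of the fixed cap and the turnover
    percentage; start-ups and SMEs are capped at the lower of the two.
    Illustrative sketch of the rule described above, not legal advice.
    """
    fixed_cap, pct = FINE_TIERS[tier]
    turnover_based = pct * worldwide_turnover_eur
    return min(fixed_cap, turnover_based) if is_sme else max(fixed_cap, turnover_based)

# A large provider with EUR 2 billion turnover using a prohibited system:
print(max_fine("prohibited_practice", 2e9))                # 140,000,000 (7% of turnover)
# An SME with EUR 50 million turnover in the same situation:
print(max_fine("prohibited_practice", 50e6, is_sme=True))  # 3,500,000 (lower than the EUR 35M cap)
```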

Timeline for Enforcement

The EU AI Act is being rolled out in phases rather than becoming fully enforceable all at once. After publication and entry into force, different parts of the law, including obligations and penalties, take effect on specific future dates tied to the type of requirement and AI system involved.

The AI Act entered into force on 1 August 2024 after its publication in the EU Official Journal, an essential step before any regulatory obligations could begin.

Phased implementation:

  • 2 February 2025: Prohibitions on unacceptable-risk AI practices and AI literacy obligations began to apply.
  • 2 August 2025: Many governance and general-purpose AI (GPAI) obligations became applicable. This includes transparency, documentation, and compliance requirements for GPAI systems.
  • 2 August 2026: The majority of the AI Act’s provisions, especially for high-risk systems, begin to apply, and enforcement powers for many obligations become operational.

Start of penalties and enforcement 

Administrative fines and other sanctions can be imposed once specific obligations are in effect and competent authorities are empowered to enforce them. Most penalties can be applied from 2 August 2025 onward for governance and GPAI violations, while enforcement of other requirements (such as those for high-risk systems) aligns with their respective application dates, often from August 2026 or later.

Full compliance horizon

The AI Act’s full rollout extends over several years, with key compliance deadlines continuing through 2027 for high-risk and legacy systems. This staged approach gives organizations time to adapt to complex obligations before enforcement becomes fully effective.

Best Practices to Achieve EU AI Act Compliance 

Here are some of the ways that organizations operating in the EU can ensure compliance with the AI Act.

1. Strengthen Data Resilience and Auditability Through Robust Backup/DR

To achieve compliance with the EU AI Act, organizations must prioritize resilient data management and recovery strategies. Regular, comprehensive backups and disaster recovery (DR) planning are critical for AI systems, especially those categorized as high risk. Backups ensure data integrity, enable traceability, and provide an auditable history necessary for post-market evaluations or regulatory reviews.

Audit trails and robust recordkeeping are equally important, supporting transparency and accountability throughout an AI system’s lifecycle. Organizations should implement automated logging mechanisms for monitoring dataset changes, system outputs, and decision paths.
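
As a starting point, the automated logging described above can be implemented as append-only, structured records. The sketch below captures dataset changes and model decisions as JSON Lines entries; the field names, event types, and file-based store are assumptions for illustration, not a prescribed format.

```python
import json
import datetime
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # append-only log file (illustrative)

def log_event(event_type: str, system_id: str, details: dict) -> None:
    """Append a structured, timestamped audit record (e.g. dataset change,
    model decision, human override) to a JSON Lines log."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event_type": event_type,
        "system_id": system_id,
        "details": details,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Examples: a dataset update and an automated decision with a human reviewer
log_event("dataset_change", "cv-screening-v3",
          {"dataset": "applicants_2025_q4", "action": "added 1,200 rows"})
log_event("decision", "cv-screening-v3",
          {"candidate_id": "c-1042", "output": "shortlist", "human_review": True})
```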

2. Establish Formal AI Governance and Accountability

An effective AI governance framework is essential for both compliance and operational success. This involves designating specific leadership and oversight roles to ensure regulatory requirements are met and integrated into all phases of AI development and deployment. Formal governance policies should cover risk management, data quality, security, and ethical standards.

Accountability structures are vital in monitoring ongoing compliance and enabling timely responses to any issues. Regular internal audits, training for developers, and incident response plans should be part of the organization’s compliance toolkit. Documenting governance activities and decision-making processes is also recommended.

3. Inventory and Classify All AI Systems

Organizations must maintain an up-to-date inventory of all AI systems under their control, identifying each system’s purpose, architecture, and risk classification. This inventory forms the basis for applying proper controls consistent with the EU AI Act. By clearly documenting where and how AI technologies are deployed, organizations can assign the appropriate compliance obligations and quickly identify systems subject to stricter regulatory scrutiny.

Regular reviews of this inventory are necessary to capture changes such as model upgrades, repurposing, or new deployments in critical domains. Classification efforts should be consistent, repeatable, and documented, involving cross-functional input from legal, engineering, and data privacy teams.
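
One lightweight way to maintain such an inventory is a structured register recording each system's purpose, owner, and risk classification, which can then be reviewed and exported for audits. The sketch below shows one possible shape for that register; the fields and category labels are assumptions, not a format mandated by the Act.

```python
from dataclasses import dataclass, field, asdict
from enum import Enum
import datetime
import json

class RiskCategory(str, Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"
    GPAI = "general-purpose"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    owner: str                         # accountable team or role
    risk_category: RiskCategory
    annex_iii_area: str | None = None  # e.g. "employment", if applicable
    last_reviewed: str = field(default_factory=lambda: datetime.date.today().isoformat())

inventory = [
    AISystemRecord("cv-screener", "Rank job applicants", "HR Tech", RiskCategory.HIGH, "employment"),
    AISystemRecord("support-chatbot", "Answer customer questions", "CX", RiskCategory.LIMITED),
    AISystemRecord("spam-filter", "Filter inbound email", "IT Ops", RiskCategory.MINIMAL),
]

# Export the register for audits or cross-functional review
print(json.dumps([asdict(r) for r in inventory], indent=2, default=str))
```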

4. Ensure Robust Data Governance and High-Quality Datasets

Maintaining high standards for data governance is imperative for compliant AI operations. Organizations must document data sources, validate dataset accuracy, and implement bias mitigation strategies as part of their pipeline. High-risk systems in particular demand rigorous dataset audits, ongoing data quality control checks, and mechanisms for safely updating and correcting inputs over time.

Comprehensive control over data lineage, including the methods and rationale behind dataset selection, labeling, and usage, enables traceability and supports regulatory reviews. Data governance policies should be integrated into everyday workflows and reviewed continuously to adapt to both emerging risks and evolving compliance obligations.
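
A small automated check run on every dataset revision helps turn these principles into routine practice. Below is a minimal pandas-based sketch that flags columns with excessive missing values and under-represented groups; the thresholds and column names are assumptions chosen for illustration.

```python
import pandas as pd

def dataset_quality_report(df: pd.DataFrame, group_column: str,
                           max_missing_ratio: float = 0.01,
                           min_group_share: float = 0.05) -> dict:
    """Basic data-governance checks: completeness and group representation.

    Thresholds are illustrative; real criteria should come from the documented
    risk management process for the system in question.
    """
    report = {
        "rows": len(df),
        "columns_with_excess_missing": [
            col for col in df.columns
            if df[col].isna().mean() > max_missing_ratio
        ],
        "under_represented_groups": [
            group for group, share in df[group_column].value_counts(normalize=True).items()
            if share < min_group_share
        ],
    }
    report["passed"] = (not report["columns_with_excess_missing"]
                        and not report["under_represented_groups"])
    return report

# Example with a toy applicants dataset
df = pd.DataFrame({
    "age": [34, 29, None, 41, 38, 52],
    "gender": ["f", "m", "m", "f", "m", "m"],
    "score": [0.7, 0.4, 0.9, 0.6, 0.8, 0.5],
})
print(dataset_quality_report(df, group_column="gender"))
```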

5. Build Transparent, Documented Workflows and Technical Documentation

Transparency is a recurring theme throughout the EU AI Act, extending to both internal processes and external communications. Organizations need well-documented technical workflows encompassing model design, data handling, validation, risk assessments, and human oversight implementations. This documentation provides a clear chain of accountability, supports regulatory audits, and demonstrates compliance with the Act’s requirements.

Technical documentation should be updated regularly and made accessible to all relevant stakeholders, including developers, management, and regulators on demand. Workflow transparency helps identify areas for improvement, enables more effective incident response, and fosters trust with both users and oversight bodies.
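
Teams often track this documentation as a simple, versioned checklist so gaps are visible at a glance. The sketch below models the artifact list from this section (model design, data handling, validation, risk assessments, human oversight) as such a checklist; the structure and item names are assumptions, not a prescribed format.

```python
REQUIRED_DOCS = [
    "model design and architecture description",
    "data handling and dataset documentation",
    "validation and testing results",
    "risk assessment and mitigation records",
    "human oversight measures",
    "instructions for use for deployers",
]

def documentation_gaps(available_docs: set[str]) -> list[str]:
    """Return required documentation artifacts that are still missing."""
    return [doc for doc in REQUIRED_DOCS if doc not in available_docs]

on_file = {
    "model design and architecture description",
    "validation and testing results",
    "instructions for use for deployers",
}
for missing in documentation_gaps(on_file):
    print(f"MISSING: {missing}")
```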

Supporting EU AI Act Compliance with N2W

Built-in resilience for AI data and systems.

Why Your Current Backup & DR Strategy Falls Short for the EU AI Act

AI has fundamentally changed how we think about data protection.

When enterprises first moved to the cloud a decade ago, the promise was relatively straightforward: simplified operations, elastic scale, and multiple “nines” of availability. Backup and disaster recovery were largely about protecting static applications and predictable data growth.

Today that model no longer holds. AI systems generate continuous, high-velocity data streams and routinely span terabytes or petabytes. When new data is produced every second, it must be instantly protected, governed, and recoverable. Resilience is no longer only a question of volume; it is also a question of continuous protection, speed, and verifiable recoverability.

But AI also changes the threat landscape in a more unsettling way. The same technologies that power intelligent systems are now being used to attack them. AI makes attacks more precise, more scalable, and more likely to succeed. As a result, resilience is no longer just about preventing incidents, but about assuming compromise, testing recovery in advance, and ensuring systems can recover quickly and safely.

The EU AI Act and the Mandate for Resilience

The EU AI Act makes this shift explicit.

Across the regulation, particularly for high-risk AI systems, the Act emphasizes an enterprise’s obligation to ensure systems can withstand failures, attacks, and faults. In practice, that means designing AI systems with built-in safeguards such as redundancy, failover mechanisms, regular testing, and rapid recovery.

These are just some of the requirements that touch on data protection:

  • High-quality training, validation, and testing data supported by strong data governance (Article 10 – Data and Data Governance)
  • A documented risk management process across the entire AI lifecycle (Article 9 – Risk Management System)
  • Accuracy, robustness, cybersecurity, and consistent performance over time (Article 15 – Accuracy, Robustness and Cybersecurity)
  • Technical redundancy and fail-safe mechanisms, including backups (Article 15)
  • Resilience against errors, faults, and unauthorized data or system manipulation (Article 15)
  • Comprehensive logging and record-keeping for traceability, audits, and recovery scenarios (Article 12 – Record-Keeping)
  • Detailed technical documentation to demonstrate compliance and operational control (Article 11 – Technical Documentation)

N2W: The Bridge Between AI Data Protection and EU AI Act Compliance

N2W provides the operational foundation that connects modern AI environments with compliance-ready backup and disaster recovery.

Built cloud-native from day one, N2W makes it easy to implement proactive risk-reduction processes, enforce consistent protection policies, and establish the resilience required by the EU AI Act—without slowing down fast-moving AI teams.

The following N2W capabilities help enterprises protect AI data and systems while enabling repeatable testing, reporting, and audit readiness:

Key N2W Capabilities Supporting AI Resilience

  1. Continuous, Policy-Driven Backups
    Protect rapidly changing AI datasets with automated backups scheduled as frequently as your environment demands, even every few minutes to minimize data loss and recovery gaps.
  2. Immutable and Encrypted Backup Copies
    Safeguard AI data with tamper-proof, encrypted backups. Object-lock immutability ensures backups cannot be altered or deleted, even in the event of ransomware or insider threats.
  3. Cross-Region and Cross-Account Protection
    Automatically replicate backups across regions and accounts to eliminate single points of failure and strengthen resilience against regional outages or targeted attacks.
  4. Fast, Predictable Recovery Times
    Snapshot-based recovery enables rapid restoration of AI workloads, helping organizations meet aggressive RTOs and maintain operational continuity.
  5. Automated Disaster Recovery Testing
    Regular, non-disruptive DR drills validate that recovery plans actually work. Each test is documented, providing concrete evidence of resilience for internal teams and external auditors.
  6. Built-In Reporting and Audit Readiness
    Detailed logs and compliance-ready reports provide clear proof of backup coverage, recovery success, and operational controls—no manual effort required.
  7. Tag-Based Automation and Policy Control
    Use tags to automatically apply protection policies across dynamic AI environments, ensuring new workloads and datasets are protected the moment they’re created.
  8. Data Sovereignty and Full Customer Control
    As a cloud-native IaaS solution, N2W ensures AI data remains within your chosen jurisdiction, including the EU. Customers retain full ownership and control of their backups. Data is never accessed by N2W or third parties.

See what AI-ready resilience looks like in practice. Try N2W on your AI workloads and see how easily you can protect, recover, and prove resilience at scale.
