The EU Artificial Intelligence Act (“AI Act”) is the first comprehensive horizontal legal framework for artificial intelligence. It applies a risk-based approach, prohibiting a small set of AI practices outright, imposing strict obligations on a defined list of high-risk AI systems, layering transparency duties on certain other systems, and creating a dedicated regime for general-purpose AI (GPAI) models including stricter rules for those with systemic risk.
The AI Act applies to providers placing AI systems or GPAI models on the EU market, deployers using AI systems in the EU, and importers and distributors in the supply chain. It also applies to providers and deployers established outside the EU where the output of the AI system is used in the EU. Obligations attach to the role an organization plays and the risk category of the AI system, not to the size or location of the organization.
Software product teams should treat the AI Act as an operational constraint that touches product design, data governance, model development, documentation, transparency, post-market monitoring and procurement. The bulk of substantive obligations apply from 2 August 2026, but prohibited practices and AI literacy duties have applied since 2 February 2025, and obligations for GPAI models and most penalty rules have applied since 2 August 2025. A simplification proposal published by the Commission on 19 November 2025 (the AI Omnibus) is currently in the legislative process and may further adjust the high-risk timeline.
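For teams tracking these phases in compliance tooling, the dates can be encoded directly. The sketch below (in Python) is a minimal illustration of the pre-Omnibus timeline described in this section; the regime labels are informal shorthand invented here, not terms from the Regulation.

```python
from datetime import date

# Pre-Omnibus applicability dates (see the timeline later in this document).
# Regime labels are informal shorthand, not terms from the Regulation.
APPLICABILITY = {
    "prohibitions_and_ai_literacy": date(2025, 2, 2),
    "gpai_and_governance": date(2025, 8, 2),
    "general_incl_annex_iii_high_risk": date(2026, 8, 2),
    "annex_i_embedded_high_risk": date(2027, 8, 2),
}

def regimes_in_force(on: date) -> list[str]:
    """Return which obligation phases already apply on a given date."""
    return [r for r, start in APPLICABILITY.items() if on >= start]

print(regimes_in_force(date(2025, 12, 1)))
# ['prohibitions_and_ai_literacy', 'gpai_and_governance']
```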
Who does the EU AI Act apply to?
The AI Act assigns obligations to operators along the AI value chain. The role an organization plays determines which obligations apply:
- Providers that place an AI system or a GPAI model on the EU market or put an AI system into service in the EU under their own name or trademark, regardless of whether the provider is established inside or outside the EU.
- Deployers that use an AI system under their authority in the EU, except where the use is in the course of a personal non-professional activity. Deployer obligations are lighter than provider obligations, but become substantial for high-risk AI systems and for certain transparency-triggering uses.
- Importers and distributors that place AI systems on the EU market or make them available, and authorised representatives appointed by non-EU providers. These actors must verify that providers have completed the relevant compliance steps before the system enters the supply chain.
- Providers and deployers established in third countries where the output produced by the AI system is used in the EU, which extends the AI Act’s reach beyond organizations physically present in the Union.
Software product teams frequently change role without realising it. A team that fine-tunes a third-party general-purpose AI model and integrates it into its own product, or substantially modifies an AI system already on the market, can become a provider in its own right with the full set of provider obligations attached. A team that builds an HR-screening or credit-scoring feature on top of a GPAI model is likely to be operating a high-risk AI system within the meaning of Annex III, even if it considers itself only a software vendor. The risk classification, not the company’s self-description, determines the regime that applies.
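A minimal sketch can make this triage concrete. The helper below is hypothetical and deliberately simplified: the actual tests in Articles 3, 6 and 25 and Annex III are more nuanced, the field names and area list are invented for illustration, and the output is a flag for legal review, not a classification.

```python
from dataclasses import dataclass

# Hypothetical, simplified triage helper. The real tests in Articles 3, 6
# and 25 and Annex III are more nuanced and need legal review.

ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

@dataclass
class AIFeature:
    own_name_or_trademark: bool   # shipped under your own name or trademark
    substantially_modified: bool  # e.g. a heavy fine-tune changing the intended purpose
    use_case_area: str            # e.g. "employment" for HR screening

def likely_provider(f: AIFeature) -> bool:
    # Placing a system on the market under your own name, or substantially
    # modifying one already on the market, can make you the provider (Art. 25).
    return f.own_name_or_trademark or f.substantially_modified

def likely_high_risk(f: AIFeature) -> bool:
    # Annex III use cases are presumptively high-risk (Art. 6(2)),
    # subject to the Art. 6(3) derogation for narrow or preparatory tasks.
    return f.use_case_area in ANNEX_III_AREAS

hr_screening = AIFeature(own_name_or_trademark=True,
                         substantially_modified=True,
                         use_case_area="employment")
print(likely_provider(hr_screening), likely_high_risk(hr_screening))  # True True
```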
What are the most important obligations under the EU AI Act?
The AI Act takes a risk-based approach. The obligations that apply depend on the risk category of the AI system or model and the role of the operator. The principal regimes are:
- Prohibited AI practices (Article 5): a defined set of practices is banned outright, including certain harmful manipulation, exploitation of vulnerabilities, social scoring leading to detrimental or unfavourable treatment, untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases, emotion recognition in workplaces and educational institutions outside narrow exceptions, biometric categorisation inferring sensitive attributes, and real-time remote biometric identification in publicly accessible spaces for law enforcement purposes outside narrow exceptions.
- High-risk AI systems (Articles 6 to 49 and Annex III): providers must implement a documented risk management system, ensure data governance and quality, prepare technical documentation, enable record-keeping (logging), provide transparency and instructions for use, ensure human oversight, achieve accuracy, robustness and cybersecurity, undergo conformity assessment, register the system in the EU database, affix CE marking, and operate post-market monitoring and incident reporting. Deployers of high-risk AI systems have their own obligations, including human oversight, monitoring, and in defined cases a fundamental rights impact assessment.
- Transparency obligations (Article 50): providers and deployers of certain AI systems must inform persons that they are interacting with an AI system (chatbots), label synthetic audio, image, video or text content as artificially generated or manipulated, disclose deep fakes, and inform individuals subject to emotion recognition or biometric categorisation systems.
- General-purpose AI models (Articles 51 to 56): providers must prepare and maintain technical documentation, publish a sufficiently detailed summary of the content used for training, put in place a policy to comply with EU copyright law, and provide information to downstream providers integrating the model into their AI systems. GPAI models with systemic risk attract additional obligations including model evaluation, systemic risk assessment and mitigation, serious incident reporting, and adequate cybersecurity protection.
- AI literacy (Article 4): providers and deployers of any AI system must ensure a sufficient level of AI literacy among staff dealing with the operation and use of AI systems.
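Teams that track these regimes in a compliance backlog sometimes encode them as data. The mapping below is an illustrative, non-exhaustive scaffold: the duty labels paraphrase the Articles cited above and are not the legal text, and a single system can fall under several regimes at once.

```python
# Illustrative, non-exhaustive scaffold for a compliance tracker; the duty
# labels paraphrase the Articles cited above and are not the legal text.
OBLIGATIONS = {
    "prohibited": ["do not place on the market or use (Art. 5)"],
    "high_risk_provider": [
        "risk management system", "data governance", "technical documentation",
        "logging", "instructions for use", "human oversight",
        "accuracy, robustness and cybersecurity", "conformity assessment",
        "EU database registration", "CE marking", "post-market monitoring",
    ],
    "transparency": ["disclose AI interaction", "label synthetic content",
                     "disclose deep fakes"],
    "gpai_provider": ["technical documentation", "training-content summary",
                      "copyright policy", "downstream information"],
}

def checklist(regimes: list[str]) -> list[str]:
    # A system can sit in several regimes at once (e.g. high-risk + transparency).
    return sorted({duty for r in regimes for duty in OBLIGATIONS[r]})

print(checklist(["high_risk_provider", "transparency"]))
```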
How is compliance with the EU AI Act supervised and enforced?
The AI Act is a directly applicable Regulation, so no Member State transposition is required. Each Member State must designate at least one notifying authority and one market surveillance authority and communicate a single point of contact to the Commission. Market surveillance authorities supervise compliance in their territory, can require corrective action, withdraw products from the market, and impose national administrative fines. For high-risk AI systems used in law enforcement, border management, the administration of justice and democratic processes, Member States must designate as market surveillance authority either their data protection authorities or another authority operating under equivalent safeguards. The European Artificial Intelligence Board, composed of Member State representatives, supports consistent application across the Union, assisted by an Advisory Forum and a Scientific Panel of independent experts.
The European AI Office, established within the European Commission, supervises GPAI models directly. It can request information from providers, evaluate models (in cooperation with the Scientific Panel), require mitigation measures for systemic risks, and impose fines on GPAI providers. Notified bodies, designated by Member State notifying authorities, perform third-party conformity assessments where the high-risk regime requires them. Compliance with harmonised standards adopted under Article 40 creates a presumption of conformity for the relevant requirements, which is the route most providers will take in practice. Codes of practice (notably the General-Purpose AI Code of Practice) provide an interim mechanism for demonstrating compliance pending the adoption of standards.
What are the consequences of (non)compliance with the EU AI Act?
The AI Act has commercial, regulatory and product-level consequences. For software product teams, the practical exposure is rarely confined to fines: it is the combination of market access, customer expectations, conformity-assessment timelines and product withdrawal risk that drives investment in compliance.
License to operate: Market surveillance authorities can require non-compliant AI systems to be brought into compliance, withdrawn from the market, or recalled. Enterprise and public-sector customers in the EU increasingly require evidence of conformity (technical documentation, instructions for use, conformity assessment outcomes, declarations of conformity, registration in the EU database) as a precondition to procurement. Non-compliance translates directly into lost market access in the Union.
Financial valuation: AI compliance posture is now a recurring item in due-diligence questionnaires for investment, acquisition and insurance underwriting. The presence or absence of a documented AI risk classification, a defensible high-risk analysis where relevant, model documentation and a post-market monitoring framework affects deal terms and timelines. Regulatory uncertainty during the phased application of the AI Act materially increases this scrutiny.
Compliance overhead: The AI Act introduces sustained compliance overhead for any organization developing or deploying high-risk AI systems or providing GPAI models. This includes designing, operating and documenting a risk management system, training-data governance, technical documentation, logging, post-market monitoring and serious incident reporting, fundamental rights impact assessments where applicable, and AI literacy programmes for staff. A compliance-by-design approach — building these requirements into engineering, product and data processes from the outset — reduces but does not eliminate this overhead, and is materially cheaper than retrofitting after a market surveillance enquiry.
Investigations: Market surveillance authorities and the AI Office can open inquiries into providers, deployers and GPAI providers. Inquiries can include requests for information, access to training data, model evaluation, on-site inspections and demands for corrective measures. Responding adequately requires capacity from legal, engineering, data science and product teams and can disrupt normal product development while an investigation runs.
Fines/penalties: The AI Act creates three tiers of administrative fines, in each case the higher of a fixed amount or a percentage of worldwide annual turnover for the preceding financial year: up to EUR 35 million or 7% for breaches of the prohibitions on AI practices in Article 5, up to EUR 15 million or 3% for breaches of most other provider, deployer, importer, distributor and notified body obligations, and up to EUR 7.5 million or 1% for supplying incorrect, incomplete or misleading information to authorities or notified bodies. For small and medium-sized enterprises, including start-ups, each fine is capped at whichever of the two amounts is lower. The European Commission can impose fines on providers of GPAI models of up to EUR 15 million or 3% of worldwide annual turnover.
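The "higher of" rule, and its inversion for SMEs, is simple arithmetic. The sketch below works through the ceilings with assumed turnover figures; these are statutory maxima, and actual fines are set case by case within them.

```python
# Worked example of the fine ceilings described above, with assumed turnover
# figures. These are maximum amounts, not the fines actually imposed.
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_turnover_eur: float, sme: bool = False) -> float:
    fixed, pct = TIERS[tier]
    amounts = (fixed, pct * worldwide_turnover_eur)
    # Standard rule: whichever is higher; for SMEs and start-ups: whichever is lower.
    return min(amounts) if sme else max(amounts)

print(max_fine("prohibited_practices", 2_000_000_000))       # 140,000,000 (7% > EUR 35m)
print(max_fine("other_obligations", 100_000_000, sme=True))  # 3,000,000 (3% < EUR 15m)
```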
Liabilities: AI Act non-compliance interacts with general product liability and data protection regimes. Failures in AI systems that cause harm may trigger claims under the EU Product Liability Directive (Directive (EU) 2024/2853) and national tort law. Where an AI system processes personal data, GDPR obligations apply in parallel and breaches may lead to enforcement action by both data protection authorities and AI market surveillance authorities. Misrepresentation of conformity status can give rise to private-law claims and unfair commercial practices proceedings.
Informal name: EU AI Act / AI Act
Formal name: Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)
Jurisdiction: The Regulation applies directly across the European Union and does not require transposition into national law. It has significant extraterritorial effect: it applies to providers and deployers established in third countries where the output of the AI system is used in the EU, and to providers placing AI systems or GPAI models on the EU market regardless of where they are established. The Regulation does not affect Member States’ competences in national security or military and defence applications, which are excluded from its scope.
Adoption date: 13 June 2024
Publication date: 12 July 2024 (official publication in OJ L, 2024/1689)
Applicability date(s):
- 2 February 2025: Chapters I (general provisions) and II (prohibited AI practices), and AI literacy duties under Article 4, apply.
- 2 August 2025: governance provisions, obligations for GPAI models (Chapter V), confidentiality rules, and most penalty provisions apply; Member States must have designated their national competent authorities.
- 2 August 2026: the AI Act becomes generally applicable, including the high-risk AI system requirements for systems listed in Annex III, transparency obligations under Article 50, and enforcement powers in respect of GPAI models. Each Member State must have at least one operational AI regulatory sandbox.
- 2 August 2027: high-risk AI systems that are safety components of products or are themselves products covered by the harmonised Union legislation listed in Annex I must comply; providers of GPAI models placed on the market before 2 August 2025 must achieve full compliance.
Enforcement date: The Regulation entered into force on 1 August 2024 (the twentieth day following publication). National enforcement powers and most penalty provisions have applied since 2 August 2025. The Commission’s proposal for an “AI Omnibus” simplification package (COM/2025/837 final, 19 November 2025) would adjust the high-risk timeline by tying application to the availability of harmonised standards and support tools, with proposed long-stop dates of 2 December 2027 for Annex III high-risk systems and 2 August 2028 for product-embedded high-risk systems. The Omnibus is currently in the legislative process and may further evolve before adoption.