Tier 1 — Regulatory Mandate

EU AI Act (High Risk) — Oracle Source

Publisher

European Parliament and Council of the European Union

Version

v1

Last verified

February 15, 2026

Frameworks

EU AI Act

Industries

Applies to all industries

EU AI Act (High Risk) - Overview

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for regulating artificial intelligence systems, adopted by the European Parliament and Council and published in the Official Journal on 12 July 2024 [cite:132][cite:139]. It entered into force on 1 August 2024, with obligations phased in over a multi-year timeline [cite:167]. The Act follows a risk-based approach, classifying AI systems into four tiers — unacceptable risk (prohibited), high risk (regulated), limited risk (transparency obligations), and minimal risk (unregulated) — with the most extensive obligations falling on providers and deployers of high-risk AI systems [cite:139][cite:138]. High-risk AI systems are defined through two routes: (1) AI systems used as safety components of products or as products themselves under EU harmonisation legislation listed in Annex I (e.g., medical devices, machinery, vehicles), and (2) standalone AI systems operating in eight sensitive domains listed in Annex III [cite:141][cite:136]. The Act applies across all industries and all EU Member States, and has extraterritorial reach to providers and deployers outside the EU whose AI systems are placed on the market or affect persons within the EU [cite:139][cite:179].


EU AI Act (High Risk) - What It Is

The EU AI Act is Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, laying down harmonised rules on artificial intelligence [cite:139][cite:132]. It is directly applicable across all 27 EU Member States without transposition, except for provisions requiring Member State action (e.g., designation of competent authorities, penalty rules) [cite:167][cite:176].

The Act's high-risk provisions are concentrated in Chapter III (Articles 6–49), which establishes classification rules, requirements for high-risk AI systems, obligations on providers, deployers, importers, and distributors, conformity assessment procedures, and the EU database for high-risk AI systems [cite:139][cite:145]. Key articles include:

  • Article 6 — Classification rules for high-risk AI systems [cite:141]
  • Article 9 — Risk management system [cite:174][cite:168]
  • Article 10 — Data and data governance [cite:171]
  • Article 11 — Technical documentation [cite:165]
  • Article 12 — Record-keeping (logging) [cite:165]
  • Article 13 — Transparency and provision of information to deployers [cite:165]
  • Article 14 — Human oversight [cite:165]
  • Article 15 — Accuracy, robustness, and cybersecurity [cite:165]
  • Article 16 — Obligations of providers [cite:145]
  • Article 17 — Quality management system [cite:189]
  • Article 26 — Obligations of deployers [cite:188][cite:194]
  • Article 43 — Conformity assessment [cite:193][cite:190]
  • Article 99 — Penalties [cite:149]

The European AI Office, established within the European Commission, oversees GPAI model compliance and coordinates enforcement with national competent authorities designated by each Member State [cite:167][cite:176].


EU AI Act (High Risk) - Who It Applies To

The Act creates obligations for multiple roles in the AI value chain, with the heaviest obligations on providers (developers) of high-risk AI systems [cite:145][cite:139].

Providers

Any natural or legal person, public authority, agency, or other body that develops a high-risk AI system or has one developed and places it on the market or puts it into service under its own name or trademark [cite:145]. Providers bear primary compliance responsibility: conformity assessment, CE marking, technical documentation, QMS, post-market monitoring, and incident reporting [cite:145][cite:190].

Deployers

Any natural or legal person, public authority, agency, or other body using a high-risk AI system under its authority (except for personal non-professional activity) [cite:188][cite:194]. Deployers must use systems per instructions, assign human oversight to competent persons, monitor operations, manage input data quality, keep logs for at least six months, and inform affected individuals [cite:188][cite:196].

Importers and Distributors

Importers must verify that providers have completed conformity assessments, affixed CE marking, and drawn up technical documentation before placing systems on the EU market [cite:145]. Distributors must verify CE marking and documentation and ensure appropriate storage/transport conditions [cite:145].

Third Parties Becoming Providers

Under Article 25, any distributor, importer, deployer, or other third party that places its name or trademark on a high-risk AI system, makes a substantial modification to it, or modifies its intended purpose such that it becomes high-risk assumes provider obligations [cite:140].

AI Systems Classified as High-Risk

Two classification routes (Article 6) [cite:141][cite:138]:

Route 1 — Annex I (Product Safety): AI systems that are safety components of products — or are themselves products — covered by EU harmonisation legislation in Annex I (e.g., Medical Device Regulation, Machinery Regulation, Toy Safety Directive, Radio Equipment Directive, In Vitro Diagnostic Regulation, Civil Aviation Regulation) AND require third-party conformity assessment under those laws [cite:141][cite:136].

Route 2 — Annex III (Standalone High-Risk): AI systems operating in eight sensitive domains [cite:147][cite:136]:

  1. Biometrics — Remote biometric identification, emotion recognition, biometric categorisation
  2. Critical infrastructure — Safety components of road traffic, water/gas/heating/electricity supply
  3. Education and vocational training — Admissions, assessment of learning outcomes, monitoring of prohibited behaviour during exams
  4. Employment, workers' management, and access to self-employment — Recruitment, screening, promotion/termination decisions, task allocation, performance monitoring
  5. Access to essential private and public services and benefits — Creditworthiness assessment, risk assessment for life/health insurance, emergency services dispatching, benefits eligibility assessment
  6. Law enforcement — Individual risk assessments, polygraphs, evidence reliability assessment, crime prediction
  7. Migration, asylum, and border control — Polygraphs, risk assessments, identity document authenticity, visa/residence/asylum application assessment
  8. Administration of justice and democratic processes — Researching/interpreting facts and law, dispute resolution

Article 6(3) Exception

An Annex III AI system is not high-risk if it does not pose a significant risk of harm and meets any of four conditions: performs a narrow procedural task, improves a previously completed human activity, detects decision-making patterns without replacing human assessment, or performs a preparatory task [cite:138][cite:150]. This exception never applies if the system profiles natural persons [cite:141][cite:150].
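
The two routes and the Article 6(3) derogation reduce to a short decision procedure. The sketch below is a minimal, non-authoritative illustration in Python; the profile fields (annex_i_safety_component, annex_iii_domain, and so on) are invented intake attributes, not terms from the Act, and a real classification decision needs documented legal analysis.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical intake record for an AI system under assessment.
@dataclass
class AISystemProfile:
    annex_i_safety_component: bool           # safety component of (or itself) an Annex I product
    third_party_assessment_required: bool    # Annex I law requires third-party conformity assessment
    annex_iii_domain: Optional[str]          # e.g. "employment", "biometrics", or None
    profiles_natural_persons: bool           # profiling removes the Article 6(3) derogation
    narrow_procedural_task: bool             # Art. 6(3)(a)
    improves_completed_human_activity: bool  # Art. 6(3)(b)
    detects_patterns_without_replacing_human: bool  # Art. 6(3)(c)
    preparatory_task_only: bool              # Art. 6(3)(d)

def is_high_risk(p: AISystemProfile) -> bool:
    """Rough mirror of the Article 6 routes; not a substitute for legal assessment."""
    # Route 1: Annex I product-safety route.
    if p.annex_i_safety_component and p.third_party_assessment_required:
        return True
    # Route 2: Annex III standalone route.
    if p.annex_iii_domain is not None:
        # The derogation never applies where the system profiles natural persons.
        if p.profiles_natural_persons:
            return True
        derogation = (
            p.narrow_procedural_task
            or p.improves_completed_human_activity
            or p.detects_patterns_without_replacing_human
            or p.preparatory_task_only
        )
        return not derogation
    return False

# Example: a CV-screening tool used in recruitment (Annex III employment domain).
cv_screener = AISystemProfile(
    annex_i_safety_component=False,
    third_party_assessment_required=False,
    annex_iii_domain="employment",
    profiles_natural_persons=True,
    narrow_procedural_task=False,
    improves_completed_human_activity=False,
    detects_patterns_without_replacing_human=False,
    preparatory_task_only=False,
)
print(is_high_risk(cv_screener))  # True
```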


EU AI Act (High Risk) - What It Requires - Risk Management System (Article 9)

Providers must establish, implement, document, and maintain a risk management system (RMS) that operates as a continuous, iterative process throughout the entire lifecycle of the high-risk AI system [cite:174][cite:168].

Required Steps

The RMS must comprise [cite:174][cite:168]:

  • (a) Identification and analysis of known and reasonably foreseeable risks to health, safety, or fundamental rights under intended use
  • (b) Estimation and evaluation of risks under intended use and reasonably foreseeable misuse
  • (c) Evaluation of additional risks from post-market monitoring data (Article 72)
  • (d) Adoption of appropriate, targeted risk management measures

Risk Management Measures

Measures must follow a hierarchy of controls (Article 9(5)) [cite:174][cite:168]:

  1. Eliminate or reduce risks through adequate design and development (as far as technically feasible)
  2. Where elimination is not possible, implement adequate mitigation and control measures
  3. Provide information per Article 13 and, where appropriate, training to deployers

Residual risk for each hazard and overall must be judged acceptable [cite:174][cite:133]. The RMS must give consideration to risks to persons under 18 and other vulnerable groups [cite:168]. Providers subject to risk management under other EU law may integrate the AI Act RMS into existing frameworks [cite:168].
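
To make the iterative character of Article 9 concrete, the following sketch models a single risk entry and the Article 9(5) hierarchy of controls as ordered mitigation types. The class names and the acceptability flag are illustrative assumptions rather than structures defined by the Act.

```python
from dataclasses import dataclass, field
from enum import Enum

class Control(Enum):
    # Hierarchy of controls in the order Article 9(5) prefers them.
    ELIMINATE_BY_DESIGN = 1    # eliminate or reduce risks through design and development
    MITIGATE_AND_CONTROL = 2   # mitigation and control measures for non-eliminable risks
    INFORM_AND_TRAIN = 3       # information under Article 13 and, where appropriate, deployer training

@dataclass
class RiskEntry:
    description: str                     # known or reasonably foreseeable risk
    affects: str                         # health, safety, or fundamental rights
    foreseeable_misuse: bool             # identified under intended use or foreseeable misuse
    controls: list[Control] = field(default_factory=list)
    residual_risk_acceptable: bool = False

@dataclass
class RiskManagementSystem:
    risks: list[RiskEntry] = field(default_factory=list)

    def ready_for_market(self) -> bool:
        """Every identified risk needs at least one control and an acceptable residual risk."""
        return all(r.controls and r.residual_risk_acceptable for r in self.risks)

rms = RiskManagementSystem(risks=[
    RiskEntry(
        description="Discriminatory scoring of applicants from under-represented groups",
        affects="fundamental rights",
        foreseeable_misuse=False,
        controls=[Control.ELIMINATE_BY_DESIGN, Control.INFORM_AND_TRAIN],
        residual_risk_acceptable=True,
    ),
])
print(rms.ready_for_market())  # True once all entries are controlled and judged acceptable
```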


EU AI Act (High Risk) - What It Requires - Data and Data Governance (Article 10)

High-risk AI systems using techniques involving training with data must be developed on training, validation, and testing datasets that meet quality criteria [cite:171][cite:121].

Data Governance Practices

Training, validation, and testing datasets must be subject to governance and management practices appropriate for the intended purpose, including [cite:171]:

  • Relevant design choices and data collection processes
  • Data preparation operations (annotation, labelling, cleaning, updating, enrichment, aggregation)
  • Formulation of assumptions regarding the information the data is supposed to measure and represent
  • Assessment of availability, quantity, and suitability of data
  • Examination for possible biases likely to affect health, safety, or fundamental rights, especially for protected groups
  • Identification of relevant data gaps or shortcomings and measures to address them

Bias and Representativeness

Datasets must be relevant, sufficiently representative, and — to the best extent possible — free of errors and complete in view of the intended purpose [cite:139][cite:142]. The Act explicitly permits processing of special categories of personal data (Article 10(5)) to the extent strictly necessary for bias detection and correction, subject to appropriate safeguards under GDPR [cite:134][cite:125].
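
As an illustration of how the Article 10 examination for representativeness and bias might be partially automated, the sketch below computes group shares and outcome-rate gaps over a toy training set. The thresholds and group labels are invented for the example; in practice they would be justified and recorded in the technical documentation.

```python
from collections import Counter

# Hypothetical training records: (protected_group, label)
records = [("group_a", 1), ("group_a", 0), ("group_b", 1), ("group_b", 1),
           ("group_b", 0), ("group_a", 0), ("group_c", 1), ("group_c", 0)]

def group_shares(rows):
    counts = Counter(group for group, _ in rows)
    total = len(rows)
    return {g: c / total for g, c in counts.items()}

def positive_rates(rows):
    by_group: dict[str, list[int]] = {}
    for group, label in rows:
        by_group.setdefault(group, []).append(label)
    return {g: sum(labels) / len(labels) for g, labels in by_group.items()}

# Representativeness: flag groups that fall below an (illustrative) minimum share.
MIN_SHARE = 0.10
under_represented = [g for g, s in group_shares(records).items() if s < MIN_SHARE]

# Bias examination: flag large gaps in base rates between groups (illustrative threshold).
rates = positive_rates(records)
MAX_GAP = 0.25
gap = max(rates.values()) - min(rates.values())

print("under-represented groups:", under_represented)
print("positive-rate gap:", round(gap, 2), "flagged" if gap > MAX_GAP else "ok")
```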


EU AI Act (High Risk) - What It Requires - Technical Documentation, Logging, and Transparency (Articles 11–13)

Technical Documentation (Article 11)

Providers must draw up technical documentation before the system is placed on the market or put into service, and keep it up to date [cite:139][cite:165]. Documentation must demonstrate compliance with all Section 2 requirements and provide national competent authorities and notified bodies with sufficient information to assess compliance. Content requirements are specified in Annex IV [cite:190].

Automatic Logging (Article 12)

High-risk AI systems must be designed and developed to enable automatic recording of events (logs) relevant to [cite:139][cite:165]:

  • Identifying situations that may result in the system presenting a risk (within the meaning of Article 79(1)) or in a substantial modification
  • Facilitating post-market monitoring
  • Monitoring the operation of the high-risk AI system

Deployers must keep logs generated by the high-risk AI system for a period of at least six months, unless otherwise provided by applicable Union or national law [cite:188].
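
One plausible way for a deployer to structure automatically generated logs with the six-month minimum retention in mind is sketched below; the event fields and retention helper are assumptions for illustration, not a format prescribed by the Act.

```python
import json
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # at least six months, unless other Union/national law applies

def log_event(stream: list[dict], system_id: str, event_type: str, detail: str) -> None:
    """Append a timestamped event record (e.g. inference, override, anomaly)."""
    stream.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event_type,
        "detail": detail,
    })

def purge_expired(stream: list[dict], now: datetime) -> list[dict]:
    """Keep only records still inside the retention window."""
    cutoff = now - RETENTION
    return [e for e in stream if datetime.fromisoformat(e["ts"]) >= cutoff]

logs: list[dict] = []
log_event(logs, "hr-screening-v2", "inference", "candidate ranked; human review pending")
log_event(logs, "hr-screening-v2", "override", "reviewer rejected automated ranking")
logs = purge_expired(logs, datetime.now(timezone.utc))
print(json.dumps(logs, indent=2))
```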

Transparency and Information to Deployers (Article 13)

High-risk AI systems must be designed and developed to ensure sufficient transparency for deployers to interpret and use the system's output appropriately [cite:139][cite:142]. Providers must supply instructions for use that include [cite:165]:

  • Identity and contact details of the provider
  • System characteristics, capabilities, and limitations of performance
  • Intended purpose and foreseeable misuses
  • Changes and predetermined modifications
  • Human oversight measures and implementation details
  • Computational and hardware resources, expected lifetime, and maintenance measures

Deployers must inform natural persons that they are subject to the use of a high-risk AI system that makes or assists decisions concerning them (Article 26(11)) [cite:188][cite:191].


EU AI Act (High Risk) - What It Requires - Human Oversight (Article 14)

High-risk AI systems must be designed to allow effective human oversight during their period of use [cite:139][cite:142]. The objective is to prevent or minimise risks to health, safety, or fundamental rights that may emerge.

Design Requirements for Providers

Human oversight measures must be identified by the provider and built into the system, or identified as appropriate for deployer implementation [cite:165]. The system must enable the person performing oversight to [cite:139]:

  • Fully understand the system's capacities and limitations and monitor its operation
  • Be aware of and guard against automation bias, especially for systems providing recommendations
  • Correctly interpret the system's output, taking into account the specific tools and methods of interpretation
  • Decide not to use the system or to override, reverse, or disregard its output
  • Intervene in the operation of the system or interrupt it via a "stop" button or similar procedure

Deployer Responsibilities

Deployers must assign human oversight to natural persons who have the necessary competence, training, authority, and support (Article 26(2)) [cite:188][cite:196]. In specific high-risk use cases (e.g., biometric identification by law enforcement), special restrictions apply and at least two qualified persons must verify results independently before action is taken [cite:139].
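
As an illustration of the Article 14 capabilities (interpret the output, disregard or override it, interrupt the system), the sketch below routes every automated output through a minimal human-review interface. The class and method names are hypothetical; the Act prescribes the capabilities, not an API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    model_output: float          # e.g. a score or probability
    explanation: str             # interpretation aid for the overseer (Articles 13 and 14)
    final: float | None = None   # set only after human review
    overridden: bool = False

class OversightConsole:
    """Minimal sketch: every high-risk output passes through a human before taking effect."""

    def __init__(self, model: Callable[[dict], float]):
        self.model = model
        self.stopped = False     # 'stop' capability state (Article 14)

    def propose(self, features: dict) -> Decision:
        if self.stopped:
            raise RuntimeError("system interrupted by overseer")
        score = self.model(features)
        return Decision(model_output=score,
                        explanation=f"score {score:.2f} from features {sorted(features)}")

    def confirm(self, d: Decision) -> Decision:       # accept the output as-is
        d.final = d.model_output
        return d

    def override(self, d: Decision, value: float) -> Decision:  # disregard or replace the output
        d.final, d.overridden = value, True
        return d

    def stop(self) -> None:                           # interrupt further operation
        self.stopped = True

console = OversightConsole(model=lambda f: 0.5 * f.get("tenure", 0) + 0.1)
d = console.propose({"tenure": 1.2})
d = console.override(d, value=0.0)   # overseer disregards the automated score
print(d)
```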


EU AI Act (High Risk) - What It Requires - Accuracy, Robustness, and Cybersecurity (Article 15)

High-risk AI systems must be designed and developed to achieve appropriate levels of accuracy, robustness, and cybersecurity, and to perform consistently throughout their lifecycle [cite:142][cite:165].

Accuracy

  • Accuracy levels and metrics must be declared in the instructions for use
  • Systems must be designed to reduce or prevent errors, including errors in output feedback loops (continuous learning)

Robustness

  • Systems must be resilient to errors, faults, and inconsistencies within the system or its operating environment
  • Technical redundancy solutions (e.g., backup or fail-safe plans) may be required
  • Systems that continue to learn after deployment must be developed to avoid biased outputs due to feedback loops

Cybersecurity

  • Systems must be resilient against attempts by unauthorised third parties to exploit vulnerabilities to alter use, behaviour, performance, or compromise security
  • Technical solutions must address, as appropriate: data poisoning, adversarial examples/perturbations, model inversion/extraction, confidentiality attacks, and model flaws
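
Because Article 15 requires accuracy levels and metrics to be declared in the instructions for use, a provider might gate releases on the declared figures, roughly as sketched below; the metric names and thresholds are illustrative assumptions.

```python
# Declared accuracy metrics as they might appear in the instructions for use (illustrative).
DECLARED = {"accuracy": 0.90, "false_positive_rate_max": 0.05}

def release_check(measured: dict) -> list[str]:
    """Compare measured test performance against the declared levels; return any failures."""
    failures = []
    if measured["accuracy"] < DECLARED["accuracy"]:
        failures.append(f"accuracy {measured['accuracy']:.3f} below declared {DECLARED['accuracy']:.2f}")
    if measured["false_positive_rate"] > DECLARED["false_positive_rate_max"]:
        failures.append(f"FPR {measured['false_positive_rate']:.3f} above declared maximum")
    return failures

measured = {"accuracy": 0.913, "false_positive_rate": 0.061}
problems = release_check(measured)
print("release blocked:" if problems else "release ok", problems)
```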

EU AI Act (High Risk) - What It Requires - Quality Management System and Conformity Assessment (Articles 17, 43)

Quality Management System (Article 17)

Providers must implement a documented QMS ensuring compliance with the Regulation, covering at minimum [cite:189][cite:197]:

  • (a) Regulatory compliance strategy (including conformity assessment and modification management)
  • (b) Design, design control, and design verification procedures
  • (c) Development quality control and quality assurance
  • (d) Testing and validation procedures (before, during, and after development)
  • (e) Technical specifications and standards applied
  • (f) Data management systems (acquisition, collection, labelling, storage, aggregation)
  • (g) Risk management system (Article 9)
  • (h) Post-market monitoring system (Article 72)
  • (i) Serious incident reporting procedures (Article 73)
  • (j) Communication procedures with competent authorities, notified bodies, and stakeholders
  • (k) Record-keeping systems
  • (l) Resource management (including security of supply)
  • (m) Accountability framework defining management and staff responsibilities

Implementation must be proportionate to provider size, but the degree of rigour must ensure compliance [cite:189]. Financial institutions subject to EU financial services law may integrate most QMS aspects into existing internal governance arrangements (Article 17(4)) [cite:189].
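
One simple way to track Article 17 coverage internally is a mapping from each required QMS aspect (a)–(m) to the artefacts evidencing it, as in the sketch below; the artefact names are placeholders.

```python
# Article 17(1) aspects mapped to (placeholder) evidence artefacts.
QMS_ASPECTS = {
    "a_regulatory_strategy": ["compliance_strategy.md"],
    "b_design_control": ["design_review_minutes/"],
    "c_development_qa": ["ci_quality_gates.yaml"],
    "d_test_validation": ["test_plan.md", "validation_report.pdf"],
    "e_technical_specifications": ["applied_standards.csv"],
    "f_data_management": ["data_inventory.xlsx"],
    "g_risk_management": ["rms_register.xlsx"],
    "h_post_market_monitoring": ["pmm_plan.md"],
    "i_incident_reporting": ["incident_procedure.md"],
    "j_authority_communication": [],          # gap: no procedure documented yet
    "k_record_keeping": ["retention_policy.md"],
    "l_resource_management": ["supply_security_note.md"],
    "m_accountability_framework": ["raci_matrix.xlsx"],
}

gaps = [aspect for aspect, evidence in QMS_ASPECTS.items() if not evidence]
print("QMS aspects without evidence:", gaps or "none")
```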

Conformity Assessment (Article 43)

Before placing on the market or putting into service, high-risk AI systems must undergo conformity assessment [cite:193][cite:190]:

Annex III, Point 1 (Biometrics): Provider chooses between internal control (Annex VI) — if harmonised standards or common specifications are applied — or third-party assessment involving a notified body (Annex VII). Where harmonised standards are not applied or are incomplete, Annex VII (notified body) is mandatory [cite:187][cite:190].

Annex III, Points 2–8: Internal control (Annex VI) procedure applies — no notified body involvement required [cite:187][cite:195].

Annex I (Product Safety): Providers follow the conformity assessment procedure under the relevant EU harmonisation legislation, with AI Act Section 2 requirements integrated into that assessment [cite:193][cite:190].

Substantial modifications trigger a new conformity assessment, except for pre-determined and documented changes [cite:190][cite:195]. Upon successful assessment, providers draw up an EU Declaration of Conformity (Article 47), affix CE marking (Article 48), and register the system in the EU database (Article 49) [cite:145].
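
The Article 43 routing described above can be expressed as a small decision function, sketched below with invented parameter names; whether harmonised standards are applied in full is itself a substantive judgement that the code merely records.

```python
def conformity_route(annex_i_product: bool,
                     annex_iii_point: int | None,
                     harmonised_standards_applied: bool) -> str:
    """Return which conformity assessment procedure applies (illustrative, per Article 43)."""
    if annex_i_product:
        # Follow the sectoral procedure under the relevant Annex I legislation,
        # with the AI Act Section 2 requirements folded into that assessment.
        return "sectoral procedure under Annex I legislation"
    if annex_iii_point == 1:
        # Biometrics: internal control is available only if harmonised standards
        # or common specifications are applied.
        if harmonised_standards_applied:
            return "provider's choice: internal control (Annex VI) or notified body (Annex VII)"
        return "notified body assessment (Annex VII) mandatory"
    if annex_iii_point in range(2, 9):
        return "internal control (Annex VI)"
    return "not high-risk under the Article 6 routes"

print(conformity_route(annex_i_product=False, annex_iii_point=4,
                       harmonised_standards_applied=True))
```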


EU AI Act (High Risk) - Governance Implications

The EU AI Act's high-risk provisions create extensive governance obligations across organisational, technical, and operational dimensions [cite:165][cite:118].

Organisational Governance

  • Accountability Framework: Article 17(1)(m) mandates a formal accountability framework setting out management and staff responsibilities across all QMS aspects [cite:189]. This embeds AI governance into corporate structures, not as a voluntary initiative but as a regulatory compliance obligation.
  • AI Literacy: Article 4 requires that providers and deployers ensure staff and other persons dealing with AI systems have a sufficient level of AI literacy, appropriate to their context, technical knowledge, and experience [cite:196][cite:170].
  • National Competent Authorities: Member States must designate notifying authorities and market surveillance authorities (by August 2025), creating direct regulatory touchpoints for each organisation deploying or providing high-risk AI [cite:167][cite:176].

Ontic BOM Mapping

  • model — The AI model is the core regulated artefact. Articles 9–15 impose lifecycle requirements (risk management, accuracy, robustness, cybersecurity) that map directly to model governance: development controls, validation/testing, version control, performance monitoring, and drift detection. Conformity assessment (Article 43) is model-centric [cite:165][cite:190].
  • oracle — Training, validation, and testing datasets (Article 10) are oracles: authoritative data sources whose governance (quality, representativeness, bias examination, lineage) directly determines system compliance. The Act's data governance requirements elevate oracle management to a regulatory obligation [cite:171][cite:121].
  • ontology — Classification and labelling taxonomies (e.g., Annex III domain categories, risk tier definitions, Article 6 classification criteria) constitute the Act's risk ontology. Internally, organisations must maintain ontologies mapping use cases to risk classes, data categories to GDPR bases, and AI outputs to fundamental rights impacts [cite:138][cite:141].
  • system_prompt — For LLM-based or generative AI systems, prompt configuration and system instructions influence output behaviour, accuracy, and safety. Where these affect a high-risk system's performance or decision-making, they fall within the scope of design controls (Article 17(b)), testing/validation (Article 17(d)), and transparency requirements (Article 13) [cite:165][cite:189].
  • gate — Conformity assessment (Article 43) is a pre-market gate: no high-risk AI system may be placed on the EU market without passing it. Post-market gates include incident reporting (Article 73), corrective action (Article 20), and the obligation for deployers to suspend systems presenting risks (Article 26(5)) [cite:190][cite:188].
  • security — Article 15 explicitly requires cybersecurity resilience, including against data poisoning, adversarial attacks, model extraction, and confidentiality attacks. These requirements connect directly to security components of the BOM [cite:142][cite:165].
  • signed_client — Traceability, logging (Article 12), and the obligation to identify providers on the system or documentation (Article 16(b)) support non-repudiation. Deployers must inform individuals subject to high-risk AI decisions (Article 26(11)), linking system identity to client-facing transparency [cite:145][cite:188].

Fundamental Rights Impact Assessment

Deployers that are bodies governed by public law, private entities providing public services, and deployers of certain Annex III systems (creditworthiness, insurance risk, law enforcement, migration, justice) must carry out a fundamental rights impact assessment (FRIA) before putting a high-risk AI system into use (Article 27) [cite:139][cite:127].

E/A/D Axis Integration

Ethical (E)
  • Articles: Art. 10 (data governance — bias examination, representativeness), Art. 14 (human oversight), Art. 27 (FRIA), Art. 4 (AI literacy)
  • Hallmarks: Fundamental rights protection is the Act's primary objective; bias examination of training data is a legal requirement; human oversight ensures meaningful human control; FRIA mandates proactive assessment of impacts on individuals
  • Evidence: FRIA documentation, bias examination reports, human oversight procedures, AI literacy training records, data governance documentation [cite:171][cite:139]

Accountable (A)
  • Articles: Art. 9 (risk management system), Art. 11 (technical documentation), Art. 12 (logging), Art. 13 (transparency), Art. 17 (quality management system)
  • Hallmarks: Comprehensive documentation obligations create traceable accountability; the risk management system must be established, maintained, and documented throughout the AI lifecycle; the QMS covers all aspects of the provider's obligations
  • Evidence: Technical documentation files, risk management records, QMS documentation, transparency disclosures, logging system outputs [cite:165][cite:189]

Defensible (D)
  • Articles: Art. 43 (conformity assessment), Art. 73 (incident reporting), Art. 20 (corrective action), Art. 71 (EU database registration), Art. 72 (post-market monitoring)
  • Hallmarks: Conformity assessment is a mandatory pre-market gate providing independent verification; incident reporting creates a regulatory audit trail; EU database registration enables public scrutiny; post-market monitoring demonstrates ongoing compliance
  • Evidence: Conformity certificates, EU database registration records, incident reports, post-market monitoring plans and reports, corrective action documentation [cite:190][cite:188]

EU AI Act (High Risk) - Enforcement Penalties

Enforcement is decentralised: Member States lay down national penalty rules, while the European AI Office enforces obligations for general-purpose AI models [cite:149][cite:143].

Administrative Fine Structure (Article 99)

  • Prohibited AI practices (Article 5): up to €35,000,000 or 7% of total worldwide annual turnover, whichever is higher [cite:149][cite:143]
  • Non-compliance with obligations of operators or notified bodies under Article 99(4) (including provider, importer, distributor, and deployer obligations for high-risk AI systems): up to €15,000,000 or 3% of total worldwide annual turnover, whichever is higher [cite:149][cite:137]
  • Supply of incorrect, incomplete, or misleading information to notified bodies or competent authorities: up to €7,500,000 or 1% of total worldwide annual turnover, whichever is higher [cite:149][cite:143]

For SMEs (including start-ups), fines are subject to the lower of the two thresholds (fixed amount or turnover percentage) [cite:149]. Penalties must be effective, proportionate, and dissuasive, taking into account the nature/gravity/duration of the infringement, the number of persons affected, prior violations, the size and market share of the operator, and the degree of cooperation [cite:149].
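
Each tier combines a fixed ceiling with a turnover percentage: the higher of the two applies to most undertakings, while Article 99(6) gives SMEs and start-ups the lower. A short arithmetic sketch with illustrative turnover figures:

```python
def fine_ceiling(fixed_cap: float, turnover_pct: float, turnover: float, sme: bool) -> float:
    """Article 99 ceiling: higher of the two amounts, or the lower of the two for SMEs."""
    pct_amount = turnover_pct * turnover
    return min(fixed_cap, pct_amount) if sme else max(fixed_cap, pct_amount)

# Article 99 tiers: (fixed cap in EUR, share of worldwide annual turnover)
TIERS = {
    "prohibited practices (Art. 5)": (35_000_000, 0.07),
    "high-risk obligations": (15_000_000, 0.03),
    "misleading information": (7_500_000, 0.01),
}

for label, (cap, pct) in TIERS.items():
    big = fine_ceiling(cap, pct, turnover=2_000_000_000, sme=False)   # large undertaking
    small = fine_ceiling(cap, pct, turnover=10_000_000, sme=True)     # SME / start-up
    print(f"{label}: up to €{big:,.0f} (large undertaking), €{small:,.0f} (SME)")
```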

Aggravating and Mitigating Factors

Article 99(7) lists factors competent authorities must consider [cite:149]:

  • Nature, gravity, and duration of the infringement and its consequences
  • Number of affected persons and level of damage
  • Whether fines have already been imposed for the same infringement
  • Size and market share of the infringing operator
  • Level of cooperation with authorities
  • Previous infringements
  • Degree of responsibility and corrective measures taken

Enforcement Against EU Institutions

EU institutions, bodies, offices, and agencies are subject to the Act and fall under the supervisory competence of the European Data Protection Supervisor (EDPS), who can impose fines under a separate two-tier structure (Article 100): up to €1,500,000 for prohibited AI practices and up to €750,000 for non-compliance with other requirements [cite:137].


EU AI Act (High Risk) - Intersection With Other Frameworks

The EU AI Act operates within a dense regulatory ecosystem and was designed to complement — not replace — existing EU law [cite:125][cite:172].

GDPR (Regulation (EU) 2016/679)

GDPR and the AI Act apply concurrently when AI systems process personal data [cite:125]. Key intersections:

  • Data governance (Article 10) must comply with GDPR's lawful basis, purpose limitation, and data minimisation principles [cite:125]
  • Article 10(5) creates a specific legal basis for processing special category data for bias detection, subject to GDPR Article 9 safeguards [cite:134]
  • Rights under GDPR (access, erasure, rectification, objection to automated decision-making under Article 22) apply alongside AI Act transparency obligations [cite:125]
  • Data protection impact assessments (DPIAs) under GDPR Article 35 overlap with AI Act fundamental rights impact assessments (Article 27) [cite:125]
  • National data protection authorities may also serve as AI Act competent authorities [cite:176]

NIS 2 Directive (Directive (EU) 2022/2555)

Essential and important entities under NIS 2 that develop or deploy AI systems must comply with both cybersecurity requirements and AI-specific risk management frameworks [cite:166][cite:169]. Overlaps include:

  • Incident reporting: NIS 2 requires 24-hour initial notification; AI Act requires serious incident reporting under Article 73 (within 15 days) [cite:169]
  • Supply chain risk management: both frameworks mandate third-party risk assessment [cite:166]
  • AI systems used in critical infrastructure network security, intrusion detection, and fraud prevention must comply with both regimes [cite:166]

EU Product Safety Legislation (Annex I)

High-risk AI systems that are safety components of regulated products (medical devices, machinery, toys, vehicles, lifts, etc.) must satisfy both the AI Act Section 2 requirements and the requirements of the applicable product safety directive/regulation [cite:141][cite:193]. Conformity assessment under the product-specific legislation incorporates AI Act requirements [cite:193][cite:190].

DORA (Regulation (EU) 2022/2554)

The Digital Operational Resilience Act applies to financial entities and requires ICT risk management, incident reporting, and resilience testing. AI systems used in financial services must comply with both DORA's ICT risk framework and the AI Act's high-risk requirements [cite:175]. Article 17(4) permits financial institutions to satisfy most QMS obligations through existing internal governance under financial services law [cite:189].

Cyber Resilience Act (CRA) (Regulation (EU) 2024/2847)

The CRA establishes cybersecurity requirements for products with digital elements. Where an AI system is also a product with digital elements, CRA cybersecurity requirements may apply alongside AI Act Article 15 [cite:175].

  • GDPR (personal data processing): data governance, bias detection, automated decisions, DPIAs [cite:125]
  • NIS 2 (cybersecurity for essential/important entities): incident reporting, supply chain risk, critical infrastructure AI [cite:166]
  • Product safety legislation (MDR, MR, etc.; regulated products): joint conformity assessment, Annex I classification route [cite:193]
  • DORA (financial sector ICT resilience): ICT risk management, QMS integration for financial institutions [cite:189]
  • CRA (digital products cybersecurity): cybersecurity requirements overlap with Article 15 [cite:175]
  • GPAI rules (AI Act Chapter V; general-purpose AI models): GPAI models integrated into high-risk systems trigger both Chapter V and Chapter III obligations [cite:139]

EU AI Act (High Risk) - Recent Updates

Implementation Timeline (Key Dates)

  • 1 August 2024: AI Act enters into force [cite:167]
  • 2 February 2025: Prohibitions on unacceptable-risk AI practices apply; AI literacy obligations begin [cite:167][cite:170]
  • 2 May 2025: Codes of practice for GPAI models to be ready [cite:167]
  • 2 August 2025: GPAI model obligations apply; governance, confidentiality, and penalty provisions take effect; Member States designate competent authorities [cite:167][cite:176]
  • 2 February 2026: Commission deadline for guidelines on practical implementation of Article 6 (high-risk classification) [cite:167]
  • 2 August 2026: Core high-risk AI obligations apply (Articles 9–49): risk management, data governance, technical documentation, conformity assessment, CE marking, registration in the EU database [cite:136][cite:167]
  • 2 August 2027: Remaining provisions fully applicable; Annex I product-embedded high-risk AI must comply; legacy GPAI models (placed on the market before August 2025) must comply [cite:167][cite:136]
  • 2 August 2030: Public-authority deployers (and their providers) of high-risk AI systems must have taken the necessary compliance steps [cite:167]
  • 31 December 2030: AI components of large-scale IT systems listed in Annex X (placed on the market before August 2027) must be brought into compliance [cite:167]

Harmonised Standards Development

The European Commission has issued standardisation requests to CEN/CENELEC for harmonised standards supporting the AI Act. Draft standards under development include prEN 18285 (conformity assessment), prEN 18286 (QMS), and standards mapping to Articles 9–15 [cite:190][cite:197]. Application of harmonised standards creates a presumption of conformity [cite:190].

Member State Implementation

As of early 2026, only a small number of Member States have fully designated both notifying and market surveillance authorities; the majority have pending legislative proposals or have appointed only one competent authority [cite:176]. At least one AI regulatory sandbox per Member State must be operational by August 2026 [cite:167].

Commission Guidance and Delegated Acts

The Commission has published draft guidelines on GPAI model obligations (July 2025) and is expected to issue Article 6 implementation guidelines by February 2026 [cite:167][cite:176]. The Commission has power to adopt delegated acts to amend Annex III domain categories and Article 6(3) exception criteria [cite:141][cite:144]. Annual review of prohibitions (Article 5) is ongoing [cite:167].

No Transition Period Delays

The European Commission has confirmed there are no planned transition periods or postponements to the implementation schedule [cite:179]. Enforcement of penalty provisions has been in effect since August 2025 [cite:167].