90 days to EU AI Act enforcement: what the corpus says enterprises still haven't done
Ninety-one days to 2 August 2026. The publication has tracked eleven enterprise claims against the EU AI Act enforcement window. Four operational-evidence claims are at material risk of moving to Partial in Q3. The governance-process work is mostly done; the operational-evidence work mostly is not. Articles 9, 12, and 26 require the second.
Holding · reviewed 3 May 2026 · next review +58d. As of Sunday 3 May 2026, the EU AI Act's main enforcement window opens in 91 days. The deadline date is 2 August 2026.
What activates that day is specific. High-risk AI systems listed in Annex III of Regulation (EU) 2024/1689 (biometrics; critical infrastructure; education; employment and worker management; essential public and private services including credit scoring and insurance risk assessment; law enforcement; migration and border control; administration of justice and democratic processes) must be fully compliant with Articles 8 to 27. The Article 50 transparency obligations (AI-interaction disclosure, synthetic-content labelling, deepfake identification) become enforceable. The penalty structure under Article 99 activates: up to €35 million or 7 percent of global turnover for prohibited practices, up to €15 million or 3 percent for high-risk non-compliance, up to €7.5 million or 1 percent for false or misleading information to authorities. Conformity assessment, post-market monitoring, and the market surveillance framework all activate the same day.
What does not activate on 2 August 2026 is also specific. High-risk AI systems embedded in regulated products covered by the Union harmonisation legislation listed in Annex I (machinery, medical devices, automotive, and other products under existing EU harmonisation legislation) get an extra year, until 2 August 2027, under Article 113(c). Already-on-market grandfathering provisions for AI systems placed on the EU market before 2 August 2026 may apply under Article 111. The prohibited practices and AI literacy obligations have been in force since 2 February 2025; the general-purpose AI model obligations and the governance infrastructure since 2 August 2025.
This publication has tracked eleven enterprise claims against the 2 August 2026 date. As of today, the holding rate on the governance-process claims is high. The holding rate on the operational-evidence claims is dropping. That gap is what this piece is about, and it is the gap the 88 percent struggling-deployer cohort in the Stanford 2026 dataset actually describes. The structured corpus-wide audit ran most recently in the Q2 2026 review bulletin; this piece is the deadline-anchored slice that complements it.
The new claim added today is AM-127. It forecasts that at least three of the four operational-evidence claims (data residency, audit-evidence assembly, AI-BOM procurement, works council workflow) will move from Holding to Partial or Not holding by 1 October 2026, while at least one of the two governance-process claims (Head of AI Governance role, centralised-versus-federated design) will remain Holding. The verdict is auditable on the Holding-up ledger on that date.
The four operational-evidence claims most at risk
Data residency for AI context windows: AM-108
The claim. AM-108 holds that agentic-AI data residency is not cleanly inherited from existing GDPR cross-border transfer practice. Agent context windows, retrieval indexes, and reasoning traces all create new categories of personal-data processing that have to be located, documented, and (for high-risk Annex III deployments) data-resident inside the EEA before EU AI Act Article 16 enforcement opens on 2 August 2026.
What the corpus predicted. The original data residency piece forecast that enterprises would discover the gap during Q1 2026 procurement reviews and that single-region EEA-resident topology would be the answer for high-risk systems, with hub-and-spoke remaining defensible for general-purpose deployments under documented Chapter V transfer mechanisms.
What is actually happening. The procurement-side discovery happened on schedule for the enterprises that had EU AI Act counsel reading vendor contracts in Q1. It did not happen for the enterprises that delegated EU residency to the cyber team without a separate Article 16 review. The four enterprise-platform vendors all have EU residency offerings of varying scope as of May 2026. Anthropic documents EU data residency for Claude API and enterprise deployments. OpenAI’s ChatGPT Enterprise and the API offer EU data residency, with the question for buyers being whether context retention falls within the residency boundary or only the live request payload. Google Vertex AI Gemini documents EU data residency commitments at the regional-endpoint level. Microsoft Azure OpenAI Service ships EU Data Boundary for the data plane; reasoning-trace handling is the surface buyer-side counsel should verify against the deployer-side Article 16 documentation rather than infer.
What changes between today and 2 August. The vendor-side EU residency offerings are largely there. The deployer-side documentation that Article 16 actually requires (catalogue of personal-data flows, retention rules per data category, named data-protection officer with sign-off, retrieval index residency, reasoning-trace handling) is what is fixable in 91 days. The architectural changes (moving a deployment from hub-and-spoke to single-region EEA-resident) are not fixable in 91 days for a deployment that is already in production at scale; for those, the realistic decision is documented derogation under Article 6(3) or scope reduction.
The national competent authorities that will read the documentation are now identifiable in most member states. The French CNIL has published 2025-2026 guidance on AI personal-data processing under both GDPR and the AI Act. The Dutch Autoriteit Persoonsgegevens published its own AI supervisory framework in early 2026 and is the named market-surveillance authority for the Netherlands. The German BfDI coordinates with the federal-state data-protection commissioners on AI Act enforcement; the actual market-surveillance authority varies by Bundesland. The deployer documentation that satisfies one of the three should be readable by the others without rewriting; that consistency is what the central function in a hybrid governance model owns.
What to ask your vendor in the next 30 days.
- Where is the context window for my high-risk Annex III deployment physically held, by region and by data-centre operator, in writing?
- Does my deployment’s reasoning-trace capture (where available) fall within the EU Data Boundary or outside it?
- What is the documented retention period for context, retrieval index, and reasoning trace separately?
- Will you sign a data-processing addendum that names Article 16 explicitly?
Article 12 audit-evidence in under four hours: AM-046
The claim. AM-046 holds that EU AI Act Article 12 (record-keeping for high-risk AI systems) and Article 19 (record retention by providers) are operationalised for agentic AI by a 14-field audit-evidence template that captures every agent decision in a regulator-queryable form. Logs are retained for the regulatory minimum (typically six months for the EU AI Act baseline, five to seven years for sector-specific overlays like HIPAA and SOX) in a queryable format that supports evidence assembly in under four business hours.
What the corpus predicted. The Article 12 audit-evidence piece forecast that enterprises building this post-hoc would discover three operational gaps: that their existing log infrastructure does not capture the right fields, that reasoning-trace capture is vendor-dependent and not always available, and that there is no off-the-shelf SIEM mapping for Article 12.
What is actually happening. All three gaps are now visible at scale. The fourteen template fields (deployment ID, agent identity, session ID, ISO timestamp, user prompt, retrieved context with provenance, model output, planned action, action class, approval reference, executed action, tool-call audit chain, output disclosure surface, policy version) only line up with existing application-log schemas for two or three of the fourteen. The reasoning-trace gap is real: most production deployments cannot replay the model’s intermediate steps even when their vendor offers it, because the deployer pipeline did not capture the trace ID at submission time. Enterprises that defer this work until June discover that retrofitting trace capture to a production deployment is a four-to-eight-week project, not a four-day project.
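The fourteen fields are concrete enough to express as a record type, and the gap analysis described above is a completeness check over that record. A minimal Python sketch, assuming a local schema that paraphrases the template's field names (this is an illustration, not the template's canonical form):

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class AuditEvidenceRecord:
    """One Article 12 evidence record per agent decision (sketch of the 14-field template)."""
    deployment_id: str
    agent_identity: str
    session_id: str
    timestamp_iso: str                  # ISO 8601, ideally millisecond granularity
    user_prompt: str
    retrieved_context: Optional[str]    # with provenance; often absent from legacy logs
    model_output: str
    planned_action: Optional[str]
    action_class: Optional[str]         # taxonomy value, not free text
    approval_reference: Optional[str]
    executed_action: Optional[str]
    tool_call_chain: Optional[str]      # audit chain of tool invocations
    disclosure_surface: Optional[str]   # where the output was shown
    policy_version: str

def missing_fields(record: AuditEvidenceRecord) -> list[str]:
    """Name the fields the assembly playbook must reconstruct manually."""
    return [f.name for f in fields(AuditEvidenceRecord)
            if getattr(record, f.name) is None]
```

The Optional fields are exactly the ones that, in the pattern above, exist in two or three application-log schemas out of fourteen; running `missing_fields` over a day of production records is the cheapest way to produce the "ten that exist, four that do not" list the playbook needs.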
What changes between today and 2 August. What is fixable in 91 days is field-level capture for the next two months of new agent activity, plus a documented assembly playbook that names the ten fields that exist somewhere and the four that do not, with the manual reconstruction path for the missing ones. What is not fixable in 91 days is full retroactive trace capture for deployments shipped in late 2025. The honest posture for those is documented gap analysis with the deployer-side justification under Article 9 risk-management methodology, rather than claimed compliance the audit will not support.
The Article 19 retention clock is the second variable enterprises mis-handle. Article 19 sets a six-month minimum retention for the Article 12 records by the provider; sector-specific overlays (HIPAA at six years, SOX at seven years, the financial-services CRD V at five years) override upward. The deployer choice between log-as-cold-storage and log-as-queryable changes the cost line by an order of magnitude. The four-business-hour assembly target only makes sense if the queryable substrate is online; cold-storage retrieval against a five-year archive does not hit the target without an investment in indexed metadata that very few deployers have made. The procurement question is whether the SIEM contract under renewal accommodates AI-specific log volumes at queryable retention rather than only at cold-storage retention.
What to ask your vendor in the next 30 days.
- Do you ship reasoning-trace capture as standard? If so, what API field carries the trace identifier and what is the retention default?
- Does your platform’s audit log expose the action-class taxonomy required by Article 12, or only generic event logs?
- What is your timestamp granularity (millisecond, second, request-level)?
- Will you provide a written commitment to a four-business-hour evidence-assembly SLA against your platform’s audit API?
AI Bill of Materials as procurement requirement: AM-117
The claim. AM-117 holds that AI Bill of Materials (AI-BOM) is moving from optional security artefact to enforceable procurement requirement in 2026, driven by EU AI Act Article 11 plus Annex IV technical-documentation requirements (effective 2 August 2026) and the CycloneDX ML-BOM and SPDX 3.0 specifications. Enterprise SBOM programmes need three specific extensions for AI components.
What the corpus predicted. The AI-BOM procurement piece forecast that enterprises that built their SBOM compliance against the executive-order-driven US framework would be blindsided when the EU procurement asked for a different inventory shape (model provenance, training-data lineage, fine-tuning history, evaluation evidence). The piece named CycloneDX ML-BOM 1.5 as the canonical specification and SPDX 3.0 AI extensions as the parallel standard.
What is actually happening. The blindsiding is happening on schedule. As of May 2026, none of the four enterprise-platform vendors publicly commit to shipping a CycloneDX ML-BOM 1.5 artefact as a standard deliverable with their enterprise deployments; primary documentation naming the artefact format was not located on any of the four trust-centre or product-documentation pages as of this date. The orchestration-layer vendors (ServiceNow Now Assist, Salesforce Agentforce, Microsoft Copilot Studio) reference Article 11 documentation in their EU AI Act compliance posture but the published material stops short of naming a CycloneDX or SPDX artefact in any version. The gap is the procurement deliverable: an enterprise whose buyer-side counsel asks for “the CycloneDX ML-BOM artefact for this deployment in version 1.5 format” gets a thoughtful answer, not a file. That gap is what closes between today and 2 August in the deployments where it closes; in the rest, it is the structural reason the claim moves to Partial in Q3.
What changes between today and 2 August. What is fixable is the contract-language extension. Procurement contracts can be amended at renewal to require a CycloneDX ML-BOM 1.5 artefact as a delivery condition, with the vendor’s gap explicitly named in the contract if they cannot deliver. What is not fixable in 91 days is unilateral vendor adoption of the standard. The 2026 reality is that buyer-side pressure has to lead the standard’s adoption; the procurement contract is the lever.
The CycloneDX-versus-SPDX question recurs in procurement reviews and is worth resolving once. CycloneDX ML-BOM 1.5 is the more mature artefact for AI components specifically (model cards, dataset references, evaluation evidence, fine-tuning provenance). SPDX 3.0 with the AI extensions is the format more enterprises already use for their broader software supply chain. Either is acceptable to the regulator under Article 11 plus Annex IV; what is not acceptable is no artefact at all, or a vendor-proprietary inventory format that the deployer cannot ingest into the existing SBOM substrate. The pragmatic 2026 posture for enterprises with mature SBOM programmes is to require SPDX 3.0 with AI extensions as the default and accept CycloneDX ML-BOM 1.5 where the vendor’s pipeline produces it natively. Buyer-side counsel should specify either format in the master agreement to keep the leverage at the contract level rather than at the operational level.
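For buyer-side counsel who want a concrete gate, the artefact question can be mechanised at ingestion. The sketch below builds a CycloneDX-1.5-style skeleton and names the gap when the model-card surface is absent; the required-field list here is an assumption for illustration, and field names should be verified against the published CycloneDX 1.5 schema before this becomes a contract check:

```python
# Minimal CycloneDX-1.5-style ML-BOM skeleton (illustrative; verify field
# names against the published CycloneDX 1.5 schema before relying on them).
ml_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [{
        "type": "machine-learning-model",
        "name": "claims-triage-model",        # hypothetical deployment
        "modelCard": {
            "modelParameters": {"task": "classification"},
            "considerations": {},             # evaluation evidence lives here
        },
    }],
}

# Assumed minimum surface for this sketch, not the specification's own list.
REQUIRED = ["modelCard"]

def vendor_artefact_gaps(bom: dict) -> list[str]:
    """List the required surfaces missing from each ML component."""
    gaps = []
    for comp in bom.get("components", []):
        if comp.get("type") == "machine-learning-model":
            gaps += [k for k in REQUIRED if k not in comp]
    return gaps

print(vendor_artefact_gaps(ml_bom))  # []
```

A vendor deliverable that fails this kind of check is the "thoughtful answer, not a file" scenario named above; the check makes the gap a named contract condition rather than a review-meeting impression.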
What to ask your vendor in the next 30 days.
- Do you ship a CycloneDX ML-BOM 1.5 artefact with this enterprise deployment? Yes or no.
- If yes, can you send me the artefact for a representative deployment so my buyer-side counsel can verify the field coverage against Annex IV?
- If no, will you commit in the master agreement to ship one within 12 months, and what are the contractual remedies if you do not?
- What is your training-data provenance documentation: is it model-card narrative, dataset-level cite list, or token-level provenance trace?
Works councils and the EU rollout chokepoint: AM-120
The claim. AM-120 holds that AI agent deployments touching employee work in EU jurisdictions with co-determination law (Germany BetrVG §87, Netherlands WOR Article 27, France CSE provisions) require works council consent before activation in 2026. Most US-headquartered AI vendors lack a customer-success workflow for this, producing a class of stalled rollouts that read as “vendor delay” but are actually compliance gaps. Total EU-site timeline from selection to production is 6 to 9 months when handled well, 12 to 18 when consultation begins late.
What the corpus predicted. The works council piece forecast that the German Betriebsverfassungsgesetz §87(1) co-determination right would be the first surface where the pattern showed up at enterprise scale, with the Federal Labour Court interpreting “technical systems designed to monitor employee behaviour or performance” broadly enough to cover any AI assistant or agent that touches employee work.
What is actually happening. The forecast is holding. The German pattern is the dominant EU surface in 2026, with Dutch WOR Article 27 (specifically subsections 1(k) and 1(l) covering personnel registration systems and workplace monitoring) and the French CSE consultation rights producing parallel patterns at lower volume. The vendor-side situation has not improved materially: as of May 2026, no major US-headquartered enterprise-AI vendor publishes a documented customer-success workflow that names BetrVG §87, WOR Article 27, and the French CSE consultation timeline at the level of operational specificity an EU works council chair would accept. The orchestration-layer vendors (ServiceNow, Microsoft Copilot Studio, Salesforce Agentforce) have begun adapting their EU customer-success organisations to the consultation reality, with the depth and consistency varying by region; primary documentation of a complete consultation playbook was not located on the public-facing enterprise documentation as of this date. Where vendors do not have a workflow, the procurement deliverable is the deployer-side bridge: the documented consent procedure, the named consultation timeline, the named labour-law counsel.
What changes between today and 2 August. What is fixable in 91 days is documenting the works council consent workflow on the deployer side for every EU jurisdiction the enterprise operates in. The artefact is the documented consent procedure, the named consultation timeline, and the named labour-law counsel. Where vendors do not have a workflow, the deployer builds the bridge themselves; that bridge is the procurement deliverable that survives the consultation. What is not fixable in 91 days is the end-to-end consent process for a deployment that has not started consultation yet; the German six-to-nine-month consultation timeline does not compress without specific cause.
The German Federal Labour Court (Bundesarbeitsgericht) case law on §87(1) point 6 is the part US-headquartered vendors most consistently underestimate. The court has interpreted “technical equipment designed to monitor employee behaviour or performance” broadly enough to cover any system that captures, processes, or analyses employee work activity; the analytical capacity does not have to be the system’s primary purpose. An AI assistant whose secondary effect is to produce performance-relevant data falls in scope. A deployer whose vendor told them “this is just a productivity tool, not a monitoring system” has the statutory analysis backwards.
The works council can compel deactivation through the labour courts even after a deployment has gone live, with the cost of remediation falling on the deployer. The Dutch Ondernemingskamer of the Amsterdam Court of Appeal is the parallel forum for Article 27 disputes; the French Conseil de prud’hommes handles CSE consultation challenges. The litigation surface is real; the consultation surface is the way to avoid it.
What to ask your vendor in the next 30 days.
- Do you have a documented customer-success workflow for EU works council consent? In which jurisdictions specifically?
- What artefacts do you provide that a works council chair would accept as documentation of the AI system’s purpose, scope, and employee impact?
- Will you commit to a named EU customer-success contact who has worked through a BetrVG §87 consultation to completion?
- What is your turnaround SLA for revised system documentation when a works council requests changes?
The two governance-process claims still holding
Head of AI Governance role: AM-047
AM-047 holds that the Head of AI Governance role is now a named operating role in 60 percent of Fortune 100 enterprises per the Forrester 2026 Enterprise AI Predictions report, and is the strongest single predictor of an enterprise’s score on the readiness diagnostic Q10. The role’s effective shape converges on six accountabilities: cross-functional governance ownership, EU AI Act compliance posture, vendor procurement gate-keeping, deployment kill-criterion enforcement, audit-evidence substrate ownership, and internal upskilling. Reporting line is to the executive committee rather than to IT, security, or legal.
What the corpus did not predict precisely is the substantive variation in role authority. The role exists; the share of role-holders with written authority to block deployments before activation is materially lower than the role-existence number. In the enterprises where the Head of AI Governance has block authority, the operational-evidence work is moving on schedule. In the enterprises where the role sits in advisory capacity, the operational-evidence work is moving at the speed the deployment lead allows it to. That distinction is the next governance failure mode and the reason this piece predicts AM-047 stays Holding for the role-existence claim while a sub-claim on block authority would not.
The compensation question is downstream of the authority question and worth reading in that order. The published 2026 ranges (Director $250-450K base, VP $400-700K base, C-level $600K-$1.2M total comp with significant equity at growth-stage and tech enterprises) are credible only at the upper end of the band when the role carries written deployment-block authority. Roles paid at the lower end of the band are advisory roles by another name. The board-side test is whether the role’s calendar in Q3 2026 includes regulator-engagement entries and procurement-block decisions, or whether it consists primarily of internal coordination meetings. The first is the hire that earned the title; the second is the title without the function.
Centralised versus federated governance: AM-051
AM-051 holds that the dominant 2026 Fortune 500 pattern is hybrid (central function owns regulatory and procurement; business units own deployment operations and ROI accountability), because purely centralised models do not scale past 50 to 100 deployments and purely federated models cannot satisfy EU AI Act Article 9 risk-management documentation consistency. The threshold at which the hybrid model becomes structurally superior is approximately 30 production deployments or operating in two or more EU AI Act high-risk Annex III categories.
The pattern is set. The 2026 enforcement window is what will test whether the hybrid model produces consistent Article 9 risk-management documentation across business units; the early read is that the central function has carried the documentation work, with business-unit involvement uneven. The Q3 review will surface whether documentation consistency holds under regulator scrutiny. For now, the organisational design claim is Holding.
The Article 9 documentation-consistency test is specific. Article 9(2) requires the risk-management system to be a continuous iterative process, with risks identified, evaluated, mitigated, and tested across the deployment’s lifecycle. A regulator reading the documentation expects to see the same structural artefact for every Annex III deployment in the enterprise: the same risk taxonomy, the same evaluation methodology, the same mitigation hierarchy, the same testing record schema. Federated documentation that varies in shape from business unit to business unit fails the consistency test even when each individual document is well-prepared. The hybrid model’s premise is that the central function owns the schema and the business units fill it in; the operational reality is that the schema-versus-content split degrades when the central function does not have time to review every business unit’s submission. That is the surface where the model breaks first.
Two risk-pricing signals from outside the corpus
The financial system is pricing the deadline in even where enterprises are not.
Reinsurance tightening: AM-119
AM-119 holds that the 2026 cyber-insurance renewal tightening enterprises are experiencing is upstream-driven by reinsurance market repricing of catastrophic AI tail risk (Lloyd’s of London, Munich Re, Swiss Re), not by primary-carrier loss data. The reinsurance signal travels via tighter treaty terms, AI-specific exclusions, and elevated retentions, with a 6 to 12 month lag to primary policies.
The substantive nuance worth surfacing: the dominant pricing signal in the published reinsurance research is aggregation risk, not single-deployment cascading failure. Aggregation risk is the scenario where one prompt-injection vector or one model-update regression hits many insureds simultaneously. That changes which policy language tightens first. The cyber wordings that tighten are the “widespread event” definitions, the systemic-event exclusions, and the silent-cyber clarifications, rather than the professional-liability exclusions in tech-E&O. Enterprise risk officers reading the renewal proposal should expect language changes in the cyber wording specifically, with the E&O wording lagging by one renewal cycle.
The named published research for buyer-side counsel to read in advance of renewal is Lloyd’s Futureset systemic-risk programme for the catastrophic-scenario framing, the Munich Re Cyber Insurance Risk Report (annual, 2024 through 2026 editions) for the loss-modelling framework that primary carriers actually reference, and the Swiss Re Institute sigma research (2025 publications on AI-related liability and cyber-physical convergence) for the underwriter-assumption layer. The treaty terms themselves are private; the published research is the public proxy for what those treaty terms now contain.
Agentic-AI insurance gap: AM-107
AM-107 holds that the 2026 insurance market does not yet offer agent-specific E&O policies in any mature form. Existing cyber and tech-E&O policies were drafted against human-error and software-defect risk models that do not cleanly map to autonomous reasoning actors. The 2 August 2026 deadline does not fix the gap; it makes the gap more visible. Chief risk officers who flagged this in the 2026 underwriting plan are ahead of the market. Most did not.
The corpus’s D&O insurance and AI-supervision claim piece covers the parallel surface for board-level liability. The 2 August enforcement opens the regulatory underpinning for derivative-action exposure that was always implicit; the D&O policy language buyers signed in 2024 increasingly does not address it. The reading from the Claude Mythos piece on what “too dangerous to release” means for the enterprise risk register is the same shape: the financial-system pricing of AI risk is moving faster than the enterprise governance documentation that would let an underwriter price specific deployments accurately.
The structural issue for the chief risk officer is that the agent-specific E&O coverage gap interacts with the works council surface from AM-120 in a specific way. The E&O policy responds to claims of professional negligence; the works council litigation surface is a labour-relations claim that often falls outside the E&O response trigger. The tech-E&O policy responds to claims of software defect; the AI agent’s reasoning step is not cleanly a defect under the legacy wording. Enterprises that ship into Annex III categories now hold three separate uncovered exposures (the works council labour claim, the agent reasoning-step E&O claim, the Article 99 regulatory penalty) that the 2026 policy substrate does not address as a single coherent surface. Naming the gap is what the underwriting plan does; closing the gap requires the market to ship the product and the deployer to commit to the premium.
The 90-day checklist
This is the section to forward, print, and use. Each role gets four to six items. Each item names the action, the owner, the deliverable, the primary-source reference, and the test.
For the CIO
- Run the AI mapping exercise. Inventory every AI system in EU jurisdiction. Output: a spreadsheet with system name, vendor, internal owner, Annex III classification status, and Article 6(3) derogation analysis. Test: can you hand this to a regulator on 5 August and answer their first three questions from the spreadsheet alone?
- Identify the deployment subset most exposed: anything touching credit scoring, hiring, biometrics, education, essential services, critical infrastructure. These are the priority queue. Test: does the inventory in item 1 sort by Annex III category?
- Get the Head of AI Governance written authority to block deployments. Document the authority in the same memo that names the role. If you cannot get it before 2 August, name the gap explicitly in the risk register with a remediation date.
- Schedule a 60-minute review with the four operational-evidence owners (data residency, audit-evidence, AI-BOM, works council). One agenda item per owner: what is the smallest credible thing you can ship by 1 August?
- Decide which deployments are pausing rather than rushing. Rushed compliance is more expensive than paused deployment, both at the audit and at the renewal.
For the Head of AI Governance
- Pull the Article 9 risk-management documentation gap. For each Annex III deployment, list (a) what risk register entry exists, (b) what evidence supports it, (c) what regulator-queryable artefacts back it up. Time-box the gap-pull to one working week.
- Run the AM-046 fourteen-field template against three live deployments. Time the evidence assembly. If it takes more than four hours per deployment, you have a tooling problem, not a process problem, and the remediation is different.
- Document the works council consent workflow for every EU jurisdiction the enterprise operates in. Where vendors do not provide one, build the bridge yourself. Test: can a German works council chair read your documentation and recognise it as a credible BetrVG §87 submission?
- Update the Q3 board-level AI report to distinguish governance-process maturity from operational-evidence maturity. Most boards conflate the two; the conflation is what produces overconfident board minutes that the next derivative action will quote.
- Confirm the AI literacy obligation under Article 4 is met for every staff member operating an AI system. The obligation has been in force since 2 February 2025. If your enterprise has not done it, August is not your most exposed obligation.
- Surface the AI Bill of Materials gap to procurement with the CycloneDX ML-BOM 1.5 specification attached. Brief them on what the artefact contains and why generic SaaS RFP language does not produce it.
For the General Counsel
- Read Article 99 of the AI Act in full. Brief the executive committee on the penalty structure: €35 million or 7 percent for prohibited practices; €15 million or 3 percent for high-risk non-compliance; €7.5 million or 1 percent for false or misleading information to authorities. The first Article 99 enforcement action is likely Q3 2026; the board will ask about it.
- Identify the D&O insurance question: does the policy explicitly carve out AI-supervision claims? If you do not know, ask the broker before renewal.
- Map every AI deployment to the correct Article 113 sub-paragraph. Some of the deployments wait until 2 August 2027 (Annex I embedded products). Knowing which is the difference between rushed bad work and prioritised good work.
- Update the standard procurement contract template to include the AM-052 exit-clause provisions: audit-log export, trained-state extraction, data-residency continuity, liability tail. Existing SaaS templates do not cover these.
- Brief the board chair on the Caremark-style derivative-action exposure named in AM-116. Most boards have not read this in their actual policy language.
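The tier arithmetic behind the Article 99 briefing is worth one worked example, assuming the "whichever is higher" reading the Act applies to undertakings:

```python
# Maximum Article 99 exposure per tier: the higher of the fixed cap and the
# turnover percentage. Figures from the tiers named in the briefing item.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_noncompliance": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_exposure(tier: str, global_turnover_eur: float) -> float:
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, pct * global_turnover_eur)

# For a €2bn-turnover enterprise, the 3 percent figure (€60m) exceeds the
# €15m fixed cap, so the turnover leg governs:
print(max_exposure("high_risk_noncompliance", 2_000_000_000))  # 60000000.0
```

The executive-committee point the arithmetic makes is that for any large enterprise the percentage leg, not the headline euro figure, is the binding number.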
For the CISO
- Map the OWASP Agentic AI Top 10 to your Article 9 risk-management documentation. Article 9 requires demonstrated security controls; the OWASP framework is the closest thing to a recognisable security taxonomy a regulator will accept.
- Run the EchoLeak cross-agent prompt-injection scenario against any deployment that ingests untrusted content and has tool surfaces capable of exfiltration. The scenario is an architectural class, not a Copilot-specific bug.
- Document the CISO sign-off path for action-approval gates. Article 14 (human oversight) requires meaningful intervention authority. Segregation-of-duties controls built for human approvers do not satisfy the Article 14 standard.
- Co-sign the works council consent workflow with the Head of AI Governance. The CISO has a stake here that is often missed: works council consent commonly includes monitoring-system specifics that are CISO-owned.
- Pull the kill-switch protocol documentation. If you do not have one written, write it before 2 August. The protocol names the trigger conditions, the named decision-maker, and the named recovery procedure.
For the VP of Procurement
- Run the AM-117 ML-BOM check against every enterprise-AI vendor in your contracted estate. Specific question to ask each vendor: do you ship a CycloneDX ML-BOM 1.5 artefact with your enterprise deployment, yes or no, and if yes, when?
- Run the AM-120 works-council workflow check against every vendor: do you have a documented customer-success workflow for EU works council consent, and which jurisdictions are covered?
- Update RFP templates to include the 60 governance-mapped questions from AM-026. Existing SaaS RFPs miss six dimensions that the August enforcement window now requires.
- Identify which vendor renewals fall between 2 August 2026 and 31 December 2026. These are the renewals where the price-to-power balance shifts toward the buyer; the regulator-driven leverage will not last past Q1 2027 as the market re-prices.
- Tag every active vendor contract with its primary EU-AI-Act risk category (data residency, audit-evidence, AI-BOM, works council, or none). The tagging is the input the General Counsel needs for the Article 113 sub-paragraph mapping in their checklist; without it, the GC cannot answer which contracts wait until 2027 and which do not.
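The renewal-window and tagging items above reduce to a simple filter over the contract estate. A minimal sketch, assuming a contract record with a `renewal_date` and a single primary `risk_tag` field; the record shape and vendor names are hypothetical.

```python
from datetime import date

# The buyer-leverage window named in the checklist.
WINDOW_START, WINDOW_END = date(2026, 8, 2), date(2026, 12, 31)

# The five primary EU-AI-Act risk categories from the tagging item.
RISK_TAGS = {"data-residency", "audit-evidence", "ai-bom",
             "works-council", "none"}

def leverage_renewals(contracts):
    """Contracts renewing inside the 2 Aug - 31 Dec 2026 window,
    where regulator-driven buyer leverage applies."""
    return [c for c in contracts
            if WINDOW_START <= c["renewal_date"] <= WINDOW_END]

def untagged(contracts):
    """Contracts missing a valid primary risk tag: the gap that
    blocks the GC's Article 113 sub-paragraph mapping."""
    return [c for c in contracts if c.get("risk_tag") not in RISK_TAGS]

contracts = [
    {"vendor": "A", "renewal_date": date(2026, 9, 15), "risk_tag": "ai-bom"},
    {"vendor": "B", "renewal_date": date(2027, 3, 1), "risk_tag": None},
    {"vendor": "C", "renewal_date": date(2026, 11, 30), "risk_tag": "works-council"},
]
```

On this sample, vendors A and C fall in the leverage window and vendor B is the untagged gap the GC cannot map.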
What we will be tracking after 2 August
Three claim updates will publish in the first two weeks of August 2026.
The first is the enforcement-action count. How many investigations did EU national competent authorities open in the first 14 days? The number sets the operational baseline for what enforcement intensity actually looks like. The early signal will tell enterprises whether the regulatory posture is risk-based and selective or broad and survey-style.
The second is the first Article 99 fine. Which enterprise, which obligation, which size? The first fine sets the precedent the next quarterly board minutes will reference. Whether it lands at the prohibited-practice €35 million tier or the high-risk €15 million tier signals which obligation category enforcement will prioritise in 2026.
The third is the first works council case to reach an EU labour court over AI deployment. The case will name a vendor and a deployer; the public docket will be the first concrete evidence of the BetrVG §87 pattern at the litigation level rather than at the consultation level.
These are forward-looking ledger commitments. The publication will report how the corpus's predictions fared against reality. The pattern will continue on a quarterly cadence, picking one slice of the corpus to trace through the ledger; the next slice is the economics cluster (loaded FTE versus total agent operational cost, retraining gap, total cost of ownership reality), with the security, insurance, and D&O cluster following as the regulatory window closes.
The methodology for the Q3 measurement is fixed in advance to keep the result honest. Each of the eleven cited claims gets its review-date timer reset on publish today; each is re-tested on 1 October 2026 against the same primary-source surface that supports it now. Any verdict change is logged to the corrections page the same day; the aggregate verdict-change rate across the eleven goes into the Q3 review bulletin. The new claim AM-127 is itself auditable on the same surface: either at least three of the four operational-evidence claims downgrade by 1 October, or AM-127 moves to Not holding. The methodological commitment is that the publication will not adjust the threshold to make the claim hold.
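The AM-127 test is mechanical enough to state directly. A sketch of the fixed threshold; the claim identifiers are placeholders, and treating a move to "Partial" as the downgrade follows the at-risk framing above, which is an assumption about how the ledger records it.

```python
def am127_verdict(operational_claims: dict[str, str]) -> str:
    """AM-127 holds only if at least three of the four
    operational-evidence claims have downgraded by the
    1 October 2026 re-test. Verdict labels are illustrative."""
    downgrades = sum(1 for verdict in operational_claims.values()
                     if verdict == "Partial")
    return "Holding" if downgrades >= 3 else "Not holding"

# Hypothetical re-test result on 1 October 2026: three of the
# four operational-evidence claims have moved to Partial.
q3_retest = {"claim-1": "Partial", "claim-2": "Partial",
             "claim-3": "Holding", "claim-4": "Partial"}
```

The threshold is a constant in the function, not a parameter, which is the shape of the commitment not to adjust it after the fact.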
This piece is the next in a recurring quarterly enterprise-AI series tracing the corpus through the Holding-up ledger. The flagship Q2 2026 review bulletin and the upcoming Q3 2026 review (late July) remain the structured corpus-wide audit. Editorial standards are at /standards/. The signature decision is at /how-its-written/.
Spotted an error? See corrections policy →
Reasoned disagreement is a first-class signal here. Every review cycle weighs documented dissent; material dissent becomes part of the article's change history. This channel is not a corrections form: use /corrections/ for factual errors.
Agentic AI governance →
Governance frameworks, oversight patterns, and compliance postures for enterprise agentic-AI deployment. 44 other pieces in this pillar.