Method: every claim tracked, reviewed every 30–90 days, marked Holding, Partial, or Not holding. Drafted by Claude; signed off by Peter. How this works →
AM-105 · published 27 Apr 2026 · revised 27 Apr 2026 · 13 min read · Risk & Governance

Offensive security and the clockspeed gap: why CIOs cannot defend AI-era threats with defensive-only postures

AI did not just give attackers new tools. It gave them a faster OODA cycle. The senior IT leader running a defensive-only posture in 2026 is running at human clockspeed against attackers running at agent clockspeed. The gap is the risk.

Holding · reviewed 27 Apr 2026 · next review +89d

The previous piece on this site argued that Anthropic’s withholding of Claude Mythos is a posture-changing event for senior IT. The argument was about capability. The deeper argument is about clockspeed.

AI did not just give attackers new tools. It gave them a faster OODA cycle.

John Boyd’s OODA loop (observe, orient, decide, act) was the framework that taught fighter pilots to win not by being more skilled but by completing decision cycles faster than their opponents. Charles Fine’s Clockspeed reframed the same idea for industry: every sector has an inherent rate of change, and competitive advantage compounds in the hands of organizations that match or exceed their sector’s clockspeed. Cyber security has always been an OODA-loop discipline. The asymmetry of 2026 is that on one side of the loop the cycle just compressed by an order of magnitude, and on the other side it did not.

This piece is for the CIO and CISO trying to read what that does to their operating model. The short version: a defensive-only posture cannot win an OODA race against an attacker running at agent cadence. Closing the gap is not optional, and it is not something a faster patch program achieves. It requires an offensive-security operating mode that exists specifically to compress defender clockspeed.

The clockspeed asymmetry

Until late 2025, the attacker OODA cycle for a competent threat actor was measured in days-to-weeks. Discovery of a target’s vulnerable surface took reconnaissance time. Vulnerability research took human investigation time. Exploit development took human engineering time. Weaponization took testing time. Even the Lapsus$ playbook, which compressed many of these steps through social engineering, still ran on human-tempo decision cycles.

The Mythos disclosure is the first widely visible piece of evidence that the attacker OODA cycle has compressed by an order of magnitude. Anthropic’s own statement on the FreeBSD NFS exploit catalogued as CVE-2026-4747 reads: “no human was involved in either the discovery or exploitation of this vulnerability after the initial request to find the bug.” The browser exploit Anthropic disclosed chained four vulnerabilities, wrote a JIT heap spray, and escaped both renderer and operating-system sandboxes autonomously. The UK AI Security Institute reported a 73% success rate on expert-level hacking tasks, against a baseline where no prior public model could complete such tasks at all in April 2025. That is a year-on-year compression of attacker clockspeed.

The defender OODA cycle has not compressed. It is still governed by:

  • Patch cycles (typically 30–90 days for non-critical, 7–30 days for critical, with CISA’s Known Exploited Vulnerabilities catalogue setting a 21-day federal mandate that most enterprises treat as aspirational).
  • Change-management boards (weekly to monthly cadence in most large enterprises, with emergency-change protocols that themselves run on hours-to-days).
  • Detection-engineering iteration (the loop from new threat intelligence to deployed detection rule typically runs 1–4 weeks in mature SOCs, longer where the engineering function is reactive rather than proactive).
  • Vendor security advisories (vendors disclose on schedules optimized for coordinated disclosure, not for defender response. Typical lead time from internal patch availability to public advisory is 60 to 90 days).
  • Pentest cadence (annual or quarterly for most enterprises, with continuous testing still rare outside the largest organizations).
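One way to make the asymmetry concrete is to express each defender control as a cycle time and compare it to an assumed attacker cycle. A minimal sketch, with all numbers illustrative midpoints of the ranges above and the attacker cycle an assumption, not a measurement:

```python
# Illustrative comparison of defender control cadences (in hours) against
# an assumed agent-cadence attacker OODA cycle. All figures are midpoints
# of the ranges discussed above, not measurements.
DEFENDER_CYCLES_HOURS = {
    "critical patch cycle": 18 * 24,       # ~7-30 days
    "change-advisory board": 14 * 24,      # weekly-to-monthly cadence
    "detection-engineering loop": 17 * 24, # ~1-4 weeks
    "vendor advisory lead time": 75 * 24,  # ~60-90 days
    "pentest cadence": 90 * 24,            # quarterly
}

ATTACKER_CYCLE_HOURS = 4  # assumed agent-cadence discovery-to-weaponization

def clockspeed_gap(defender_hours: float, attacker_hours: float) -> float:
    """Ratio of defender cycle time to attacker cycle time."""
    return defender_hours / attacker_hours

gaps = {name: clockspeed_gap(hours, ATTACKER_CYCLE_HOURS)
        for name, hours in DEFENDER_CYCLES_HOURS.items()}

# Even the fastest control in this sketch runs roughly two orders of
# magnitude slower than the assumed attacker cycle.
slowest = max(gaps, key=gaps.get)
```

Under these assumptions every control sits at a gap ratio of 80x or worse, which is the quantitative shape of the argument that follows.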

This is the gap. On one side, an OODA cycle compressing toward minutes. On the other, an OODA cycle still operating in weeks-to-quarters. The 2026 cyber-risk variable that matters most is no longer “what is your attack surface” or “what is your patch hygiene.” It is “what is your defender clockspeed, and how does it compare to your attacker’s?”

Why defensive-only postures fail at AI cadence

A defensive-only posture is a posture optimized to react to attacks. It assumes attackers will be detected, that detection will trigger response, and that response will arrive before consequential harm. Three of the assumptions underneath that posture have just become weaker.

The CVSS-prioritization queue is a clockspeed-loser. Standard vulnerability management ranks patches by CVSS score, often weighted by exploitability metadata from KEV. The implicit logic is: we patch in priority order, and we accept the residual-risk window because attackers also operate on a discovery-and-weaponization timeline. Mythos-class capability collapses that timeline. A vulnerability disclosed on Tuesday is no longer two weeks from being weaponized; it is potentially hours. A queue that orders work by next-quarter risk is solving last-decade’s problem.
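The shift can be sketched as a change in the sort key. A minimal, illustrative example (the CVE names, weights, and the `auto_exploitable` flag are all hypothetical, standing in for whatever exposure signals an organization actually has):

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve: str
    cvss: float
    internet_facing: bool
    auto_exploitable: bool  # assumed flag: reachable by autonomous tooling

def legacy_priority(v: Vuln) -> float:
    # Classic queue: CVSS score only.
    return v.cvss

def clockspeed_priority(v: Vuln) -> float:
    # Weight exposure and autonomous exploitability ahead of raw severity,
    # since either can collapse time-to-weaponization from weeks to hours.
    score = v.cvss
    if v.internet_facing:
        score += 3.0
    if v.auto_exploitable:
        score += 5.0
    return score

queue = [
    Vuln("CVE-A", cvss=9.1, internet_facing=False, auto_exploitable=False),
    Vuln("CVE-B", cvss=7.2, internet_facing=True, auto_exploitable=True),
]

legacy_order = sorted(queue, key=legacy_priority, reverse=True)
agent_era_order = sorted(queue, key=clockspeed_priority, reverse=True)
# The two queues disagree: the lower-CVSS CVE-B jumps ahead once exposure
# and autonomous exploitability are priced in (7.2 + 3 + 5 = 15.2 > 9.1).
```

The point is not these particular weights; it is that a queue ordered by severity alone and a queue ordered by time-to-weaponization produce different work orders.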

The change-management board is a clockspeed-loser. The board exists to reduce operational risk by adding human review to changes. The trade is more time for fewer outages. That trade was rational when both attackers and defenders ran at human cadence. When the attacker runs at agent cadence, every CAB cycle that adds a week to a security patch deployment is a week of free dwell time gifted to whichever attacker is moving fastest. A CAB process that does not have an explicit AI-era exception path is now actively producing risk.

The “wait for confirmed exploit in the wild” detection rule is a clockspeed-loser. Many SOCs prioritize detection-engineering investment toward TTPs with confirmed observed exploitation. The logic is sound when exploit observation lags exploit availability by weeks or months. With autonomous offensive capability, the lag compresses. By the time exploitation is confirmed in the wild, a substantial fraction of vulnerable estate is already compromised, and the defender is engineering detection for an attack that has already happened. This is the operational version of the Mandiant M-Trends finding that median dwell time for externally notified intrusions is 26 days. That number has to come down, and it will not come down while detection engineering operates on incident-triggered logic.

The point is not that defensive controls become useless. They do not. Patching, change management, and detection engineering remain table stakes. The point is that they are now insufficient. A posture composed only of defensive controls operates at human clockspeed by construction. Closing the gap requires controls that compress defender clockspeed directly.

Offensive security as clockspeed compression

There are five operational shifts a CIO or CISO can make in 2026 that compress defender clockspeed without crossing into hack-back territory or any other reading of “offensive” that would put the organization on the wrong side of computer-misuse legislation.

One: continuous attack-surface validation through breach-and-attack simulation. Breach-and-attack simulation (BAS) tools such as those from SafeBreach, Picus, AttackIQ, and Pentera execute known TTPs against a production-similar environment continuously. The output is daily evidence of which controls fired and which did not. The clockspeed effect is to replace the quarterly pentest cycle with a daily validation cycle. A vulnerability that lands in a tested control category gets caught the same day instead of the next quarter. The cost of standing this up against the top 10 critical assets in an enterprise is meaningfully lower than most CIOs assume. Pilot scope can run on a single low-six-figure budget line, and the operational return is visible inside the first quarter.
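The shape of that daily validation cycle can be sketched in a few lines. The `run_ttp` and `control_fired` functions below are placeholders, not real vendor API calls; each BAS platform exposes its own interface for executing simulations and querying detections:

```python
# Hypothetical sketch of a daily BAS validation loop. `run_ttp` and
# `control_fired` stand in for whatever API your BAS platform exposes;
# they are placeholders, not real vendor calls.
from datetime import date

TTPS_UNDER_TEST = ["T1059.001", "T1003.001", "T1021.002"]  # ATT&CK IDs

def run_ttp(technique_id: str) -> bool:
    """Placeholder: execute the simulated technique against the test estate."""
    return True  # simulation executed

def control_fired(technique_id: str) -> bool:
    """Placeholder: query the SIEM for an alert tied to this execution."""
    return technique_id != "T1003.001"  # pretend credential dumping went undetected

def daily_validation() -> dict:
    report = {"date": date.today().isoformat(), "gaps": []}
    for ttp in TTPS_UNDER_TEST:
        if run_ttp(ttp) and not control_fired(ttp):
            report["gaps"].append(ttp)
    return report

# The output is the daily answer to "which controls fired and which did
# not" -- the evidence that replaces the quarterly pentest snapshot.
report = daily_validation()
```

The value is in the cadence, not the code: a detection gap surfaces the day it opens rather than at the next engagement.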

Two: AI-augmented internal vulnerability discovery. This is the controversial one and the most clockspeed-significant of the five. The argument is straightforward: if Mythos-class capability will reach attackers on a timeline measured in quarters rather than years, the defender who claims that capability first for their own estate compresses their discovery clockspeed to match. Project Glasswing is one route in for the very largest enterprises. For everyone else, the Q2 2026 reality is that frontier-lab partner programs, security-research arms of major cloud providers, and a small number of specialist firms have begun offering AI-augmented vulnerability discovery as a service against client codebases. The economics are still uncomfortable. The cost of having Mythos-class capability point at your own estate is non-trivial, and the disclosure ethics need careful handling. The alternative is letting an attacker get there first.

Three: threat hunting as a standing function. Traditional threat hunting is a part-time activity layered on top of the SOC, activated when incident response surfaces an anomaly worth investigating. Mature programs make it a standing function: a small team whose entire mandate is forming hypotheses about adversary behaviour, designing tests, and running them against the environment continuously. The clockspeed effect is to collapse the gap between the moment new threat intelligence becomes available and the moment detection logic is deployed in production. SANS and the MITRE ATT&CK Defender programme both publish reference material that works at small-team scale. The conversion cost from incident-triggered to standing is a single headcount reallocation.

Four: deception at scale. Deception inverts the clockspeed asymmetry. An attacker that has compressed reconnaissance to minutes is forced to spend a non-trivial fraction of that time validating which findings are real and which are decoys. Thinkst Canary, Tracebit, and the broader deception-grid category place high-fidelity decoys throughout production environments. A canary file accessed, a canary credential used, a canary hostname resolved: each is a signal that fires within seconds, against an attacker who has just spent reconnaissance time on it. MITRE Engage, the threat-informed-defense programme that pairs with ATT&CK, is the right reference for thinking about deception as a strategic capability rather than as a single-tool deployment. Deception is also the offensive-security shift with the most asymmetric cost-to-effect ratio: a hundred canaries deployed across an enterprise estate cost less than one mid-sized security-tool subscription, and the operational signal-to-noise of canary alerts is unusually high.
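The canary-credential pattern, stripped to its essentials, is a membership check against identities nothing legitimate should ever touch. A minimal sketch with illustrative names (a real deployment would seed these through the deception platform and consume events from the SIEM):

```python
# Minimal sketch of a canary-credential monitor: seed decoy usernames
# that no legitimate process should ever use, then treat any
# authentication attempt with one as a high-fidelity alert.
# All names here are illustrative.
CANARY_USERS = {"svc-backup-legacy", "admin-dr-test", "oracle-batch-01"}

def scan_auth_events(events: list[dict]) -> list[dict]:
    """Return every auth event touching a canary identity."""
    return [e for e in events if e.get("user") in CANARY_USERS]

events = [
    {"user": "alice", "src": "10.0.4.17", "result": "success"},
    {"user": "svc-backup-legacy", "src": "10.0.9.201", "result": "failure"},
]

alerts = scan_auth_events(events)
# One hit: a canary credential was tried. There is no benign explanation
# for that event, which is why canary alerts carry unusually high
# signal-to-noise.
```

Even the failure case fires: an attacker does not need to succeed against a decoy credential to reveal that reconnaissance has happened.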

Five: counter-AI controls and agent-cadence monitoring. This is the second reading of “offensive” Peter approved as in-scope: using AI defensively at attacker-grade capability. The simplest pattern is prompt-injection canaries, where decoy instructions seeded into an agent’s context window detect prompt-injection attempts the way a deception canary detects reconnaissance. The harder pattern is agent-activity monitoring at agent cadence: where a SOC playbook would historically alert on “credential used at unusual hour,” the AI-era playbook needs to alert on “agent issued five tool-calls in three seconds against an unfamiliar API.” This is the operational instantiation of the MTTD-for-Agents framework developed on this site. Counter-AI controls are the youngest of the five categories and the only one where vendor maturity is still genuinely thin. The right CIO posture is to treat them as a 2026–2027 build investment, not a 2026 procurement.
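The agent-cadence alert pattern is a sliding-window rate check per principal. A minimal sketch using the “five tool-calls in three seconds” thresholds from the example above (thresholds and the principal name are illustrative):

```python
from collections import deque

# Sketch of an agent-cadence rate detector: alert when one principal
# issues more than MAX_CALLS tool-calls inside a WINDOW_SECONDS window.
# Thresholds are illustrative, matching the example in the text.
MAX_CALLS = 5
WINDOW_SECONDS = 3.0

class AgentCadenceMonitor:
    def __init__(self):
        self.calls = {}  # principal -> deque of recent timestamps

    def record(self, principal: str, timestamp: float) -> bool:
        """Record a tool-call; return True if the rate threshold is breached."""
        window = self.calls.setdefault(principal, deque())
        window.append(timestamp)
        # Drop timestamps that have aged out of the window.
        while window and timestamp - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) > MAX_CALLS

monitor = AgentCadenceMonitor()
# Six calls in 1.5 seconds from one agent identity: the sixth breaches.
alerts = [monitor.record("agent-7", t)
          for t in (0.0, 0.3, 0.6, 0.9, 1.2, 1.5)]
```

A human operator almost never breaches a threshold like this; an autonomous agent in a tight loop breaches it immediately, which is what makes cadence itself a usable signal.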

These five shifts are not independent. They tie together through a single threat-informed-defense baseline.

The MITRE ATT&CK baseline that ties it together

MITRE ATT&CK is the publicly available enumeration of adversary tactics, techniques, and procedures, maintained continuously and updated against current threat-actor behaviour. MITRE Engage is its deception-and-denial counterpart. Together they constitute the threat-informed-defense baseline that the five operational shifts above all reference.

The reason this matters for a clockspeed argument: a defender operating without a threat-informed baseline is a defender whose OODA cycle starts from first principles every time a new TTP appears. A defender operating with a threat-informed baseline starts each cycle with a partial answer already on the shelf. The compression is meaningful. MITRE’s ATT&CK Evaluations programme provides the public evidence of which security tools detect which TTPs, which is the input that allows a CIO to choose a tool stack against measured detection coverage rather than vendor marketing material.

CIOs who do not yet have ATT&CK coverage mapped against their existing detection stack are missing the baseline observability that makes the five offensive-security shifts above measurable rather than aspirational. The mapping exercise is a one-time investment, executable in 60–90 days, that pays dividends across every subsequent posture decision.
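The mapping exercise itself reduces to a set computation: each detection rule annotated with the ATT&CK technique IDs it covers, folded into a coverage figure against the techniques that matter for your threat model. A sketch with illustrative rule names and technique IDs:

```python
# Sketch of the ATT&CK coverage-mapping exercise. Rule names are
# illustrative; technique IDs are real ATT&CK identifiers used as examples.
DETECTION_RULES = {
    "rule-psexec-lateral": ["T1021.002"],
    "rule-lsass-access": ["T1003.001"],
    "rule-powershell-encoded": ["T1059.001", "T1027"],
}

TARGET_TECHNIQUES = ["T1021.002", "T1003.001", "T1059.001", "T1027", "T1486"]

def coverage(rules: dict, targets: list[str]) -> tuple[float, list[str]]:
    """Return (coverage fraction, list of uncovered technique IDs)."""
    covered = {t for techniques in rules.values() for t in techniques}
    gaps = [t for t in targets if t not in covered]
    return (len(targets) - len(gaps)) / len(targets), gaps

pct, gaps = coverage(DETECTION_RULES, TARGET_TECHNIQUES)
# 80% covered in this toy example; T1486 (data encrypted for impact)
# has no mapped detection and surfaces as the gap to close.
```

The output is what makes the five shifts measurable: a coverage number that can move quarter over quarter, and a named list of gaps to assign.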

Where this stops

This piece argues for offensive-security operating mode in two specific senses: using offensive techniques internally for defensive purpose, and using AI defensively at attacker-grade capability. It does not argue for hack-back, for counter-offensive operations against threat actors, or for any activity that places organizational personnel in the position of attacking systems they do not own.

The reasoning is not squeamishness. It is operational. The US Department of Justice treats unauthorized computer access under the Computer Fraud and Abuse Act regardless of attacker provocation. The UK Computer Misuse Act 1990 is similarly drawn. The EU NIS2 directive and individual member-state implementations create additional liability surfaces. A CIO who authorizes counter-offensive activity is creating personal and corporate criminal exposure that no clockspeed argument can offset, and that no breach disclosure framework can absorb.

The good news is that the clockspeed gap can be closed without crossing that line. The five operational shifts above all live entirely inside the organization’s own perimeter, against the organization’s own assets. The offensive technique is what compresses defender clockspeed. The offensive target remains the defender’s own estate.

What CIOs and CISOs do this quarter

Five actions, executable inside Q2 2026, most within existing budget.

Measure your current dwell-time baseline this month. The Mandiant M-Trends 2025 report and the Verizon Data Breach Investigations Report both publish industry dwell-time figures in formats your SOC can replicate against internal incident data. Producing this number is a one-week analyst exercise. Without it, every subsequent posture conversation runs without a yardstick.
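The analyst exercise is arithmetic over incident records: dwell time is detection date minus compromise date, and the baseline is the median. A sketch with illustrative timestamps:

```python
from datetime import datetime
from statistics import median

# Sketch of the dwell-time baseline exercise: median dwell computed from
# internal incident records. All dates are illustrative.
INCIDENTS = [
    {"compromise": "2026-01-03", "detected": "2026-02-10"},
    {"compromise": "2026-02-01", "detected": "2026-02-15"},
    {"compromise": "2026-03-07", "detected": "2026-03-12"},
]

def dwell_days(incident: dict) -> int:
    """Days between estimated compromise and detection."""
    fmt = "%Y-%m-%d"
    delta = (datetime.strptime(incident["detected"], fmt)
             - datetime.strptime(incident["compromise"], fmt))
    return delta.days

median_dwell = median(dwell_days(i) for i in INCIDENTS)
# This single number is the yardstick every subsequent posture
# conversation runs against.
```

The hard part is not the computation; it is agreeing internally on the estimated compromise timestamp per incident, which is why the exercise takes a week rather than an afternoon.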

Stand up a 90-day BAS pilot against your top 10 critical assets. Pilot scope is the single most-underused tool in security procurement. A 90-day pilot with one BAS vendor against the assets that matter most produces decision-grade evidence about whether the category is worth full deployment. The output is a continuous, daily, evidence-based answer to which controls actually fired against this week’s TTPs. That answer is the input that lets the CAB process distinguish AI-era exception cases from routine change requests.

Run an AI-augmented internal vulnerability discovery exercise against one critical codebase. This is the highest-cost recommendation in the list and the one with the highest clockspeed return. The right scope is a single codebase whose compromise would be material. The right cost is the one-time engagement fee for a vendor or partner program offering this capability. The right output is a short list of vulnerabilities your defender side now knows about before any attacker side does.

Convert one detection-engineering hire from incident-triggered to standing threat-hunting. The cost of this shift is zero new headcount. The structural effect is a security organization with a forward-looking detection function rather than a backward-looking one. The organizational signal is also significant: a CIO who makes this conversion is communicating that defender clockspeed has become a first-class concern.

Deploy 100 deception canaries within 30 days. This is the cheapest, fastest, highest-asymmetry move on the list. Canaries cost a fraction of standard endpoint tooling, deploy in days rather than quarters, and produce the highest signal-to-noise alerts in most SOCs. The single decision is “do it”; the implementation is days of platform-team work.

None of these requires new headcount or a full budget cycle; the two that carry procurement cost, the BAS pilot and the discovery engagement, fit inside a pilot-scale budget line. All five compress defender clockspeed against the AI-era attacker OODA cycle that AM-104 documented.

What we are tracking

Claim AM-105 is logged with a 90-day review on 26 July 2026. The trackable assertion: organizations that have not adopted an offensive-security operating mode (continuous attack-surface validation, AI-augmented internal vulnerability discovery, standing threat-hunting, deception, counter-AI controls) by Q4 2026 will show measurably wider mean-time-to-detect for AI-assisted attackers than peers that have, in industry-survey data published in late 2026 and early 2027.

Three review checks at 90 days. Has the Mandiant M-Trends 2026 mid-year, the Ponemon Cost of a Data Breach 2026, or the Verizon DBIR 2026 introduced a clockspeed-asymmetry or AI-cadence-detection variable into their published methodology? Have analyst frameworks (Gartner CTEM, Forrester Zero Trust, IDC) explicitly cited offensive-security operating mode as a defensible 2026 posture? Have published incidents named an AI-cadence intrusion that defeated a human-cadence detection program?

If none of the three has moved by 26 July 2026, the claim is Partial. Clockspeed is a useful frame but the industry has not yet converged on it. If one or two have moved, the claim Holds as written. If all three have moved, the claim is Strengthened, and the next review will widen scope to the operational evidence: which industries have moved their median dwell time, which have not, and what the divergence pattern looks like.

The point of writing this on a 90-day clock is the same as the point of writing AM-104 on a 60-day clock. The headlines are not the answer. The published evidence in the next survey cycle is.

The claim is on the ledger. It will be reviewed in public, and if it does not hold, the correction will be on the same page.


