Cyber Underwriting

Same Attacks, Bigger Consequences: How AI Is Reshaping Cyber Risk for Insurers

Underwriting implications for Group CEO, CUO, Cyber Product Leaders and CISOs

April 2026 · 8 min read · Data-led · Executive briefing

00 — Executive Summary

Four things that demand immediate attention

$4.88M · Avg. global breach cost — up 10% YoY (IBM 2024)
+34% · Rise in vulnerability exploitation YoY (Verizon 2025)
30% · Of breaches involve a third party — doubled in one year
+25% · Ransomware attacks rose in 2024 (Munich Re)

Cyber attacks have not changed in nature. Stolen credentials, unpatched vulnerabilities, and human error have dominated loss data for a decade and continue to do so. What has changed is the economics of attack: AI and automation have reduced the cost and skill required to launch attacks at scale, increased their frequency, and amplified the systemic reach of individual events. For insurers, frequency assumptions built on historical baselines are deteriorating. Severity is harder to model as large losses concentrate across shared infrastructure. And aggregation risk is growing in ways that traditional catastrophe frameworks were not designed to handle.

The market needs dynamic pricing inputs, real-time telemetry, and tighter accumulation controls — the argument for each is set out below.

01 — Introduction

The question the market has been circling

Does the arrival of frontier AI models with genuine autonomous offensive cyber capability represent a step change in the risk landscape — and if so, what does that require of underwriting, pricing and accumulation management?

That question has been building for several years, but it arrived on the boardroom agenda with unusual force in April 2026. The release of a frontier AI model with independently verified autonomous attack capability — and the emergency discussions it triggered among central bankers, treasury officials and bank CEOs on both sides of the Atlantic — moved a debate that many insurers had treated as medium-term planning into the present tense. Detail on those specific events is covered in the case study callout within this note. What they illustrate, rather than constitute, is the subject of this analysis.

The more durable question is structural. Frontier AI capabilities will keep advancing and the specific systems in any given news cycle will be superseded. But the underlying dynamic — the industrialisation of cyber attack economics through AI and automation — is already producing measurable effects on loss frequency, severity and correlation. Those effects are interacting with underwriting assumptions, pricing models and accumulation controls calibrated for a materially different threat environment. The evidence suggests that interaction is already producing losses that exceed the models.

This note is grounded in the most recent available data from Verizon, IBM, Munich Re and Lloyd's, and draws on regulatory assessments from the UK, EU and US.

02 — Reality Check

No new playbook: attacks remain stubbornly familiar

The fundamental mechanics of a cyber breach have not changed meaningfully in over a decade. Attackers still rely on three primary entry points: stolen credentials, unpatched vulnerabilities, and mistakes made by people inside organisations. The 2025 Verizon DBIR — covering over 20,000 incidents — found that credential abuse initiated 22% of breaches, vulnerability exploitation approximately 20%, and the human element remained a factor in 60% of all breaches. Over a ten-year horizon, stolen credentials have appeared in nearly a third of all data breaches.

Primary initial access vectors — share of breaches (Verizon DBIR 2025)

Misconfig / human error: 30%
Credential abuse: 22%
Vulnerability exploitation: 20%
Phishing: 16%
Other vectors: 12%

Source: Verizon Data Breach Investigations Report 2025 (20,000+ incidents analysed)

Only 15% of perimeter-device vulnerabilities were fully remediated within the reporting period; nearly half remained unresolved. Exploitation of vulnerabilities nearly tripled year-on-year in 2024, with a further 34% rise recorded in the 2025 report, driven by zero-day attacks on unpatched systems. The risk being priced today is fundamentally the same risk that existed in 2016. What has changed is not the attack type but its velocity, scale, and the degree to which automation is removing the skill requirement for execution.

Underwriting implication: If individual policyholder risk profiles have not changed materially but aggregate losses are rising, the explanation lies in frequency and correlation — not in new attack categories requiring new coverage language.

03 — The Acceleration Effect

AI changes the economics, not the anatomy

The significant shift underway is economic. AI and automation have materially reduced the cost and skill required to launch cyber attacks at scale. Ransomware-as-a-Service platforms offer subscription-based toolkits — including AI-enabled hacking tools — that have lowered the barrier to entry for less sophisticated actors. Munich Re identifies this as a primary driver of attack frequency, speed and precision across its 2025 and 2026 cyber trend analyses.

The scale of this industrialisation is visible in the data. The number of victims publicly named on ransomware leak sites grew from 1,412 in 2020 to over 6,000 in 2025 — more than a fourfold increase in five years. In 2024 alone, ransomware attacks rose approximately 25% year-on-year, and the volume of data exfiltrated in ransomware incidents nearly doubled. More than 75 active groups now post across public leak infrastructure — a market with the characteristics of a mature criminal industry.

Ransomware victims publicly named on leak sites, 2020–2025

2020: 1.4k
2021: 2.1k
2022: 2.9k
2023: 3.8k
2024: 5.0k
2025: 6.0k

Sources: Munich Re Cyber Risks & Trends 2025; Beinsure US Cyber Insurance Premium Report 2025

Shorter exploit cycles compound this further. The time between vulnerability disclosure and first active exploitation in the wild has collapsed — in many cases to under 24 hours. Attackers are consistently moving faster than defenders can patch, and the automation of vulnerability discovery itself is an emerging pressure that current underwriting assumptions do not yet reflect.

Underwriting implication: Frequency assumptions based on pre-2023 loss data are structurally too low. Models need to account for exponential growth in attack volume, not linear trend extensions from earlier portfolios.
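The gap between linear trend extension and observed growth can be made concrete with the leak-site figures charted above. The sketch below is illustrative only: it uses the rounded victim counts from this note, and the "pre-2023 baseline" fit is a simplifying assumption, not an actuarial method.

```python
# Illustrative sketch: why linear trend extensions from pre-2023 data
# understate attack frequency. Victim counts are the rounded leak-site
# figures cited in this note (Munich Re / Beinsure).
years = [2020, 2021, 2022, 2023, 2024, 2025]
victims = [1400, 2100, 2900, 3800, 5000, 6000]

# Compound annual growth rate implied by the full observed series
cagr = (victims[-1] / victims[0]) ** (1 / (len(years) - 1)) - 1  # ~34% p.a.

# A linear extension fitted to 2020-2022 only (a "pre-2023 baseline"):
# ~750 additional victims per year, projected forward to 2025
slope = (victims[2] - victims[0]) / 2
linear_2025 = victims[0] + slope * 5

print(f"Implied CAGR 2020-2025:      {cagr:.1%}")
print(f"Linear projection for 2025:  {linear_2025:,.0f}")
print(f"Observed 2025:               {victims[-1]:,}")
```

Even over this short horizon, the straight-line extension lands roughly 15% below the observed 2025 count; the shortfall compounds in every subsequent projection year.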

04 — Systemic Risk

From operational risk to correlated catastrophe

The most consequential structural shift is not frequency — it is correlation. Individual breach events are becoming less important than systemic events that propagate across shared infrastructure. Third-party involvement in breaches doubled from 15% to 30% in a single year (Verizon DBIR 2025). The 2024 CrowdStrike software update failure — a faulty update, not a malicious attack — produced simultaneous outages across airlines, banks, hospitals and stock exchanges globally, illustrating how a single shared dependency can function as a de facto catastrophe trigger.

Then — Pre-2023 underwriting model
Risk unit: Individual policyholders
Correlation: Low — mostly independent events
Data inputs: Annual questionnaire
Pricing: Static, renewal-driven
CAT scenario: Limited accumulation modelling

Now — What the data demands
Risk unit: Portfolio + shared infrastructure
Correlation: High — cloud, SaaS, supply chain
Data inputs: Continuous telemetry + signals
Pricing: Dynamic, exposure-adjusted
CAT scenario: Cloud/supply chain event modelling

A joint CyberCube/Munich Re study (July 2025) found that a severe malware event could infect up to a quarter of all systems worldwide. The expected global cost of software supply chain attacks is estimated to grow from $46 billion in 2023 to $60 billion in 2025 (Juniper Research). The analogy to catastrophe risk holds precisely: just as a single hurricane produces simultaneous property claims across one geography, a single compromised software dependency can produce simultaneous business interruption claims across thousands of policyholders with no other common characteristic.

Underwriting implication: Cyber risk is developing the aggregation characteristics of nat-cat. Accumulation exposure must be tracked at the vendor, cloud provider and software dependency level — not solely at the named insured level.
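Vendor-level accumulation is, at its core, a roll-up of policy limits by shared dependency rather than by named insured. The sketch below illustrates the idea; all insured names, vendor names and limits are hypothetical.

```python
from collections import defaultdict

# Hypothetical portfolio: each policy carries a limit and a list of the
# insured's critical third-party dependencies. All names and figures are
# illustrative, not real exposure data.
portfolio = [
    {"insured": "RetailCo",   "limit_m": 10, "deps": ["CloudA", "EDRVendorX"]},
    {"insured": "BankCo",     "limit_m": 25, "deps": ["CloudA", "SaaSPayroll"]},
    {"insured": "HospitalCo", "limit_m": 15, "deps": ["CloudB", "EDRVendorX"]},
    {"insured": "AirlineCo",  "limit_m": 20, "deps": ["CloudA", "EDRVendorX"]},
]

# Roll aggregate limit up to each shared dependency -- the accumulation view
# that a ledger organised purely by named insured never produces.
accumulation = defaultdict(float)
for policy in portfolio:
    for dep in policy["deps"]:
        accumulation[dep] += policy["limit_m"]

for dep, limit in sorted(accumulation.items(), key=lambda kv: -kv[1]):
    print(f"{dep:<12} ${limit:.0f}M aggregate limit at risk")
```

Here four otherwise unrelated insureds place $55M of aggregate limit behind a single cloud provider, exactly the concentration that an industry-or-geography view would miss.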

05 — Where Models Break

Four structural failures in current underwriting practice

The combination of rising frequency, amplified severity and growing correlation creates specific structural failures in how cyber risk is priced and managed. These are evidenced in claims experience and in the supervisory priorities of the FCA, EBA and SEC — all of which have flagged AI-enabled cyber as a priority issue for 2026.

Quadrants by severity × systemic correlation:

Low severity / low correlation · Frequency underestimation: attack volumes rising 25–30% YoY; historical frequency tables understate exposure across most risk classes.
Low severity / high correlation · Aggregation blind spots: small claims correlate through shared SaaS and cloud providers; portfolio accumulation not captured by individual risk scoring.
High severity / low correlation · Severity mispricing: avg. breach cost $4.88M globally (IBM 2024); healthcare $9.77M; credential-based breaches average 292 days to identify and contain.
High severity / high correlation · CAT zone, attribution risk: nation-states blend with criminal groups; war exclusions create coverage uncertainty where legal attribution is contested.

Framework: Eudaimon Consulting, April 2026

Attribution risk is the most legally complex failure. Nation-states are deploying criminal-style ransomware tooling, and criminal actors are being used as proxies, blurring the line between geopolitical acts and commercial cybercrime. Standard war exclusion clauses were not drafted for this hybrid environment. The legal and claims exposure for insurers writing affected policyholders remains genuinely uncertain.

Case study — AI capability trajectory: what frontier models reveal about the direction of travel

One way to understand where AI-enabled attack automation is heading is to look at where frontier research capability currently sits — and how quickly that frontier has moved. In April 2026, independent evaluation of Anthropic's most advanced model provided concrete, verified data points on what leading-edge AI can do in a controlled offensive cybersecurity context. The findings are instructive not as a threat assessment of any specific model, but as a marker of a trajectory.

181× · More working exploits vs prior generation on identical vulnerability tests — same class of vulnerability, automated at a qualitatively different scale
73% · Success rate on expert-level capture-the-flag tasks where no prior model had scored above zero — a step change, not an incremental gain
32 steps · Length of simulated corporate network attack completed end-to-end autonomously for the first time, covering reconnaissance, lateral movement and exfiltration
12–24 mo · Historical lag between frontier research capability and availability in commercial tooling accessible to well-resourced threat actors

The practical implication for underwriters is in that final figure. Capabilities that today exist only in restricted research environments have historically become available in commercial attack tooling within one to two years. The underwriting models and pricing assumptions being written today will govern portfolios that sit squarely within that window. Frequency underestimation and severity mispricing are the failure modes most directly exposed to this trajectory. Attribution risk is compounded by the same dynamic: as AI lowers the operational cost of attack, the number of actors capable of conducting sophisticated, multi-stage campaigns grows — making the criminal-versus-state distinction harder to sustain in claims contexts.

Sources: UK AI Security Institute, evaluation of frontier AI model cyber capabilities (April 2026); Anthropic system card disclosure (April 2026)

06 — Strategic Implications

Four decisions for insurance leadership

Pricing models

Move from static annual pricing to models that incorporate real-time external signals — attack surface scans, threat intelligence feeds, vulnerability disclosure cadence. Static renewal pricing cannot keep pace with an environment where exploit cycles have collapsed to hours and AI is beginning to automate discovery of previously unknown vulnerabilities.
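One way to operationalise signal-driven pricing is a bounded multiplier applied to the base premium between renewals. The sketch below is a hypothetical illustration: the signal names, weights and cap are invented for the example and would need actuarial calibration in practice.

```python
# Hypothetical sketch of a dynamic pricing adjustment: a base premium scaled
# by external telemetry signals between renewals. Signal names, weights and
# the cap are illustrative assumptions, not a calibrated rating model.
def telemetry_multiplier(signals: dict) -> float:
    """Map external attack-surface signals to a bounded premium multiplier."""
    weights = {
        "open_critical_cves": 0.05,        # per unpatched critical CVE on the perimeter
        "exposed_remote_access": 0.08,     # per internet-exposed remote-access service
        "days_since_patch_cycle": 0.002,   # patching cadence drift
    }
    loading = sum(weights[k] * signals.get(k, 0) for k in weights)
    # Cap the adjustment so one noisy feed cannot swing pricing unboundedly
    return min(1.0 + loading, 1.5)

premium = 100_000 * telemetry_multiplier(
    {"open_critical_cves": 3, "exposed_remote_access": 1, "days_since_patch_cycle": 45}
)
print(f"Adjusted premium: ${premium:,.0f}")
```

The design point is less the weights than the plumbing: the rating input changes whenever the external scan changes, rather than once per policy year.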

Data requirements

Annual questionnaires are inadequate for a threat environment where attack automation is advancing rapidly. Continuous telemetry, independent security ratings, and external attack surface monitoring should be standard inputs for mid-market and above. The SEC's 2026 examination priorities have elevated cybersecurity and AI above crypto as the dominant supervisory risk topic.

Portfolio management

Accumulation controls must track concentration by cloud provider, software vendor, managed service provider and critical infrastructure sector — not only by industry vertical or geography. The CrowdStrike 2024 event demonstrated that a single vendor update can simultaneously affect insurers, airlines, banks and hospitals. That is a cat event without a named peril.

Product design

Policy language requires review across four pressure points: war and state-actor exclusion clarity; aggregate sub-limits for systemic events; supply chain coverage triggers; and silent cyber exposure in non-cyber lines. DORA has been in force since January 2025. The EU Cyber Resilience Act applies from 2027.

07 — Conclusion

The new cyber equation

Cyber risk has not fundamentally changed in nature. The attacks dominating loss data today use the same methods that dominated it a decade ago: stolen credentials, unpatched systems, human error. That is, in one sense, reassuring — sound cyber hygiene remains the most effective risk management tool available to policyholders, and underwriters can still assess risk quality through controls-based frameworks.

What has changed, materially and with increasing speed, is scale, economics and correlation. AI has industrialised attack operations, removed skill barriers, and multiplied the number of active actors. Shared digital infrastructure means that single failure points now produce correlated losses across thousands of policyholders simultaneously. And the capability trajectory of frontier AI points to continued compression of the window in which current assumptions remain valid.

The question posed at the outset — whether frontier AI capability represents a step change in the risk landscape — has a clear answer in the data: yes, in economics and correlation, if not yet in the fundamental nature of the attacks themselves. What that requires of underwriting, pricing and accumulation management is equally clear. The question is pace.

Primary sources: Verizon Data Breach Investigations Report 2024 & 2025; IBM Cost of a Data Breach Report 2024 (Ponemon Institute); Munich Re Cyber Insurance Risks & Trends 2025 & 2026; CyberCube/Munich Re Systemic Cyber Risk Study, July 2025; World Economic Forum Global Cybersecurity Outlook 2024; UK AI Security Institute, frontier AI capability evaluation (April 2026); Anthropic system card disclosure (April 2026); Beinsure US Cyber Insurance Premium Report 2025; Juniper Research supply chain cost projections; EU DORA (in force January 2025); SEC Examination Priorities 2026.

This note is prepared for strategic discussion purposes only. It does not constitute legal, actuarial or regulatory advice. © 2026 Eudaimon Consulting.
