AI Risk · Insurance · March 2026

From Cyber Liability to Algorithmic Malpractice: How Insurers Can Underwrite AI Risk in 2026

The insurance industry spent two decades building a playbook for cyber risk. That playbook is now being challenged by AI. The shift is not gradual — it is structural.

The insurance industry has a well-established framework for cyber risk: adversarial intrusion, data breach, ransomware, business interruption. Underwriters know the exposure units, the severity distributions, and the accumulation dynamics. That framework, built painstakingly over twenty years, is now being challenged by a fundamentally different risk class.

Artificial intelligence risk — what we term Algorithmic Malpractice — does not behave like cyber risk. It is not episodic and adversarial. It is continuous and endogenous. Understanding that distinction is the starting point for every insurer navigating this transition.

The incidents that changed the market

McKinsey — indirect prompt injection

An internal AI system was compromised through malicious instructions embedded in documents the AI was asked to summarise. Sensitive data was exfiltrated. The failure was not in the perimeter — it was in the AI decision layer itself.

Amazon — hallucinated security vulnerabilities

AI-generated code introduced subtle backdoor vulnerabilities into AWS internal tools. The failures were logically coherent but architecturally insecure — invisible to standard automated scanners. Driven by output quotas, engineers had begun rubber-stamping AI outputs.

Air Canada — legal liability for AI hallucinations

A tribunal ruling established that organisations are legally responsible for incorrect outputs generated by their AI systems. If your chatbot promises coverage that does not exist, you can be held to that promise: a direct E&O exposure. This precedent has material implications for every carrier deploying AI in customer-facing roles.

"The primary risk is no longer external intrusion. It is failure within the AI decision layer itself."

Cyber risk vs AI risk — a structural comparison

These are not two versions of the same problem. They require different underwriting frameworks, different exposure units, and different accumulation models.

Dimension     | Cyber risk                      | AI / algorithmic risk
Nature        | Episodic and adversarial        | Continuous and endogenous
Origin        | External threat actors          | Internal system failures
Frequency     | Discrete events                 | Scales with usage volume
Detection     | Often rapid — breach alerts     | Can propagate silently
Accumulation  | Infrastructure / cloud outages  | Shared models and providers
Liability     | Data protection, GDPR           | Advice, decisions, outputs
Exposure unit | Revenue, records, devices       | Model queries, decisions made

Rethinking exposure: activity-based underwriting

Traditional insurance metrics — revenue, employees, locations — are poor proxies for AI risk. Exposure is more closely linked to interaction volume: the number of model queries, automated decisions, and customer-facing outputs generated by the system.

This reflects a key mathematical property of AI systems. If the probability of a material failure per interaction is even a fraction of a percent, the probability of at least one failure across millions of interactions approaches certainty. Risk scales with usage, not with the size of the organisation.

Probability of failure formula

Pf = 1 − (1 − Pe)ⁿ

Where Pe is the per-interaction error probability and n the number of interactions. At Pe = 0.0001% (one error in a million interactions) and n = 10,000,000 interactions per year, Pf ≈ 0.99995: at least one material incident per year is all but certain.
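The formula above can be checked with a few lines of Python. The figures mirror the worked example and are illustrative only:

```python
# Probability of at least one material failure across n interactions,
# given a per-interaction error probability p_e.
def failure_probability(p_e: float, n: int) -> float:
    return 1 - (1 - p_e) ** n

# Pe = 0.0001% = 1e-6 per interaction, 10 million interactions per year
p_f = failure_probability(1e-6, 10_000_000)
print(f"{p_f:.5f}")  # 0.99995 — at least one incident is all but certain
```

Because risk compounds per interaction, halving the error rate does far less than halving interaction volume might suggest: the exponent n dominates.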

Three loss categories insurers must price

01. Isolated errors

Individual incorrect outputs or decisions affecting a single transaction or customer interaction. Highest frequency, lowest severity. Similar in profile to standard professional indemnity claims.

02. Embedded failures

Errors incorporated into operational workflows or automated rules, affecting multiple outcomes before detection. The most dangerous category — a single defect can propagate across thousands of transactions silently.

03. Systemic events

Failures that trigger regulatory, legal, or reputational consequences across an organisation or sector. Lowest frequency, catastrophic severity. Potentially correlated across multiple insureds using the same underlying model.
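As a rough sketch, the three categories can be combined into an expected annual loss. Every frequency (claims per year) and severity (loss per claim) below is an invented placeholder for illustration, not market data:

```python
# Illustrative expected-loss sketch for the three loss categories.
# All frequencies and severities are assumed placeholder values.
loss_categories = {
    "isolated_errors":   {"frequency": 120.0, "severity": 5_000},       # high freq, low severity
    "embedded_failures": {"frequency": 2.0,   "severity": 750_000},     # propagates before detection
    "systemic_events":   {"frequency": 0.02,  "severity": 50_000_000},  # rare, catastrophic
}

expected_annual_loss = sum(
    c["frequency"] * c["severity"] for c in loss_categories.values()
)
print(f"Expected annual loss: {expected_annual_loss:,.0f}")  # roughly 3.1m under these assumptions
```

Even with these made-up figures, the point holds: the rare categories contribute as much expected loss as the frequent one, which is why all three need explicit pricing.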

Autonomy level drives risk loading

One of the clearest emerging insights in AI underwriting is that risk varies significantly with the level of autonomy granted to the AI system. The greater the system's ability to act independently, the greater the potential for errors to translate directly into financial loss.

Autonomy level                    | Risk loading | Capability
Read-only / advisory              | Low          | Provides information or recommendations only — human makes final decision
Write access / code generation    | Medium–High  | Can modify files, generate and deploy code, or alter system configurations
Execution / transaction authority | Very High    | Can execute trades, bind contracts, make financial commitments autonomously
Full agentic autonomy             | Extreme      | Multi-step autonomous action with minimal human oversight or intervention
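A rating engine might express this as a simple loading table applied to a base premium. The multipliers below are purely hypothetical assumptions, not actual rating factors:

```python
# Hypothetical autonomy-based risk loadings applied to a base premium.
# The multipliers are illustrative assumptions, not rating-plan figures.
AUTONOMY_LOADINGS = {
    "read_only_advisory":    1.0,  # human makes the final decision
    "write_access_codegen":  1.5,  # can modify files or deploy code
    "execution_authority":   2.5,  # can bind contracts or execute trades
    "full_agentic_autonomy": 4.0,  # multi-step action, minimal oversight
}

def loaded_premium(base_premium: float, autonomy_level: str) -> float:
    """Scale the base premium by the loading for the given autonomy level."""
    return base_premium * AUTONOMY_LOADINGS[autonomy_level]

print(loaded_premium(10_000, "execution_authority"))  # 25000.0
```

The design point is that autonomy is a discrete, auditable rating variable: it can be declared at submission and verified against system configuration, unlike softer measures of "AI maturity".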

Accumulation: the balance sheet risk nobody is pricing yet

The most significant challenge for insurers is accumulation risk. A relatively small number of AI providers — OpenAI, Google, Anthropic, Microsoft — underpin the vast majority of enterprise AI deployments. This creates correlated exposure that does not exist in cyber in the same form.

A model update, a discovered vulnerability, or a systemic error in decision-making logic at one of these providers could simultaneously affect thousands of insureds. Unlike a cloud outage, this form of accumulation may be invisible until losses begin to crystallise.

Insurers need to track accumulation across three dimensions:

Underlying model or provider

Which foundation model powers the system? OpenAI, Google, Anthropic, open-source? Correlated exposure across insureds using the same provider.

Application architecture

Is the AI embedded in core processes or peripheral tasks? RAG system or fine-tuned model? Architecture determines propagation risk.

Use-case criticality

Is the AI making financial, legal, or medical decisions? Or supporting lower-stakes workflows? Criticality determines severity potential.
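Tracking the first dimension can start as simply as aggregating policy limits by underlying provider across the book. The portfolio rows below are invented for illustration:

```python
from collections import defaultdict

# Sketch: aggregate exposed limits by foundation-model provider to surface
# correlated accumulation. Portfolio rows are invented for illustration.
portfolio = [
    {"insured": "Acme Bank",    "provider": "OpenAI",    "limit": 5_000_000},
    {"insured": "Beta Health",  "provider": "Anthropic", "limit": 2_000_000},
    {"insured": "Gamma Retail", "provider": "OpenAI",    "limit": 3_000_000},
]

accumulation = defaultdict(int)
for policy in portfolio:
    accumulation[policy["provider"]] += policy["limit"]

for provider, exposure in sorted(accumulation.items()):
    print(provider, exposure)
```

The same grouping can then be repeated per architecture and per use-case criticality, giving a three-way accumulation view rather than a single insured-name view.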

Underwriting through product design

In AI risk, underwriting cannot rely solely on risk selection and pricing. It must be reinforced through policy design and enforceable controls. Four elements are becoming non-negotiable.

Requirement 01: AI Bill of Materials (AIBOM)

Full visibility over every model and dependency used by the insured — including third-party APIs, fine-tuned models, and retrieval layers. Without this, underwriters are pricing a black box.
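An AIBOM submission could be captured in a minimal structured record per AI component. The schema and field names below are assumptions for illustration, not an industry standard:

```python
from dataclasses import dataclass, field

# Minimal sketch of one AI Bill of Materials entry an underwriter might
# require at submission. Field names are assumptions, not a standard schema.
@dataclass
class AIBOMEntry:
    component: str                  # the insured's system, e.g. a triage chatbot
    base_model: str                 # underlying foundation model or checkpoint
    provider: str                   # foundation-model provider
    fine_tuned: bool = False        # fine-tuned variants carry distinct risk
    third_party_apis: list[str] = field(default_factory=list)
    retrieval_sources: list[str] = field(default_factory=list)  # RAG layers

entry = AIBOMEntry(
    component="claims-triage-chatbot",
    base_model="gpt-4o",
    provider="OpenAI",
    third_party_apis=["ocr-service"],
)
print(entry.provider)  # OpenAI
```

Structured records like this are what make the portfolio-level accumulation tracking described above possible: without per-component provider data, correlated exposure cannot be summed.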

Requirement 02: Human-in-the-loop for critical decisions

Mandatory human oversight for financial, legal, or customer-facing commitments. The Air Canada ruling makes clear that this is not optional — it is the line between insurable and uninsurable.

Requirement 03: Input validation and prompt controls

AI systems must treat all external inputs as potentially untrusted data — scrubbed and validated before processing. The McKinsey incident demonstrates exactly what happens when they do not.

Requirement 04: Ongoing monitoring and update obligations

Static policy controls are insufficient. AI systems evolve. Policy structures must incorporate obligations to maintain, monitor, and update safeguards over time — not just at inception.

Three conclusions for insurers

01. Redefine exposure

AI risk scales with activity, not organisational size. Move from revenue-based exposure units to interaction volume, decision frequency, and autonomy level. The underwriting question is no longer "How big is this company?" but "How many AI decisions does it make per day?"

02. System design drives loss

Architecture, controls, and autonomy level are central to the risk profile. Without visibility into these elements — through pre-bind technical assessments and AIBOM requirements — pricing will be structurally imprecise. Carriers that invest in this capability will select and price better.

03. Correlation is the critical unknown

Shared models and infrastructure introduce a new form of accumulation risk that is not yet fully understood but is likely to be material. Carriers need to track underlying model exposure across their portfolios — not just insured names — before this risk crystallises at scale.

Summary

The transition from cyber liability to algorithmic malpractice is not a gradual evolution. It is a structural shift in how operational risk manifests in a digital economy. AI risk cannot be treated as an extension of cyber. It requires its own underwriting discipline, grounded in technical understanding, structured data, and explicit control of accumulation. Those who build that capability early will be better positioned to navigate what is likely to become a defining liability class of the next decade.

Eudaimon Consulting, March 2026. For general information purposes only. Eudaimon Consulting makes no representations or warranties of any kind about the completeness or accuracy of this article. Any reliance placed on this information is strictly at your own risk.
