Hesper AI
Blog · Frameworks
May 11, 2026 · 10 min read · Nitish Badu, COO

Fraud prevention vs. detection vs. investigation: the three-layer model

Prevention, detection, and investigation are three distinct anti-fraud layers with different inputs, outputs, and owners. Here is the matrix carriers should buy against.

$308B: annual US insurance fraud loss (Coalition Against Insurance Fraud)
60-85%: Layer 2 false positive rate (rules-based detection)
25%: flagged claims actually investigated (Layer 3 manual capacity ceiling)
1.4%: SIU staffing growth, 2021-2022 (Insurance Information Institute)

Carriers buy "fraud platforms" without naming which layer the platform plays in, which is why two adjacent purchases often overlap on detection and leave investigation untouched. Prevention, detection, and investigation are three distinct anti-fraud layers - different timing, different inputs, different outputs, different owners. Most procurement cycles treat them as synonyms.

The result is predictable. Carriers stack two or three vendors at Layers 1 and 2, then assume Layer 3 is a headcount problem for the SIU. According to the Coalition Against Insurance Fraud, US insurance fraud costs $308 billion a year and roughly 10% of property-casualty losses involve fraud. The flags are being generated. The investigations are not getting done.

This post defines each layer, maps which vendors play in each, and walks through the handoffs where the model breaks. For the wider fundamentals, see our insurance fraud detection pillar; this piece sits underneath it as the framework view.

Why the three layers get conflated

Vendor marketing uses "fraud platform," "fraud solution," and "end-to-end fraud" interchangeably across underwriting, claims operations, and special investigations. A buyer reading three product pages cannot tell whether the vendor stops at scoring a claim or actually resolves it. The shorthand collapses three layers into one budget line.

The mechanical difference is timing. Prevention fires at the policy event (quote, bind, renewal). Detection fires after a claim is filed and produces a score or a referral decision. Investigation fires after a claim has been referred to SIU and produces a written decision with evidence behind it. Each layer answers a different question.

The financial difference shows up at the handoffs. Layer 2 detection produces flags at a 60-85% false positive rate under legacy rules engines, and manual Layer 3 SIU has the capacity to investigate roughly 25% of what Layer 2 refers. The remaining 75% gets paid, denied without evidence, or quietly closed. That gap is the bottleneck most carriers misdiagnose as a need for "more detection."
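The funnel those two numbers imply can be sanity-checked in a few lines. The monthly volume below is illustrative; the rates are the figures cited in this post:

```python
# Hypothetical Layer 2 -> Layer 3 funnel. The volume is illustrative;
# the rates are the ones stated in this post.

flags_per_month = 1_000           # Layer 2 referrals (illustrative volume)
false_positive_rate = 0.70        # post cites 60-85% for legacy rules engines
siu_capacity_share = 0.25         # share of referrals SIU can actually work

investigated = flags_per_month * siu_capacity_share        # 250 get worked
uninvestigated = flags_per_month - investigated            # 750 do not
likely_true_flags = flags_per_month * (1 - false_positive_rate)  # ~300 real

print(f"{investigated:.0f} investigated, {uninvestigated:.0f} resolved without evidence")
```

At a 70% false positive rate, roughly 300 of the 1,000 flags are real, but capacity only reaches 250 claims in total, and not necessarily the right 250.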

The vocabulary trap

A vendor that calls itself an "AI fraud platform" is almost always a Layer 2 tool. The output is a flag or a recommended action, not a closed investigation. Confirm the output type, not the platform name.

Layer 1 - Prevention (before the claim is filed)

Prevention reshapes the policy itself. The carrier uses applicant data plus third-party data to decline a risk, reprice a premium, or add an exclusion before any claim is possible. The owner is underwriting and actuarial, not claims.

Inputs are applicant disclosures, motor vehicle records, prior-loss history, identity and address verification, and risk scoring from providers like LexisNexis Risk Solutions and Verisk's A-PLUS database. Outputs are bind decisions, repriced premiums, and explicit exclusions in the policy form. Core systems like Guidewire and Duck Creek wire these checks into the underwriting workflow; for a deeper view of those platforms see our claims management systems comparison.

Layer 1 failures are silent. The clearest example is workers' compensation premium misclassification - employers under-reporting payroll or misclassifying job codes to pay a lower premium. The Coalition Against Insurance Fraud estimates this misclassification leaks $11.7 billion in workers' compensation premium annually. That is a Layer 1 failure, not a Layer 2 failure. No amount of FNOL scoring catches a policy that was mispriced two years before the claim was filed.

Vendors that play here: LexisNexis Risk Solutions, Verisk underwriting (A-PLUS, LightSpeed), Duck Creek and Guidewire underwriting modules, and FRISS Underwriting. None of them investigate claims. Their output is a risk decision at the policy boundary.

Layer 2 - Detection (after FNOL, before SIU)

Detection scores or flags suspicious claims after they enter the claims system. The output is a risk score and a referral decision - send to SIU or release for payment. The output is not a conclusion. Detection answers "is this worth a closer look," not "did fraud occur."

When it fires: at first notice of loss, then again at major claim events (medical bills posted, supplements added, surveillance triggers). Inputs are the claim file, cross-carrier database matches (Verisk's ClaimSearch is the largest), document submissions, and network or graph signals across providers, attorneys, and claimants. Owners are claims operations and the fraud analytics team that tunes the rules.

The mechanics are rules plus ML scoring plus network analysis. The 60-85% false positive rate lives here. We covered the mechanics in legacy rules vs. autonomous AI fraud detection; the short version is that rules engines fire on patterns, ML scorers fire on similarities, and neither one collects evidence.
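As a rough sketch of what a Layer 2 engine does (and stops short of), the following combines rule hits and a model score into a referral decision. Every rule name and threshold here is illustrative, not taken from any vendor's product:

```python
# Minimal sketch of a Layer 2 referral decision: rule hits plus a model
# score, no evidence collection. Rule names and thresholds are illustrative.

RED_FLAG_RULES = {
    "claim_within_30_days_of_bind": lambda c: c["days_since_bind"] < 30,
    "prior_claims_over_3": lambda c: c["prior_claims"] > 3,
}

def referral_decision(claim: dict, ml_score: float, threshold: float = 0.7) -> dict:
    flags = [name for name, rule in RED_FLAG_RULES.items() if rule(claim)]
    refer = bool(flags) or ml_score >= threshold
    # The output is a flag and a routing decision -- not a conclusion.
    return {"red_flags": flags, "score": ml_score, "refer_to_siu": refer}

claim = {"days_since_bind": 12, "prior_claims": 1}
print(referral_decision(claim, ml_score=0.41))
# refers to SIU: the 30-day rule fired even though the model score is low
```

Note what the function returns: a routing decision with a red flag attached, which is exactly the "credible referral" input the regulation below describes - and nothing resembling evidence.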

The regulatory trigger between Layer 2 and Layer 3 is explicit. California 10 CCR 2698.36 states that the SIU "shall investigate each credible referral of suspected insurance fraud" and defines a credible referral as one that includes a red flag. The red flag is the detection output. The investigation is the next layer.

The SIU shall investigate each credible referral of suspected insurance fraud... A credible referral... is one that includes a red flag or red flags.

- California 10 CCR 2698.36

Customer language confirms the handoff. AXA Switzerland, describing its Shift Technology deployment, says the platform helps them "consistently identify suspicious activities at FNOL, and assign the claim to the appropriate expert for investigation." Identify and assign - both Layer 2. The expert investigation is still Layer 3.

Vendors that play here: FRISS, Shift Technology, Verisk ClaimSearch and ClaimDirector, BAE Systems NetReveal, SAS Fraud Framework. Each one produces a flag, a score, or a routing recommendation. None of them produce an audit-ready investigation report.

Layer 3 - Investigation (resolve the flagged claim)

Investigation produces two things: a decision (pay, deny, negotiate, refer to DOI or law enforcement) and an audit-ready record that supports the decision. The work is evidence collection, OSINT, document forensics, provider and claimant background, timeline reconstruction, and (where warranted) interviews. The owner is SIU - or, in many states, an external Insurance Fraud Bureau referral.

The capacity reality at most carriers is brutal. A single investigator holds 200+ cases at a time and closes roughly 10 investigations per month. That means about 25% of Layer 2 referrals actually get a real investigation; 75% are paid, denied with thin support, or quietly closed. The cost per manual investigation runs about $2,500 fully loaded, and each case takes 14+ days. The Insurance Information Institute notes SIU staffing grew only 1.4% from 2021 to 2022 - the bottleneck is structural, not budget cycle noise.

Autonomous Layer 3 changes the unit economics. Hesper AI runs 15+ investigation phases in parallel and closes a case in 2-4 hours at about $150 per case, taking a single investigator from ~10 cases per month to 800+ cases per investigator per month. Coverage on flagged claims moves from 25% to 100%. The investigator's role shifts from execution to decision-making on the cases the agent surfaces.

Layer 3 investigation: manual SIU vs. autonomous agent

Coverage of flagged claims, manual SIU: 25%
Coverage of flagged claims, AI agent: 100%
Throughput per investigator per month, manual: ~10 cases
Throughput per investigator per month, AI: 800+ cases
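The comparison reduces to simple unit economics. A quick check using the post's figures; the monthly referral load is derived from the 25% coverage number (it is not stated directly):

```python
# Layer 3 unit economics from the figures in this section. The implied
# monthly referral load (closures / coverage) is derived, not stated.

closures_manual = 10          # cases closed per investigator per month
coverage_manual = 0.25        # share of referrals actually investigated
cost_manual = 2_500           # fully loaded cost per manual investigation
cost_agent = 150              # stated cost per autonomous investigation

referral_load = closures_manual / coverage_manual     # 40 referrals/month
spend_manual = closures_manual * cost_manual          # $25,000 covers 10 cases
spend_agent_full = referral_load * cost_agent         # $6,000 covers all 40

print(f"manual: ${spend_manual:,.0f} to work {closures_manual} of {referral_load:.0f} referrals")
print(f"agent:  ${spend_agent_full:,.0f} to work all {referral_load:.0f}")
```

On these numbers, investigating the entire referral load autonomously costs less than investigating a quarter of it manually.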

Vendors that play here: manual SIU teams (the incumbent at every carrier) and Hesper AI as the autonomous Layer 3 engine. Before signing at this layer, walk through our AI fraud investigation vendor checklist - the diligence questions for Layer 3 are different from the ones for Layer 2.

How the three layers integrate (and where carriers fail)

The failure mode is not inside a layer. Most Layer 1 and Layer 2 vendors do what they sell. The failure is at the handoffs - the connective tissue between layers.

Layer 1 to Layer 2: missed prevention compounds detection load

When underwriting misses a misclassification or accepts a risk it should have repriced, every subsequent claim on that policy enters Layer 2 with a higher baseline risk. The detection engine sees more anomalies, generates more flags, and the queue grows. Carriers respond by tuning thresholds higher, which raises false negatives. The leak compounds.

Layer 2 to Layer 3: regulators mandate the handoff; capacity caps execution

NAIC Insurance Fraud Prevention Model Act (Model 680) requires carriers to maintain an anti-fraud plan covering both detection and investigation, and to report suspected fraud. State regulations like California 10 CCR 2698.36 name the trigger directly: red flag in, SIU investigation out. The policy is clear. The capacity is not. With 200+ cases per investigator and ~10 monthly closures, the 25% coverage ratio is a function of arithmetic, not effort.

Layer 3 back to Layer 1: the broken loop

Investigation findings should feed underwriting. A confirmed staged-loss ring, a corrupt provider, a repeat claimant pattern - these belong in the next renewal cycle's risk model. At most carriers this loop is informal or nonexistent. SIU writes a report, the report goes to legal or to the IFB, and underwriting never sees the structured signal. Our deep-dive on Hesper vs. Verisk walks through why an autonomous Layer 3 makes the back-loop feasible: structured output, not narrative PDFs.
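As an illustration of "structured output, not narrative PDFs," a Layer 3 finding could be emitted as a typed record that an underwriting risk model can consume at renewal. Every field name here is hypothetical, not an actual Hesper schema:

```python
# Hypothetical structured investigation finding -- the kind of record that
# could feed an underwriting risk model at renewal. Field names are
# illustrative, not taken from any vendor's actual schema.
from dataclasses import dataclass, field

@dataclass
class InvestigationFinding:
    claim_id: str
    decision: str                 # "pay" | "deny" | "negotiate" | "refer"
    confirmed_patterns: list[str] = field(default_factory=list)
    linked_entities: list[str] = field(default_factory=list)  # providers, attorneys
    feed_to_underwriting: bool = False

finding = InvestigationFinding(
    claim_id="CLM-001",
    decision="deny",
    confirmed_patterns=["staged_loss_ring"],
    linked_entities=["provider:NPI-1234"],
    feed_to_underwriting=True,
)
```

A narrative PDF buries the same facts in prose; a record like this can be queried, joined against the policy book, and scored at the next renewal.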

What carriers run at each layer in 2026 (vendor matrix)

Three rows. Read across to see which vendor families play in each layer and where Hesper sits.

Layer | When | Input | Output | Owner | Sample vendors | Hesper coverage
1. Prevention | Quote, bind, renewal | Applicant + third-party data | Declined risk, repriced premium, added exclusion | Underwriting + actuarial | LexisNexis, Verisk A-PLUS, FRISS Underwriting, Duck Creek | Not Hesper's layer
2. Detection | FNOL and claim lifecycle events | Claim data, cross-carrier matches, documents, networks | Risk score, SIU referral decision | Claims ops + fraud analytics | FRISS, Shift, Verisk ClaimSearch, BAE NetReveal, SAS | Complementary upstream signal
3. Investigation | After SIU referral | Full claim file + OSINT + documents + interviews | Audit-ready decision and report | SIU (or Hesper) | Manual SIU (incumbent); Hesper AI (autonomous) | Hesper's layer

The asymmetry is visible at a glance. Layers 1 and 2 are crowded; a typical carrier has at least one vendor at each. Layer 3 is empty except for headcount. That is the unaddressed slot.

The framing matters because the question is not "which Layer 2 vendor should I replace." Hesper is complementary to FRISS, Shift Technology, Verisk, BAE, and SAS - the upstream flag is still what triggers the investigation. The question is whether the carrier wants Layer 3 staffed only with people or with software plus people.

Key takeaways

  • Prevention, detection, and investigation are three distinct layers with different timing, inputs, outputs, and owners; conflating them is the most common procurement mistake at carriers.
  • Layer 1 prevention reshapes the policy at quote, bind, and renewal; Layer 2 detection flags suspicious claims after FNOL; Layer 3 investigation resolves the flagged claim with an audit-ready record.
  • Most carriers buy two or three vendors at Layers 1 and 2 and treat Layer 3 as headcount, which is why only 25% of flagged claims get investigated and 75% are paid, denied, or closed without evidence.
  • NAIC Model Act 680 and state regulations like California 10 CCR 2698.36 make the Layer 2 to Layer 3 referral handoff mandatory; the bottleneck is capacity, not policy.
  • Hesper sits at Layer 3 as an autonomous investigation engine and is complementary to every Layer 2 vendor; the real decision is whether Layer 3 is staffed only with people or with software plus people.

Frequently asked questions

Can prevention and detection substitute for investigation?

No. Prevention reshapes the policy before a claim is filed, and detection flags suspicious claims after FNOL, but neither one resolves a flagged claim. Resolution requires evidence collection, document forensics, OSINT, interviews, and a written decision that holds up in court or before a state DOI. NAIC Model Act 680 and state-level regulations like California 10 CCR 2698.36 explicitly require carriers to investigate credible referrals, not just generate them. Carriers that skip Layer 3 either pay claims they should have denied or deny claims they cannot defend. Both outcomes show up later as leakage or bad-faith exposure. Detection without investigation is unfinished work.

Which layer does the SIU own?

Layer 3. SIU exists to investigate referrals from claims operations, not to score claims at FNOL. In practice, many SIU directors get pulled into Layer 2 work - tuning rules, reviewing false positives, triaging the queue - because the detection layer is noisy and the case management system does not have an autonomous Layer 3 engine behind it. The cleanest division is: claims ops and analytics own Layer 2 (the flag and the referral decision); SIU owns Layer 3 (the investigation and the audit-ready record). If your SIU team is spending most of its time on triage rather than investigation, that is a Layer 2 noise problem leaking into Layer 3.

Is AI fraud detection the same as AI fraud investigation?

It depends on the vendor. Shift Technology, FRISS, and Verisk position AI primarily at Layer 2 - scoring, network analysis, FNOL triage, case routing. Their output is still a flag or a recommendation handed to a human investigator. Hesper AI sits at Layer 3 and runs the investigation itself - 15+ phases in parallel, evidence collection, document forensics, audit-ready report - in 2-4 hours instead of 14+ days. The distinction matters because Layer 2 AI compresses the queue but does not raise investigation coverage above 25%. Layer 3 AI is what moves coverage from 25% to 100% of flagged claims.

Does a carrier need a separate vendor at every layer?

Most carriers already have vendors at Layers 1 and 2 - LexisNexis or Verisk for underwriting screens, FRISS or Shift or Verisk ClaimSearch for detection. The unaddressed layer is 3, where the incumbent is manual SIU headcount. Manual SIU runs at roughly 10 investigations per investigator per month at a fully loaded cost of about $2,500 per case. That capacity is why only 25% of flagged claims get investigated. A Layer 3 vendor (Hesper) operates at 800+ cases per investigator per month at about $150 per case and closes the coverage gap without expanding headcount. The question is not whether you need three vendors; it is whether Layer 3 is staffed with software or only with people.

In what order should the three layers be deployed?

Most carriers deploy them in the order they were sold: Layer 1 prevention first (it has been part of underwriting for decades), Layer 2 detection second (the rules-engine and ML wave of the last 15 years), and Layer 3 investigation last - often never, because it gets treated as headcount. The faster ROI path in 2026 is the reverse: if you already have Layer 2 detection running, Layer 3 investigation produces immediate coverage gains on flagged claims you have already paid to identify. Layer 1 improvements compound over years; Layer 3 improvements show up in the next quarter's loss ratio. Sequence the deployment by where the flagged-but-uninvestigated dollars sit.

Do regulators recognize the three-layer model?

Regulators do not use the layer language explicitly, but the obligations map cleanly. NAIC Model Act 680 requires carriers to maintain an anti-fraud plan covering prevention and detection. California 10 CCR 2698.36 names the Layer 2 to Layer 3 trigger directly: a red flag is what creates a credible referral, and SIU must investigate it (with documented reasoning if it declines). State Insurance Fraud Bureaus expect carriers to refer suspected fraud upward as well. The regulatory architecture assumes all three layers function. When Layer 3 capacity caps actual investigation at 25%, carriers are technically compliant on referral but materially under-investigated.

Where does most fraud leakage occur?

Mostly at Layers 1 and 3. Layer 1 leakage is premium misclassification and risk that should have been priced higher or excluded - workers' comp misclassification alone is estimated at $11.7 billion annually per the Coalition Against Insurance Fraud. Layer 3 leakage is the 75% of flagged claims that get paid without investigation because manual SIU cannot keep up. Layer 2 leakage exists but is smaller and mostly shows up as false negatives - claims that were never flagged. The largest dollar lever for a mid-size carrier in 2026 is closing the Layer 3 coverage gap, because the flags are already being generated and the dollars are sitting in the uninvestigated pile.


See Hesper AI on your documents

Request a demo and we'll run an analysis on your real document samples.