Carriers buy "fraud platforms" without naming which layer the platform plays in, which is why two adjacent purchases often overlap on detection and leave investigation untouched. Prevention, detection, and investigation are three distinct anti-fraud layers - different timing, different inputs, different outputs, different owners. Most procurement cycles treat them as synonyms.
The result is predictable. Carriers stack two or three vendors at Layers 1 and 2, then assume Layer 3 is a headcount problem for the SIU. According to the Coalition Against Insurance Fraud, US insurance fraud costs $308 billion a year and roughly 10% of property-casualty losses involve fraud. The flags are being generated. The investigations are not getting done.
This post defines each layer, maps which vendors play in each, and walks through the handoffs where the model breaks. For the wider fundamentals, see our insurance fraud detection pillar; this piece sits underneath it as the framework view.
Why the three layers get conflated
Vendor marketing uses "fraud platform," "fraud solution," and "end-to-end fraud" interchangeably across underwriting, claims operations, and special investigations. A buyer reading three product pages cannot tell whether the vendor stops at scoring a claim or actually resolves it. The shorthand collapses three layers into one budget line.
The mechanical difference is timing. Prevention fires at the policy event (quote, bind, renewal). Detection fires after a claim is filed and produces a score or a referral decision. Investigation fires after a claim has been referred to SIU and produces a written decision with evidence behind it. Each layer answers a different question.
The financial difference shows up at the handoffs. Layer 2 detection produces flags at a 60-85% false positive rate under legacy rules engines, and a manual Layer 3 SIU has the capacity to investigate roughly 25% of what Layer 2 refers. The remaining 75% gets paid, denied without evidence, or quietly closed. That gap is the bottleneck most carriers misdiagnose as "more detection."
The vocabulary trap
A vendor that calls itself an "AI fraud platform" is almost always a Layer 2 tool. The output is a flag or a recommended action, not a closed investigation. Confirm the output type, not the platform name.
Layer 1 - Prevention (before the claim is filed)
Prevention reshapes the policy itself. The carrier uses applicant data plus third-party data to decline a risk, reprice a premium, or add an exclusion before any claim is possible. The owner is underwriting and actuarial, not claims.
Inputs are applicant disclosures, motor vehicle records, prior-loss history, identity and address verification, and risk scoring from providers like LexisNexis Risk Solutions and Verisk's A-PLUS database. Outputs are bind decisions, repriced premiums, and explicit exclusions in the policy form. Core systems like Guidewire and Duck Creek wire these checks into the underwriting workflow; for a deeper view of those platforms see our claims management systems comparison.
Layer 1 failures are silent. The clearest example is workers' compensation premium misclassification - employers under-reporting payroll or misclassifying job codes to pay a lower premium. The Coalition Against Insurance Fraud estimates misclassification leaks $11.7 billion in workers' compensation premium annually. That is a Layer 1 failure, not a Layer 2 failure. No amount of FNOL scoring catches a policy that was mispriced two years before the claim was filed.
Vendors that play here: LexisNexis Risk Solutions, Verisk underwriting (A-PLUS, LightSpeed), Duck Creek and Guidewire underwriting modules, and FRISS Underwriting. None of them investigate claims. Their output is a risk decision at the policy boundary.
Layer 2 - Detection (after FNOL, before SIU)
Detection scores or flags suspicious claims after they enter the claims system. The output is a risk score and a referral decision - send to SIU or release for payment - not a conclusion. Detection answers "is this worth a closer look," not "did fraud occur."
When it fires: at first notice of loss, then again at major claim events (medical bills posted, supplements added, surveillance triggers). Inputs are the claim file, cross-carrier database matches (Verisk's ClaimSearch is the largest), document submissions, and network or graph signals across providers, attorneys, and claimants. Owners are claims operations and the fraud analytics team that tunes the rules.
The mechanics are rules plus ML scoring plus network analysis. The 60-85% false positive rate lives here. We covered the mechanics in legacy rules vs. autonomous AI fraud detection; the short version is that rules engines fire on patterns, ML scorers fire on similarities, and neither one collects evidence.
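To make the distinction concrete, here is a minimal sketch of what a Layer 2 output looks like. This is a hypothetical illustration, not any vendor's actual logic: the rule, the toy score weighting, and the 0.5 referral threshold are all invented for the example. The point is what the output contains - and what it doesn't.

```python
# Hypothetical Layer 2 detection sketch. A rule fires on a pattern, a scorer
# outputs a similarity-based probability; neither collects evidence.

def rule_early_claim(claim: dict) -> bool:
    # Rules engine: fires when a loss is reported soon after the policy binds.
    return claim["days_since_bind"] < 30

def ml_score(claim: dict) -> float:
    # Stand-in for an ML scorer - a toy weighting, not a real model.
    return 0.15 + 0.6 * rule_early_claim(claim)

def layer2_output(claim: dict) -> dict:
    score = ml_score(claim)
    return {
        "claim_id": claim["id"],
        "score": score,
        "refer_to_siu": score >= 0.5,  # the referral decision
        # Note what is absent: no evidence, no timeline, no written
        # conclusion. Resolving the claim is Layer 3's job.
    }

print(layer2_output({"id": "CLM-001", "days_since_bind": 12}))
```

Whatever the real model looks like, the shape of the output is the same: a score and a routing decision, which is exactly where the regulatory handoff to Layer 3 begins.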
The regulatory trigger between Layer 2 and Layer 3 is explicit. California 10 CCR 2698.36 states that the SIU "shall investigate each credible referral of suspected insurance fraud" and defines a credible referral as one that includes a red flag. The red flag is the detection output. The investigation is the next layer.
“The SIU shall investigate each credible referral of suspected insurance fraud... A credible referral... is one that includes a red flag or red flags.”
- California 10 CCR 2698.36
Customer language confirms the handoff. AXA Switzerland, describing its Shift Technology deployment, says the platform helps them "consistently identify suspicious activities at FNOL, and assign the claim to the appropriate expert for investigation." Identify and assign - both Layer 2. The expert investigation is still Layer 3.
Vendors that play here: FRISS, Shift Technology, Verisk ClaimSearch and ClaimDirector, BAE Systems NetReveal, SAS Fraud Framework. Each one produces a flag, a score, or a routing recommendation. None of them produce an audit-ready investigation report.
Layer 3 - Investigation (resolve the flagged claim)
Investigation produces two things: a decision (pay, deny, negotiate, refer to DOI or law enforcement) and an audit-ready record that supports the decision. The work is evidence collection, OSINT, document forensics, provider and claimant background, timeline reconstruction, and (where warranted) interviews. The owner is SIU - or, in many states, an external Insurance Fraud Bureau referral.
The capacity reality at most carriers is brutal. A single investigator holds 200+ cases at a time and closes roughly 10 investigations per month. That means about 25% of Layer 2 referrals actually get a real investigation; 75% are paid, denied with thin support, or quietly closed. The cost per manual investigation runs about $2,500 fully loaded, and each case takes 14+ days. The Insurance Information Institute notes SIU staffing grew only 1.4% from 2021 to 2022 - the bottleneck is structural, not budget cycle noise.
Autonomous Layer 3 changes the unit economics. Hesper AI runs 15+ investigation phases in parallel and closes a case in 2-4 hours at about $150 per case, taking a single investigator from ~10 cases per month to 800+ cases per investigator per month. Coverage on flagged claims moves from 25% to 100%. The investigator's role shifts from execution to decision-making on the cases the agent surfaces.
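The capacity arithmetic behind those coverage figures can be made explicit. One number is an assumption: the post gives the 25% coverage ratio directly but not the referral volume, so the ~40 referrals per investigator per month below is chosen to be consistent with 10 closures yielding 25%.

```python
# Back-of-envelope Layer 3 unit economics, using figures from the text.
# ASSUMPTION: ~40 new referrals per investigator per month (not stated in
# the post; implied by 10 closures/month at the quoted 25% coverage).

referrals_per_investigator_month = 40   # assumed
manual_closures_per_month = 10          # from the text
manual_cost_per_case = 2_500            # fully loaded, from the text
auto_closures_per_month = 800           # autonomous agent, from the text
auto_cost_per_case = 150                # from the text

manual_coverage = manual_closures_per_month / referrals_per_investigator_month
auto_coverage = min(1.0, auto_closures_per_month / referrals_per_investigator_month)

print(f"manual coverage: {manual_coverage:.0%}")
print(f"autonomous coverage: {auto_coverage:.0%}")
print(f"cost per case ratio: {manual_cost_per_case / auto_cost_per_case:.0f}x")
```

Run the numbers and the structural point falls out: manual coverage sits at 25%, autonomous coverage saturates at 100%, and the per-case cost gap is roughly 17x.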
Layer 3 investigation: manual SIU vs. autonomous agent

| Metric | Manual SIU | Autonomous agent (Hesper AI) |
|---|---|---|
| Time per case | 14+ days | 2-4 hours |
| Cost per case | ~$2,500 fully loaded | ~$150 |
| Closures per investigator per month | ~10 | 800+ |
| Coverage of flagged claims | ~25% | 100% |
Vendors that play here: manual SIU teams (the incumbent at every carrier) and Hesper AI as the autonomous Layer 3 engine. Before signing at this layer, walk through our AI fraud investigation vendor checklist - the diligence questions for Layer 3 are different from the ones for Layer 2.
How the three layers integrate (and where carriers fail)
The failure mode is not inside a layer. Most Layer 1 and Layer 2 vendors do what they sell. The failure is at the handoffs - the connective tissue between layers.
Layer 1 to Layer 2: missed prevention compounds detection load
When underwriting misses a misclassification or accepts a risk it should have repriced, every subsequent claim on that policy enters Layer 2 with a higher baseline risk. The detection engine sees more anomalies, generates more flags, and the queue grows. Carriers respond by tuning thresholds higher, which raises false negatives. The leak compounds.
Layer 2 to Layer 3: regulators mandate the handoff; capacity caps execution
The NAIC Insurance Fraud Prevention Model Act (Model 680) requires carriers to maintain an anti-fraud plan covering both detection and investigation, and to report suspected fraud. State regulations like California 10 CCR 2698.36 name the trigger directly: red flag in, SIU investigation out. The policy is clear. The capacity is not. With 200+ cases per investigator and ~10 monthly closures, the 25% coverage ratio is a function of arithmetic, not effort.
Layer 3 back to Layer 1: the broken loop
Investigation findings should feed underwriting. A confirmed staged-loss ring, a corrupt provider, a repeat claimant pattern - these belong in the next renewal cycle's risk model. At most carriers this loop is informal or nonexistent. SIU writes a report, the report goes to legal or to the IFB, and underwriting never sees the structured signal. Our deep-dive on Hesper vs. Verisk walks through why an autonomous Layer 3 makes the back-loop feasible: structured output, not narrative PDFs.
What carriers run at each layer in 2026 (vendor matrix)
Three rows, one per layer. Read across to see which vendor families play in each layer and where Hesper sits.

| Layer | Fires at | Representative vendors |
|---|---|---|
| Layer 1 - Prevention | Quote, bind, renewal | LexisNexis Risk Solutions, Verisk (A-PLUS, LightSpeed), Duck Creek and Guidewire underwriting modules, FRISS Underwriting |
| Layer 2 - Detection | FNOL and major claim events | FRISS, Shift Technology, Verisk ClaimSearch and ClaimDirector, BAE Systems NetReveal, SAS Fraud Framework |
| Layer 3 - Investigation | After SIU referral | Manual SIU teams; Hesper AI (autonomous) |
The asymmetry is visible at a glance. Layers 1 and 2 are crowded; most carriers run at least one vendor at each. Layer 3 is empty except for headcount. That is the unaddressed slot.
The framing matters because the question is not "which Layer 2 vendor should I replace." Hesper is complementary to FRISS, Shift Technology, Verisk, BAE, and SAS - the upstream flag is still what triggers the investigation. The question is whether the carrier wants Layer 3 staffed only with people or with software plus people.
Key takeaways
- Prevention, detection, and investigation are three distinct layers with different timing, inputs, outputs, and owners; conflating them is the most common procurement mistake at carriers.
- Layer 1 prevention reshapes the policy at quote, bind, and renewal; Layer 2 detection flags suspicious claims after FNOL; Layer 3 investigation resolves the flagged claim with an audit-ready record.
- Most carriers buy two or three vendors at Layers 1 and 2 and treat Layer 3 as headcount, which is why only 25% of flagged claims get investigated and 75% are paid, denied, or closed without evidence.
- NAIC Model Act 680 and state regulations like California 10 CCR 2698.36 make the Layer 2 to Layer 3 referral handoff mandatory; the bottleneck is capacity, not policy.
- Hesper sits at Layer 3 as an autonomous investigation engine and is complementary to every Layer 2 vendor; the real decision is whether Layer 3 is staffed only with people or with software plus people.