A modern SIU runs on three vendor layers, and most carriers only pay for two of them. The detection layer fires alerts. The case management layer holds the file. The investigation layer in the middle - the actual work of checking facts, pulling records, building a timeline, and writing a defensible report - sits almost entirely on human investigators with browser tabs open. That gap is the bottleneck behind every "we flagged it but never closed it" claim.
This post maps the 2026 SIU stack across all three layers, names the vendors that occupy each, and explains why the middle layer is the one most teams have to build out next. The detection market is mature. The case management market is mature. The investigation layer is where the procurement conversation is moving in 2026, partly because regulators have started to ask harder questions about what carriers actually do with their fraud alerts. For the broader operations context, see our companion piece on why most flagged claims never get investigated. For the end-to-end view of modernizing SIU operations, our pillar guide walks through staffing, workflow, and audit posture alongside the vendor stack covered here.
The three-layer model of the SIU stack
A useful way to read the SIU technology market is to separate signal from evidence from record. Detection generates signal. Investigation produces evidence and a decision. Case management holds the record. These are different problems with different vendor categories, and the failure mode at most carriers is treating "fraud platform" as a single product when it is three.
According to the Insurance Information Institute, about 90% of insurers use technology primarily to detect claims fraud, citing the Coalition Against Insurance Fraud and SAS State of Insurance Fraud Technology survey. Almost all of that adoption is concentrated in layer 1 (detection) and layer 3 (case management). The layer in between is where investigators still keep eight tabs open and a Word document on a second monitor.
The numeric consequence is well understood. Manual investigators close roughly 10 cases per month. Caseloads sit at 200+ per investigator. The arithmetic forces triage: only 25% of flagged claims get a real investigation. The other 75% close at face value, get paid, or get denied without a documented investigation - which is a different kind of risk. AI investigation agents change the key variable: 800+ cases per investigator per month at $150 per case instead of ~$2,500.
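The per-investigator arithmetic above can be sketched in a few lines. All figures are the illustrative numbers cited in this article, not measured benchmarks from any specific deployment.

```python
# Back-of-envelope SIU throughput and cost math, using the article's figures.
manual_rate, agent_rate = 10, 800       # closed cases per investigator-month
manual_cost, agent_cost = 2_500, 150    # approximate $ per investigated case

throughput_multiple = agent_rate / manual_rate   # 80x more cases closed
cost_reduction = 1 - agent_cost / manual_cost    # 94% lower cost per case

print(f"throughput: {throughput_multiple:.0f}x, "
      f"per-case cost down {cost_reduction:.0%}")
```

The point of the sketch is that both variables move at once: the same headcount clears a larger share of the flagged queue while the unit cost of an investigation drops.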
Layer 1 - Detection: catching the signal
Detection produces alerts. It does not produce conclusions. A false positive rate of 60-85% is a property of rules-based scoring, not a vendor defect. Even the better predictive models - and the predictive-model adoption curve has been steep, per CAIF's biennial survey - flag many claims that turn out clean once an investigator pulls the records. The job of layer 1 is to make sure the suspect claims are in the queue, not to decide which ones to deny.
The detection vendor landscape in 2026 has three recognizable categories. There is the policy-lifecycle scorer, the AI-driven decision support platform, and the cross-carrier database utility.
Policy-lifecycle scorers
FRISS is the canonical example, scoring quote, renewal, and claim activity for misrepresentation and fraud signals. The output is a risk score plus structured fact-building screens for human investigators. This pattern complements an investigation layer. It does not replace one.
AI-driven decision support
Shift Technology runs across underwriting and claims and reports more than 4 billion policies and claims analyzed across 350 million policyholders, per company materials. Their messaging in 2025-2026 emphasizes "AI agents" across fraud, coverage and liability, subrogation, and injury. The volume claim is detection-centric: cross-carrier intelligence at scale.
Cross-carrier data utilities
Verisk ClaimSearch (formerly ISO ClaimSearch) is the industry database of record for claims. CCC and Tractable contribute photo-based estimating and auto fraud signals. These are infrastructure layers; every detection vendor and most investigation tools query them. They are complementary to everything else in the stack.
Detection is upstream; investigation is downstream
A 60-85% false positive rate is not a reason to abandon detection. It is a reason to invest in the layer that processes the alerts. Without a downstream investigation capacity that scales, detection accuracy converts directly into investigator backlog.
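The backlog claim above is simple arithmetic. A rough sketch, using the article's false-positive range and assumed values for alert volume and hours per investigation (both are illustrative assumptions, not figures from the article):

```python
# Illustrative backlog math. fp_rate is the midpoint of the article's 60-85%
# range; alert volume and hours-per-investigation are assumed for the sketch.
alerts_per_month = 1_000
fp_rate = 0.70
hours_per_investigation = 8        # assumed average, including report writing
investigator_hours_month = 160     # one FTE-month

clean_but_flagged = int(alerts_per_month * fp_rate)            # still need working
total_hours = alerts_per_month * hours_per_investigation
investigators_needed = total_hours / investigator_hours_month  # FTEs to keep up

print(f"{clean_but_flagged} false positives per month still consume "
      f"investigation time; clearing the full queue takes "
      f"~{investigators_needed:.0f} investigators")
```

Under these assumptions, most of the required headcount exists to clear alerts that turn out clean - which is why improving detection precision alone cannot close the gap.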
Layer 2 - Investigation: the bottleneck nobody bought a tool for
Most carriers have no investigation-layer software. They have investigators, and the investigators have browser bookmarks. A typical SIU desk in 2026 still routes a flagged claim through LexisNexis, Verisk ClaimSearch, Google Maps, Carfax, NMVTIS, social media, internal claim notes, sometimes Westlaw, and a forensic accountant on contract. The output is a Word document that a supervisor reviews. This is the layer that consumes the 14+ days.
The investigation layer is where Hesper AI sits. Investigation agents run 15+ phases in parallel - identity verification, prior-loss history, social and public records, medical provider review, vehicle history, location and weather forensics, statement consistency, and damage versus mechanism analysis among them. The output is an audit-ready report in 2-4 hours per case. We covered the parallel-processing architecture in how parallel investigation phases compress 14 days into hours.
Layer 1 vs Layer 2 economics, per investigator
The per-investigator comparison: a manual workflow closes roughly 10 cases per month at about $2,500 per case; an agent-assisted workflow closes 800+ per month at about $150 per case.
The structural argument for a dedicated investigation layer is not that humans are slow. It is that the investigation work is parallelizable and the human workflow is serial. A single claim has 15+ independent verification tasks. A human runs them in sequence across days. An agent runs them concurrently in a window measured in minutes.
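The serial-versus-concurrent point can be demonstrated with a minimal scheduling sketch. The phase names come from this article; the runtimes are invented purely to illustrate the wall-clock difference between running independent I/O-bound lookups in sequence versus in parallel.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Phase names from the article; sleep times are stand-ins for records lookups.
PHASES = [
    "identity_verification", "prior_loss_history", "social_public_records",
    "medical_provider_review", "vehicle_history", "location_weather",
    "statement_consistency", "damage_vs_mechanism",
]

def run_phase(name: str) -> str:
    time.sleep(0.1)  # simulated I/O-bound external lookup
    return f"{name}: complete"

# Serial: wall time is roughly the SUM of phase times (the human workflow).
start = time.perf_counter()
serial_results = [run_phase(p) for p in PHASES]
serial_s = time.perf_counter() - start

# Concurrent: wall time is roughly the SLOWEST single phase.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(PHASES)) as pool:
    parallel_results = list(pool.map(run_phase, PHASES))
parallel_s = time.perf_counter() - start

print(f"serial: {serial_s:.2f}s, concurrent: {parallel_s:.2f}s")
```

Because the verification tasks do not depend on one another, the elapsed time collapses toward the duration of the single slowest lookup rather than the sum of all of them.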
“Detection vendors hand the investigator a flagged case. Hesper hands the investigator a finished, audit-ready report. The investigator's role shifts from execution to decision-making.”
- Hesper AI product research, Q2 2026
Layer 3 - Case management: the system of record
Case management is where the file lives, but it is not where the investigation happens. Guidewire ClaimCenter is the dominant P&C core suite; Guidewire describes the product as governing the entire claims lifecycle, which is exactly the right framing for a system of record. Duck Creek Claims is the cloud-native challenger. Sapiens ClaimsPro and Sapiens IDIT cover mid-market and international. Majesco Claims, ICE Claims, and Snapsheet round out the field.
The category error is treating these systems as investigation engines. They are not. They hold tasks, statuses, notes, payments, and audit trails. They host SIU referral workflows. The decision logic that fills those screens still has to come from somewhere, and "somewhere" has historically been an investigator on the keyboard. Adding AI modules to the core suite (Guidewire Predict, vendor marketplace apps) helps at the margins but does not replace a dedicated investigation layer - a dynamic we have written about in the hidden integration costs of bolting AI onto legacy claims platforms.
The right architectural question is not "does my claims core do fraud?" The question is "where does the audit-ready investigation report land in my case file, and is the chain of evidence intact?" Hesper writes that report into the case file in the carrier's existing system. We do not replace the file.
How the layers actually integrate (and where the seams break)
The handoffs are the failure points. Detection fires an alert. The alert lands in case management as an SIU referral. An investigator opens the file - then leaves the system to do the actual work. They check ClaimSearch in one tab, LexisNexis in another, Google Maps in a third, the carrier's prior-claim database in a fourth, and they paste findings back into the case notes. That round-trip is where the 14+ days live.
There are three integration patterns in market. API-first integration (case management calls the investigation service when a referral is created and receives the completed report). Marketplace apps (Guidewire Marketplace, Duck Creek Content Exchange) that ship pre-built connectors. Flat-file or batch handoff for older deployments. The pattern matters because the audit trail has to survive every hop. A defensible report is one that has provenance for every fact, and provenance is a property the case management system has to preserve.
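The provenance requirement in the API-first pattern can be made concrete with a hypothetical payload shape. None of these field names come from any vendor's actual API; they illustrate the rule that every finding must carry its source and retrieval time before the report is written back to the case file.

```python
# Hypothetical handoff shapes for the API-first pattern; field names invented.
import json
from datetime import datetime, timezone

referral = {
    "referral_id": "SIU-2026-00123",   # created by the case management system
    "claim_id": "CLM-884421",
    "trigger": "detection_score_above_threshold",
}

# The investigation service returns findings with per-fact provenance.
report = {
    "referral_id": referral["referral_id"],
    "completed_at": datetime.now(timezone.utc).isoformat(),
    "findings": [
        {
            "phase": "prior_loss_history",
            "fact": "Two prior total-loss claims in 18 months",
            "source": "ClaimSearch",
            "retrieved_at": datetime.now(timezone.utc).isoformat(),
        },
    ],
}

def provenance_intact(rep: dict) -> bool:
    """Fail closed: no finding may land in the file without source + timestamp."""
    return all("source" in f and "retrieved_at" in f for f in rep["findings"])

assert provenance_intact(report)
print(json.dumps(report, indent=2)[:60])
```

The design choice worth noting: the check runs before the write into case management, because once a fact lands in the file without provenance, the audit trail cannot be reconstructed after the fact.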
Report writing alone consumes 4-8 hours per case in manual workflows - time spent documenting rather than investigating. That is the most visible inefficiency, but the deeper one is that 75% of flagged claims never get a documented investigation. This is the gap Hesper closes by automating the middle layer rather than adding another flag-generation tool to layer 1.
California's SIU regulation as the floor, not the ceiling
The 2026 stack has to satisfy compliance hooks at the investigation layer, which is why audit trail and evidence chain matter as much as throughput. California Code of Regulations Title 10, Sections 2698.30 through 2698.43 set the modern baseline. The California Department of Insurance SIU Compliance Unit evaluates approximately 1,100 insurers each year against written-procedure, training, and annual-reporting requirements. 10 CCR 2698.39 requires five hours of annual continuing anti-fraud training for SIU personnel.
Above the state floor sits the NAIC Insurance Fraud Prevention Model Act #680 and the NAIC Antifraud Plan Guideline (GDL-1690), which most state DOIs adopt with local modifications. The shared structure is consistent: written antifraud plan, defined SIU function, training requirements, annual reporting, and demonstrable investigation of suspected fraud. "Demonstrable" is the operative word - the regulator is asking what was investigated, not just what was flagged.
This is why an investigation layer that produces an audit-ready report with full evidence chain is not a nice-to-have. It is the artifact regulators will eventually ask to see when they audit the SIU function. Detection accuracy alone does not satisfy this requirement, because the regulator does not care how many alerts fired - they care what was done about them.
A reference 2026 SIU stack
Below is a vendor-agnostic reference stack for a mid-size P&C carrier that is moving to 100% investigation coverage on flagged claims. The detection vendor and the case management system are likely already in place. The investigation layer and the data services that feed it are where 2026 procurement is concentrated.
- Layer 1, detection: a policy-lifecycle scorer (e.g. FRISS) or an AI-driven decision support platform (e.g. Shift Technology) generating the alert queue.
- Layer 2, investigation: an AI investigation agent (e.g. Hesper) that runs the verification phases and writes the audit-ready report into the case file.
- Layer 3, case management: the existing claims core (e.g. Guidewire ClaimCenter, Duck Creek Claims) as the system of record for tasks, notes, payments, and audit trails.
- Data services feeding layer 2: Verisk ClaimSearch, LexisNexis, NMVTIS, Carfax, and public-records sources.
The procurement implication is that the investigation layer is the new line item. For a buying framework, see our 12-point checklist for evaluating AI fraud investigation vendors, which works through audit trail, integration model, evidence chain, and pricing alongside throughput claims.
The carriers that will out-investigate their peers in 2027 are not the ones with the most accurate detection model. They are the ones who treated investigation as a buyable layer instead of a permanent headcount problem.
Key takeaways
- The SIU technology stack is three layers - detection, investigation, case management - and most carriers have only bought two of them.
- Detection vendors like FRISS, Shift Technology, and Verisk generate alerts; their 60-85% false positive rate is a property of rules-based scoring, not a vendor defect.
- The investigation layer is where 14+ days disappear, because most carriers have no software here at all - just investigators with browser tabs.
- Case management systems like Guidewire and Duck Creek hold the file but do not run the investigation; treating them as investigation engines is a category error.
- NAIC Model Act 680 and California's 10 CCR 2698 set the audit and reporting floor that the investigation layer has to satisfy, which is why audit-ready output - not just detection accuracy - is the buying criterion.