When a CIO at a top-25 P&C carrier opens the security review for an AI claims-investigation vendor, the first question is rarely "do you have SOC 2." It is "what is scoped in your SOC 2, what does your data-handling addendum say, and where in your architecture does our nonpublic information sit." SOC 2 Type II is the floor. The ceiling is built control by control on top of it.
This post is for Priya, the Carrier CIO who reviews the deal after the SIU Director has championed it and the CFO has nodded - quickly, in days, but with a hard veto if anything is off. It defines what a SOC 2 Type II report actually attests to under the AICPA Trust Services Criteria, lists the seven data-handling controls she has to verify beyond the report, maps those controls to the state regulatory frame, and gives her a procurement-ready checklist she can hand to a vendor before the security-review meeting.
The frame around the procurement is hardening fast. The NYDFS Second Amendment to 23 NYCRR 500 took effect November 1, 2023, and the NAIC Insurance Data Security Model Law (Model #668) has now been adopted in roughly two dozen states. The carrier is regulated; the AI vendor sits inside that regulation as a third-party service provider. For the broader vendor frame around this security review, see our guide to autonomous AI claims investigation and the parallel Claims VP deployment playbook.
The Carrier CIO's security bar for AI fraud investigation
Priya is reviewing the vendor for a specific reason. The SIU is the bottleneck. Manual investigation takes 14+ days per case, the team covers only ~25% of flagged claims, and the board has noticed. The AI vendor closes that gap by running investigations in 2-4 hours and lifting coverage to 100% of flagged claims. Her job is to confirm that closing the coverage gap does not open a security or regulatory gap.
Her bar has three parts. First, SOC 2 Type II in hand, with the audit period current and the scoped criteria readable. Second, a specific set of data-handling controls beyond SOC 2 documented in the security addendum and provable on architecture review. Third, a documented map from the vendor's controls to the state regulatory frame the carrier files under. Each part fails independently. A vendor with a clean SOC 2 but no model-training-boundary clause fails. A vendor with strong controls but no NAIC Model #668 awareness fails.
Integration shape and security posture move together in this review. Priya looks at where the data sits before she looks at the architecture diagram, and the two conversations happen in the same meeting. See hidden integration costs for legacy claims AI for why the integration shape itself surfaces security questions a generic SaaS review will miss.
What SOC 2 Type II actually proves (and does not)
A SOC 2 Type II report attests that controls at a service organization operated effectively over a stated period - usually 6 to 12 months - against one or more of the five AICPA Trust Services Criteria. The five criteria, per the AICPA SOC 2 overview, are Security, Availability, Processing Integrity, Confidentiality, and Privacy. Only Security is mandatory in every SOC 2 audit. The other four are optional, scoped in at the service organization's election.
This is the single most misread fact in vendor-security procurement. A vendor can hold a current SOC 2 Type II report that scoped only Security. The attestation is real and the report is genuine, but it says nothing about how the vendor handles confidential data, how it protects privacy, or how it ensures processing integrity. SOC 2 is an attestation report, not a certification, and its scope is elective. Priya reads the scope page, not the cover.
Type I versus Type II also matters. A SOC 2 Type I report attests that controls were designed appropriately at a single point in time. Type II attests that those controls operated effectively over the audit period. For an enterprise AI fraud investigation procurement, Type II is the bar because investigation workflows handle nonpublic information continuously. A Type I in hand without a Type II in flight is a vendor that is still maturing its security program.
Read the scope, not the cover
A "SOC 2 Type II" attestation is only as broad as the Trust Services Criteria it scoped in. For an AI claims-investigation vendor, Confidentiality is non-negotiable, Privacy is required when PHI flows through, and Processing Integrity is the criterion that maps to the audit-trail substrate the SIU Director will inspect during the antifraud-plan review.
Seven data-handling controls beyond SOC 2
SOC 2 frames the question. The answer is in the specific control set. Seven controls survive the procurement conversation when nonpublic information flows from a Carrier CIO's environment into an AI investigation vendor: encryption at rest and in transit, tenant isolation, data residency, PII minimization, the model-training boundary, sub-processor disclosure (with breach-notification SLA), and retention and deletion. Each one connects to a specific clause in the state regulatory frame and to a specific question the carrier's GC will ask if a case ever reaches deposition.
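The PII-minimization control is the most mechanical of the seven: direct identifiers are masked before claim text leaves the carrier's environment. A minimal sketch, assuming a hypothetical pre-egress redaction step (the function name and patterns here are illustrative, not any vendor's actual implementation):

```python
import re

# Hypothetical pre-egress redaction step: mask direct identifiers in claim
# narratives before they flow to an external OSINT or model-inference service.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
PHONE_RE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def minimize_pii(text: str) -> str:
    """Replace direct identifiers with placeholder tokens."""
    text = SSN_RE.sub("[SSN]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

note = "Claimant SSN 123-45-6789, callback 555-867-5309."
print(minimize_pii(note))
# -> Claimant SSN [SSN], callback [PHONE].
```

A production minimization layer would cover far more identifier classes (names, addresses, policy numbers) and would be verified on the architecture review, but the contractual question is the same: which fields leave the carrier's environment, and in what form.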
The model-training-boundary clause is the single most contested line in the contract. Many AI vendors have default terms that allow customer data to be used for model improvement. That is unworkable for a P&C carrier because claim data is nonpublic information under NYDFS 500 and Model #668, and it may include protected health information under HIPAA. The procurement-ready posture is: customer claims data is not used for cross-customer training by default. If the vendor cannot answer this in writing, the review pauses.
For a broader vendor-evaluation rubric that includes commercial, integration, and operational questions alongside this security set, see our vendor-evaluation checklist for AI fraud investigation. This post is the security-specific subset.
NAIC Model #668 and NYDFS 23 NYCRR 500: the state regulatory frame
A Carrier CIO is deploying inside a regulated frame. The NAIC adopted the Insurance Data Security Model Law (Model #668) in 2017, designed to align with NYDFS 23 NYCRR 500. It has been adopted in roughly two dozen states since. Both Model #668 and NYDFS 500 require insurance licensees to oversee third-party service providers handling nonpublic information, encrypt that information, and notify the regulator within 72 hours of determining that a cybersecurity event has occurred. An AI fraud investigation vendor is a third-party service provider under both frames.
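The timing logic behind the breach-notification clauses is simple arithmetic, but it is worth making explicit. A sketch, assuming a hypothetical 48-hour vendor SLA against the regulator's 72-hour window (dates are illustrative):

```python
from datetime import datetime, timedelta

# Hypothetical worst-case timeline: the vendor confirms a cybersecurity event,
# notifies the carrier at the outer edge of a 48-hour contractual SLA, and the
# carrier must notify the regulator within 72 hours of its own determination
# (NYDFS 500.17 / NAIC Model #668).
vendor_confirmed = datetime(2024, 3, 1, 9, 0)
vendor_sla = timedelta(hours=48)
regulator_window = timedelta(hours=72)

carrier_notified_by = vendor_confirmed + vendor_sla
# Worst case: the carrier's determination happens the moment the vendor notifies.
carrier_files_by = carrier_notified_by + regulator_window

# Slack left if a regulator later argues the clock ran from vendor confirmation.
margin = regulator_window - vendor_sla
print(carrier_notified_by, carrier_files_by, margin)
```

The point of the exercise: any vendor SLA wider than the regulator's window leaves the carrier zero or negative margin, which is why the checklist below asks for 24-48 hours in writing.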
The third-party oversight requirement lives in 23 NYCRR 500.11. It mandates a written policy with four elements: identification and risk assessment of third-party providers, a due-diligence process for selecting them, minimum cybersecurity practices required of providers, and periodic reassessment. The vendor security addendum has to map cleanly to those four elements. The NYDFS Second Amendment, effective November 1, 2023, tightened MFA, encryption, and breach-notification expectations across the board.
Model #668 §4 requires every licensee to submit an annual written certification of compliance to the state insurance commissioner by February 15. The vendor's security posture and data-handling controls feed directly into that certification. If the vendor cannot produce evidence on a 30-day cycle - audit reports, sub-processor changes, breach-notification logs - the carrier's own filing weakens.
The audit substrate matters as much as the policy. Hesper is built audit-trail-native: 15+ investigation phases run in parallel on every flagged claim and each phase logs sources, reasoning, and timestamps. That per-case, per-action evidence chain maps directly to California 10 CCR 2698.36's documented-decision requirement and to the documented investigation history a state DOI may pull on audit. For where in the stack this sits, see prevention vs detection vs investigation. Detection vendors emit a score; an investigation vendor emits a reconstructable evidence chain.
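The per-phase evidence chain described above can be pictured as a structured record. A minimal sketch - the field names and the dataclass shape are illustrative, not Hesper's actual schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical shape of a per-phase audit record: sources, reasoning, and
# timestamps captured for every investigation phase on every flagged claim.
@dataclass
class PhaseAuditRecord:
    claim_id: str
    phase: str          # e.g. "loss_history", "osint_social"
    sources: list[str]  # every source the phase consulted
    reasoning: str      # why the phase reached its conclusion
    started_at: str
    finished_at: str

record = PhaseAuditRecord(
    claim_id="CLM-2024-00123",
    phase="loss_history",
    sources=["prior-loss database", "internal policy system"],
    reasoning="Three prior losses at the same address within 18 months.",
    started_at=datetime(2024, 3, 1, 14, 0, tzinfo=timezone.utc).isoformat(),
    finished_at=datetime(2024, 3, 1, 14, 7, tzinfo=timezone.utc).isoformat(),
)
# Serialized, records like this are what a DOI auditor would receive on request.
print(json.dumps(asdict(record), indent=2))
```

The structural point is that the record exists per case and per action, so an investigation can be reconstructed months later without reverse-engineering a score.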
SOC 2 Type II vs additional Carrier CIO requirements (typical coverage)

| Control | Covered by a typical SOC 2 Type II? | Where the Carrier CIO verifies it |
|---|---|---|
| Encryption at rest and in transit | Usually (Security criterion) | Security addendum, architecture review |
| Tenant isolation | Only if Confidentiality is scoped in | Architecture review |
| Data residency | Rarely | Residency map in contract |
| PII minimization | No | Data-flow review |
| Model-training boundary | No | Contract clause, in writing |
| Sub-processor disclosure + breach SLA | Partially | Published list, SLA clause |
| Retention and deletion | Only if Confidentiality is scoped in | Retention schedule, deletion certification |
HIPAA when claims include medical records
Workers compensation, auto bodily injury, life, and health claims routinely include medical records. The moment protected health information flows through an AI vendor's systems, the vendor is a Business Associate under the HIPAA Security Rule and a written Business Associate Agreement is required before the data flows. This is not a negotiation item. If the vendor will not sign a BAA, the deal stops there for any line that touches medical records.
The BAA obligates the vendor to implement the administrative, physical, and technical safeguards required under the Security Rule, to report breaches to the covered entity, and to require equivalent obligations of any sub-processor. The vendor's sub-processor list is a hard input here. A cloud provider, an OSINT data vendor, or a model-inference provider that touches PHI is a sub-business-associate and needs its own BAA chain.
GLBA sits underneath HIPAA as a broader federal floor on insurer data handling. The state frame - NAIC Model #668, NYDFS 500, California 10 CCR 2698.36 - layers on top. The vendor's data-handling controls need to satisfy every one of these frames simultaneously, which is why a generic SaaS security posture is not sufficient for a carrier deployment.
The procurement-ready CISO checklist
Priya hands this list to the vendor for written response before the security-review meeting. The vendor that answers cleanly in writing is the vendor that survives the review.
- Current SOC 2 Type II report, audit period within the last 12 months, with next audit period scheduled.
- Trust Services Criteria scoped in: Security plus Confidentiality at minimum, Privacy if PHI flows, Processing Integrity if audit trail is part of the procurement value.
- Published list of sub-processors with what each one processes and where, plus 30-day change-notification commitment.
- Training-data policy in writing: customer claims data is not used for cross-customer model training by default, contract clause confirming.
- Data-residency map: US-region pinning available, regional architecture diagram, no required egress to non-US regions for claim data or OSINT processing.
- Tenant-isolation architecture: per-tenant key separation, no shared model fine-tunes across customers, written description.
- Encryption specifications: AES-256 at rest, TLS 1.2+ in transit, customer-managed key option, key-rotation policy.
- HIPAA BAA willingness in writing if any line of business carries PHI.
- Breach-notification SLA: vendor notifies carrier within 24-48 hours of a confirmed cybersecurity event, in writing, with scope assessment and named-contact protocol.
- Retention and deletion policy: configurable retention, contractual deletion-on-termination, written certification of deletion.
- Evidence of NYDFS 23 NYCRR 500 or NAIC Model #668 awareness in the vendor's security program (mapping document, not just a marketing statement).
- Per-case audit-trail architecture: how investigation actions are logged, retained, and produced on regulatory request.
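The tenant-isolation item on the list is the one most often answered with a diagram rather than a mechanism, so it helps to know what "per-tenant key separation" looks like in the simplest case. A sketch, assuming a hypothetical HMAC-based key-derivation scheme; a production deployment would hold the master key in a KMS or HSM and offer the customer-managed-key option the encryption item requires:

```python
import hmac
import hashlib

# Hypothetical per-tenant key derivation: each tenant's data-encryption key is
# derived from a master key and the tenant ID, so no two tenants ever share a
# key and one tenant's key can be rotated or destroyed without touching others.
MASTER_KEY = bytes.fromhex("00" * 32)  # placeholder; never hard-code real keys

def tenant_key(tenant_id: str) -> bytes:
    """Derive a 256-bit per-tenant key (HMAC-SHA-256 as a one-step KDF)."""
    return hmac.new(MASTER_KEY, tenant_id.encode(), hashlib.sha256).digest()

k_a = tenant_key("carrier-a")
k_b = tenant_key("carrier-b")
assert k_a != k_b and len(k_a) == 32  # distinct 256-bit keys per tenant
```

The written description the checklist asks for should name the actual mechanism (KMS key hierarchy, per-tenant encryption contexts, or equivalent) rather than asserting isolation in the abstract.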
The last item on the list is the one most generic AI vendors fumble. Hesper emits a per-case audit trail because the investigation architecture is built that way: 15+ phases run in parallel on every flagged claim and each phase logs its sources, reasoning, and timestamps. The audit trail is not a feature added on top of the product. It is the product.
Key takeaways
- SOC 2 Type II is the floor for an AI fraud investigation procurement, not the ceiling, and the report only attests to controls against the Trust Services Criteria the vendor scoped in.
- The seven data-handling controls a Carrier CIO has to verify beyond SOC 2 are encryption, tenant isolation, residency, PII minimization, model-training boundary, sub-processor disclosure plus breach SLA, and retention and deletion.
- NYDFS 23 NYCRR 500 and NAIC Model #668 both treat an AI investigation vendor as a third-party service provider and put a 72-hour breach-notification obligation on the carrier, so the vendor's SLA has to be tighter than that window.
- When claims include medical records, a HIPAA Business Associate Agreement is a hard prerequisite, not a negotiation item, and the sub-processor chain has to carry its own BAAs.
- An AI investigation vendor that logs sources, reasoning, and timestamps per phase satisfies California 10 CCR 2698.36's documented-decision requirement without a separate compliance project.