Industry News

Insurance AI Got Fined $107 Million in January. The Explainability Problem Starts Before the Model.

Insurers faced $107M in AI fines in Q1 2026. Every response focuses on model audits. Nobody's asking what happens when the documents the model reads aren't explainable.

6 min read · March 25, 2026
Insurance AI · AI Compliance · Explainability · Knowledge Governance · RAG · Regulatory

$107 million. That's what AI cost the insurance industry in regulatory fines in January 2026 alone — over $82 million from the New York Department of Financial Services, and $25 million in Georgia, spread across 22 carriers for algorithmic parity violations (Digital Insurance, March 22). Not guidance. Not warnings. Fines, enforced and collected. The regulatory question being asked: can you explain how your AI made this decision?

Every response in the trade press answers that question at the model layer. Audit logs. Training data lineage. Decision trails through the algorithm. That answer is correct as far as it goes. For insurers running RAG-grounded AI — systems that retrieve your actual policy documents and underwriting guidelines before generating answers — it doesn't go far enough.

The deployment-oversight gap is getting expensive

82% of insurers are now deploying generative AI, according to Deloitte's 2025 Global Insurance Outlook. The oversight infrastructure hasn't kept pace. Carriers rolling out claim-triage tools and underwriting decision support are operating in a regulatory environment that is accelerating in complexity faster than compliance teams can track.

The NAIC released its Model Bulletin on AI in December 2023. More than two years later, only 24 of 50 states have adopted it — and most of those did so with their own modifications (NAIC Adoption Tracker, Q1 2026). There is no federal standard. What Colorado requires (annual discrimination testing under SB 21-169), what Virginia mandates (not "mitigate" risk — eliminate it, a one-word change that converts a best effort into an absolute mandate), and what New York enforces (Circular Letter No. 1's specific documentation requirements) are three different compliance realities. Maryland and Texas published AI utilization review regulations on March 24, 2026. The state-by-state patchwork is actively growing.

The pace of change makes this structurally difficult to manage: the insurance industry faces over 3,300 regulatory changes per year, according to RegEd (Digital Insurance), with an increasing share AI-specific.

There's a second pressure working at the same time. AI efficiency tools are now generating policy summaries, compliance documents, and customer-facing communications faster than legal and compliance teams can review them. The volume of unstructured content feeding insurance AI systems is growing faster than the governance around it. The knowledge base expands; the oversight structure stays the same size. At some point, the gap between those two trajectories starts to look like evidence of negligence rather than just operational lag.

Why model-layer explainability misses half the question

The standard regulatory response to an adverse AI decision is a model audit trail: here's how the algorithm was trained, here are the features it used, here's the decision logic. For black-box models, getting even this far is a significant compliance lift.

But for RAG-grounded insurance AI, the explainability chain has a second component that current coverage almost entirely ignores.

When an AI denies a claim, it doesn't just run an algorithm. It reads something first. It retrieves specific content from the knowledge base — the relevant underwriting guideline, the current claims policy, the applicable reinsurance covenant. Then it acts on what it read. That means "how did the AI decide?" is only half the question. The other half is: what was the AI reading when it decided?
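To make both halves of that question concrete, here is a minimal sketch in Python of what recording the "what was it reading" half could look like at decision time. Everything in it is illustrative: the `retriever` and `llm` objects, the `RetrievedSource` fields, and `answer_with_trace` are assumptions for the sketch, not a description of any specific product or library.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative record of what the system was reading when it decided:
# which documents were retrieved, at which version, approved by whom.
@dataclass
class RetrievedSource:
    document_id: str
    version: str
    last_updated: datetime
    approved_by: str
    text: str

@dataclass
class DecisionTrace:
    question: str
    sources: list[RetrievedSource]
    answer: str
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def answer_with_trace(question: str, retriever, llm) -> DecisionTrace:
    """Retrieve first, then generate, and keep a record of both halves of the chain."""
    sources = retriever.search(question, top_k=5)               # hypothetical retriever interface
    context = "\n\n".join(s.text for s in sources)              # the text the model actually read
    answer = llm.generate(question=question, context=context)   # hypothetical LLM client
    return DecisionTrace(question=question, sources=list(sources), answer=answer)
```

The point of the sketch is the shape of the record, not the API: the trace captures the retrieved sources alongside the answer, so "what was the AI reading when it decided?" has an answer months later.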

Consider what regulators actually find in adverse-action reviews. The algorithm is documented. The decision trail exists. But the underwriting guideline the system retrieved hasn't been updated since the risk parameters changed six months ago. The claims policy and the reinsurance covenant say different things about the same coverage category. Neither has a version history showing who approved the current text and when.

The model trail is clean. It points straight to sources that can't be defended.

Snowflake CIO Mike Blandina, who held senior data roles at JPMorgan Chase, PayPal, and Google before his current position, put the structural problem plainly: "A lot of companies are realizing you don't get good AI in the enterprise without good data. Unfortunately, many of them have data islands everywhere" (BizTech Magazine, March 24). Insurers have the same problem. The data islands are policy documents, underwriting guidelines, and compliance materials — unstructured, distributed, and largely ungoverned.

The evidence problem in AI compliance has been building across every regulated sector. In insurance, it has a specific shape: regulators don't just want proof of what the model did — they want a defensible record of what the model was reading when it did it.

Governing the layer regulators haven't codified yet

The model governance response to explainability mandates is maturing. Audit logs, training data documentation, and decision trails are becoming table stakes for insurers in jurisdictions with active enforcement. That work is necessary, and most carriers are somewhere in the process.

The layer nobody has formally regulated yet — and that every insurer running RAG-grounded AI needs to be thinking about now — is the unstructured document foundation those models read before they act.

Governed knowledge infrastructure for RAG-grounded AI means concrete things. Source attribution on every AI response: not just "the system cited a policy document," but which version of that document, last updated on which date, reviewed and approved by whom. Contradiction detection: active scanning for conflicting policies across the document library — the same conflicts that surface in regulatory adverse-action reviews. An auditable update trail: when the knowledge changed, what changed, and who authorized it.
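As a sketch of what the staleness and contradiction-detection side of that could look like, the snippet below assumes every governed document carries explicit metadata (category, operative rule, review date, approver). The names and fields are hypothetical, and real contradiction detection would compare meaning rather than exact strings; this only shows the kind of check a governed knowledge layer runs.

```python
from dataclasses import dataclass
from datetime import date
from itertools import combinations

# Illustrative metadata for a governed document in the knowledge base.
@dataclass
class GovernedDocument:
    document_id: str
    version: str
    coverage_category: str   # e.g. "water-damage"
    effective_rule: str      # the operative text for that category
    last_reviewed: date
    approved_by: str

def stale_documents(docs: list[GovernedDocument], as_of: date, max_age_days: int = 180):
    """Flag documents whose last review falls outside the allowed window."""
    return [d for d in docs if (as_of - d.last_reviewed).days > max_age_days]

def potential_contradictions(docs: list[GovernedDocument]):
    """Flag document pairs that cover the same category but state different rules."""
    return [
        (a.document_id, b.document_id, a.coverage_category)
        for a, b in combinations(docs, 2)
        if a.coverage_category == b.coverage_category and a.effective_rule != b.effective_rule
    ]
```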

Mojar AI provides this at the knowledge layer — source attribution, contradiction detection, and versioned document updates — for the unstructured policy and compliance foundation that RAG-grounded insurance AI reads before it acts. It doesn't replace model-level governance. It completes the explainability chain at the layer regulators will eventually reach, because the current questions will get more granular before they get less.

The $107 million in Q1 fines is the first wave of serious enforcement. Regulators are learning the questions to ask. The question they haven't codified yet is whether you can trace an AI decision all the way back to the specific document the model retrieved — and prove that document was accurate, current, and properly authorized at the time.

The question every insurance CIO should be asking today

When a regulator asks your compliance team to explain an AI denial, can you trace it back to the specific document the model read — and prove that document was accurate and authorized at the moment the decision was made?

If the answer is "we can show you the model trail but we're less certain about the source documents," that's the gap. Based on where enforcement is heading, that gap is going to get more expensive before it becomes any less visible.

Frequently Asked Questions

What were the $107 million in insurance AI fines in January 2026?

The New York Department of Financial Services issued over $82 million in fines to insurers for AI-related compliance violations in January 2026, primarily related to algorithmic decision-making that carriers couldn't adequately explain to regulators. The same month, Georgia issued $25 million in penalties against 22 carriers for algorithmic parity violations.

Why isn't model-level explainability enough for RAG-grounded insurance AI?

When an AI system denies a claim, regulators ask how the model made that decision. Model-level explainability — training data lineage, decision logic, algorithmic audit trail — covers what the model did. For RAG-grounded AI, there's a second requirement: proving what the model was reading. If the underlying policy documents and underwriting guidelines are stale, contradictory, or unversioned, model-level explainability is incomplete.

How many states have adopted the NAIC Model Bulletin on AI?

As of Q1 2026, only 24 of 50 states have adopted the NAIC Model Bulletin on AI, according to the Quarles adoption tracker. Most modified the text, creating a fragmented compliance landscape with no federal standard. Colorado, Virginia, and New York have each added requirements that go beyond the NAIC baseline in different directions.

What does governed knowledge infrastructure require for RAG-grounded insurance AI?

Insurers running RAG-grounded AI need source attribution on every AI response (which document version was retrieved, when it was last updated, who authorized it), active contradiction detection across the policy document library, and an auditable update trail for every change to the knowledge base. Without these, the model audit trail points to sources you can't defend.

Related Resources

  • Guardrails Aren't Enough. Enterprises Need to Prove What Their AI Saw.
  • In AI Compliance, Speed Is Cheap. Auditable Evidence Is the Product.
  • Courts Are Not Debating AI Anymore. They Are Debating Provenance.