Industry News

Caremark Comes for Your AI

A top-10 law firm just told Fortune 500 boards: AI governance failures are a Caremark liability. Here's the gap every enterprise AI team needs to close.

5 min read • March 12, 2026
AI Governance · Enterprise AI · Corporate Liability · Caremark · Knowledge Management · Board Oversight

The AI was authorized. The tool was internal. The team used it to draft talking points for an earnings call — investor questions, climate risk disclosures, statements about SEC compliance. The output looked research-grade: confident, citation-rich, specific to the regulation in question. It was wrong.

When analysts caught the error, the company faced an SEC inquiry, a share price drop, and shareholder scrutiny. No data breach. No shadow IT to blame. A legitimate internal tool, reading documents nobody had verified, producing authoritative-sounding output that went to regulators.

Eight senior partners at Akin Gump Strauss Hauer & Feld — one of the ten largest law firms in the US by revenue — spent serious time analyzing that scenario. Their conclusion, published March 11, 2026: this is a Caremark violation. And it's not a fringe read of the doctrine.

Caremark doesn't care about your model governance stack

The Caremark doctrine has been corporate governance bedrock since 1996. Under Delaware law — where the vast majority of Fortune 500 companies are incorporated — directors can face personal liability for breach of loyalty if they either utterly failed to implement any reporting system for significant business risks, or consciously failed to monitor a system once it existed.

The bar is deliberately high. Directors aren't liable for every bad outcome. They're liable for systemic failure — for running a company without mechanisms to catch the kinds of errors that will eventually cause serious harm.

Akin Gump's argument is that AI is now material enough to trigger that standard. And they name the specific failure mode with precision: the AI produced wrong information because it was reading unverified, potentially stale or contradictory source documents. "Garbage-in, garbage-out" — their phrasing, not ours.

That's a knowledge layer problem, not a model governance problem.

Every enterprise AI governance framework on the market today — ModelOp, OneTrust, AvePoint, Netskope — addresses the same cluster of concerns: who has access to which AI, which models are deployed, how much it costs, whether outputs are being logged. All legitimate. None of it answers Caremark. We've been watching this blind spot grow for months.

Caremark asks whether the board implemented a system ensuring the AI's source materials were reliable and that outputs could be verified. Nobody in that governance stack is doing that.

What the two-part test actually looks like in practice

The Akin Gump analysis frames it plainly: "even otherwise well-governed companies may lack the minimum reporting systems required for boards of directors to satisfy their duty of oversight" on AI.

The first part of the Caremark test — utterly failing to implement any reporting system — applies if your board has no mechanism to know whether the documents your AI reads are current, consistent, and verified. Most boards don't. They have model risk committees and shadow IT policies. Almost none have document knowledge governance as a distinct board-level function.

The second part — consciously failing to monitor an established system — is trickier. If your AI governance framework excludes knowledge layer oversight, and the board knew the AI was producing compliance-sensitive outputs, monitoring a knowingly incomplete system probably doesn't save you.

This isn't pure hypothetical territory. The SDNY has already sanctioned attorneys for filing AI-generated briefs with hallucinated citations. The FTC has flagged AI chatbots giving consumers factually wrong legal and compliance information — and the enforcement logic there applies equally to internal enterprise deployments. The question Akin Gump forces is: what does a reasonable board-level oversight system for AI actually look like?

Walk backwards from the failure

The company in Akin Gump's scenario got wrong SEC compliance language because the AI synthesized it from documents that hadn't been verified against current regulatory requirements. To prevent that:

Someone — or something — needed to confirm those source documents reflected current SEC rules. When requirements changed, the knowledge base needed to update. If two internal policy documents contradicted each other on disclosure requirements, that conflict needed to be resolved before the AI merged them into an earnings call response. Every output needed to trace back to a specific, verifiable document, so humans could check the work before it went anywhere sensitive.

That's source attribution, contradiction detection, automatic knowledge base maintenance, and verifiable audit trails: the components a Caremark-compliant AI knowledge system would need. They're not features in your access control platform or your model monitoring dashboard. They sit in the document intelligence layer, the part of the stack that almost nobody has treated as a governance requirement yet.
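None of those components is exotic. As a rough sketch of what that layer might track — not anything Akin Gump prescribes, and with every name, field, and threshold here being an illustrative assumption — a knowledge-layer record could carry provenance, review status, and unresolved conflicts per source document, and refuse to hand unverified material to the model:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical sketch of a document-intelligence record that could back
# source attribution, staleness checks, and contradiction detection.
# Field names and thresholds are assumptions, not a published standard.

@dataclass
class SourceDocument:
    doc_id: str
    title: str
    last_verified: date  # when a human last confirmed it against current rules
    conflicts_with: list[str] = field(default_factory=list)  # doc_ids with unresolved contradictions

MAX_AGE = timedelta(days=180)  # assumed review cadence; a real policy would set this

def eligible_for_retrieval(doc: SourceDocument, today: date) -> tuple[bool, str]:
    """Gate a document before the AI is allowed to read it, and record why."""
    if doc.conflicts_with:
        return False, f"{doc.doc_id}: unresolved conflict with {doc.conflicts_with}"
    if today - doc.last_verified > MAX_AGE:
        return False, f"{doc.doc_id}: last verified {doc.last_verified}, past review window"
    return True, f"{doc.doc_id}: verified and current"

# Every answer keeps the doc_ids it drew from, so a reviewer can trace an
# earnings-call statement back to specific, checkable sources.
@dataclass
class AttributedAnswer:
    text: str
    source_doc_ids: list[str]
```

The specific fields don't matter. What matters is that freshness, conflict status, and attribution become checkable properties, the kind a board-level report can actually summarize.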

The architecture exists. The question is whether boards classify it as optional infrastructure or board-level necessity.

The regulatory vacuum makes this harder, not easier

Akin Gump notes that AI regulation "is fractured and lags behind this quickly evolving technology." That sentence deserves more attention than it's getting.

There's no federal AI standard that a board can point to, once met, as a Caremark safe harbor. No audit committee charter template that covers AI knowledge governance. No regulatory floor above which directors can feel confident they've discharged their duty of oversight on this.

That absence doesn't mean directors are off the hook. Under Caremark, it means they have to reason from first principles about what "reasonable" oversight looks like — which, Akin Gump argues, now includes knowing what your AI is reading and whether it can be trusted.

The risk isn't the jailbreak

The risk isn't an employee going rogue with an unauthorized tool. The risk is that your authorized AI, running exactly as designed, reads a compliance document from 18 months ago — never updated, contradicted by two other internal policies nobody reconciled — and produces a statement that lands in an earnings call, an investor briefing, or a regulatory response.

Nobody hacked anything. Nobody bypassed a control. The board just hadn't built a system to ensure the knowledge layer was accurate. Under Caremark, that's the question. And right now, most boards can't answer it.

Related Resources

  • The AI Governance Blind Spot: Knowledge Accuracy Isn't on Anyone's Roadmap
  • When Your AI Compliance Chatbot Is Just Wrong