Contact
Privacy Policy
Terms of Service

©2026. Mojar. All rights reserved.

Free Trial with No Credit Card Needed. Some features limited or blocked.


Industry News

The Anthropic Ban Asks Every CIO a Question They Haven't Answered

The Pentagon's Anthropic ban is being covered as a vendor risk story. It's actually a knowledge continuity story — and every enterprise running AI should take note.

5 min read • March 11, 2026
Enterprise AI · AI Vendor Risk · Knowledge Management · RAG · AI Governance

On March 11, an internal Pentagon memo ordered military commanders to actively remove Anthropic from key systems. Six months earlier, Anthropic had the $200M DOD contract and the distinction of being the first AI deployed in classified Defense networks. Now it carries a national security supply-chain risk designation — the same category historically reserved for Huawei and ZTE. The politics are loud. The question that CIOs should be sitting with is quieter: if this happened to us tomorrow, what would break?

The wrong question is getting all the attention

Most enterprise conversation around this story is about vendor risk. Which AI providers are safe to work with? Which ones have government contracts? Which ones could get blacklisted next?

That's a real concern, but it misses the deeper problem.

Vendor risk is about whether you can continue working with a provider. Knowledge continuity risk is about what happens to the institutional intelligence you've built on top of them. They are not the same thing, and only one of them is being discussed anywhere.

Puneet Bhatnagar, former IAM lead at Blackstone, put the right question on the table in GovInfoSecurity: "If we turned this AI system off tomorrow, what would break?" His follow-up was sharper: "It's not just a vendor that is cut off overnight. It's a loss of delegated authority. And AI-based infrastructure is acting as the authority, often on behalf of humans."

The Pentagon built workflows, integrations, and knowledge systems on top of Claude. They went from signed contract to active removal order in six months. This is not a warning about what could happen. It's a description of what just happened to the most security-conscious IT organization on earth. According to CNBC, experts are already worried about the operational fallout of a rapid migration at this scale.

Two architectures, two very different outcomes

There's a fork in the road when organizations deploy AI over their documents. The choice made there determines what happens when a vendor relationship ends abruptly.

Vendor-embedded knowledge: Documents live in a proprietary system. Queries run through the vendor's model. Summaries, extractions, structured answers — all generated and stored in formats tied to that vendor's pipeline. If the vendor disappears, the retrieval layer disappears with it. You still have the raw files. But the indexed knowledge, the queries employees have embedded in their daily workflows, the institutional intelligence built on top of those documents — that's gone. Forrester's Alla Valente described the recovery process accurately: "You need to map out all of the use cases, all of the systems, all of the workflows and all of the decision-making. You don't just rip and replace. There is no big red easy button." (GovInfoSecurity)

RAG-native, vendor-agnostic knowledge: Documents are indexed in a vector database your organization owns. Embedding and retrieval are separate from the model. When the underlying model changes — Claude removed, OpenAI positioned as the replacement — the knowledge base stays intact. The model swaps out. The institutional intelligence doesn't move with it.
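The separation is easy to see in code. Here's a minimal, illustrative sketch of the vendor-agnostic shape: the document store, embeddings, and retrieval live in infrastructure the organization controls, while the generation model is injected as a plain callable. Everything here — the toy hashing embedding, the `KnowledgeBase` class, the `answer_with` helper — is a hypothetical example, not Mojar's actual API or any specific vendor's:

```python
# Sketch of a vendor-agnostic RAG layer. The embedding store and retrieval
# logic belong to the organization; the generation model is a swappable
# callable. All names here are illustrative, not a real product API.
import math
from typing import Callable

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy deterministic embedding via character-trigram hashing.
    A real deployment would use an embedding model you can re-run locally."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class KnowledgeBase:
    """Documents and embeddings live here, independent of any model vendor."""
    def __init__(self) -> None:
        self.docs: list[tuple[str, list[float]]] = []

    def index(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        q = embed(query)
        # Rank by cosine similarity (vectors are already normalized).
        ranked = sorted(self.docs,
                        key=lambda d: -sum(a * b for a, b in zip(q, d[1])))
        return [text for text, _ in ranked[:k]]

def answer_with(model: Callable[[str], str], kb: KnowledgeBase, query: str) -> str:
    """Retrieval happens in infrastructure you own; only generation is vendored."""
    context = "\n".join(kb.retrieve(query))
    return model(f"Context:\n{context}\n\nQuestion: {query}")

kb = KnowledgeBase()
kb.index("Severance policy: employees receive two weeks per year served.")
kb.index("Travel policy: economy class for flights under six hours.")

# A vendor ban changes only the callable; kb is untouched.
vendor_a = lambda prompt: "[vendor A] " + prompt.splitlines()[1]
vendor_b = lambda prompt: "[vendor B] " + prompt.splitlines()[1]
print(answer_with(vendor_a, kb, "What is the severance policy?"))
print(answer_with(vendor_b, kb, "What is the severance policy?"))
```

The point of the sketch is the last four lines: swapping `vendor_a` for `vendor_b` touches nothing in `KnowledgeBase`. In the vendor-embedded architecture, there is no equivalent of that swap, because the index and the model live behind the same proprietary boundary.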

The difference is not theoretical. Amazon found out this month that AI systems operating on incomplete or unreliable knowledge bases cause real operational failures. The Anthropic situation adds a new dimension: the knowledge continuity problem doesn't require a technical failure to surface. A political decision six floors up is enough to trigger it.

For most organizations, the architecture choice was made without this scenario in mind. Most defaulted to the vendor-embedded option because it was faster to stand up. The Pentagon's situation is a good reason to audit that choice now, before someone else makes the decision for you.

What this means for how you build

This is the conversation where architecture decisions either age well or don't.

The case for separating the knowledge layer from the model layer has always been about accuracy and governance. When AI agents run on uncurated or poorly managed knowledge, the downstream problems compound as you scale. The Anthropic ban adds a third reason that's harder to dismiss: vendor resilience.

If your documents are indexed in infrastructure you own, your embeddings are yours, and the model is a swappable dependency rather than the system of record, a vendor ban is an operational inconvenience. You migrate the query layer. The knowledge stays. The people who depend on those AI systems lose hours, not months.
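What "the model is a swappable dependency" looks like in practice is mundane: the knowledge snapshot is serialized in an open format you control, and the vendor is one value in a config. A minimal sketch, with made-up config keys and file names (nothing here is a real product's schema):

```python
# Sketch: persisting the knowledge layer independently of the model vendor.
# If documents and embeddings are serialized in an open format you own,
# a vendor ban means re-pointing one config value, not re-indexing.
import json
import os
import tempfile

config = {"generation_model": "vendor-a/large"}  # the swappable dependency

# The organization's knowledge snapshot: open format, owned storage.
knowledge = [
    {"doc": "Q3 incident runbook", "embedding": [0.12, 0.80, 0.33]},
    {"doc": "Procurement checklist", "embedding": [0.55, 0.10, 0.91]},
]

path = os.path.join(tempfile.gettempdir(), "kb_snapshot.json")
with open(path, "w") as f:
    json.dump(knowledge, f)

# The ban memo lands: only the model reference changes.
config["generation_model"] = "vendor-b/large"

with open(path) as f:
    restored = json.load(f)

assert restored == knowledge  # the institutional knowledge survived the swap
```

The query layer migration is real work; re-deriving years of indexed institutional knowledge is the part measured in months, and it's the part this layout avoids.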

Mojar's architecture is built this way by design. The knowledge base — the documents, the embeddings, the retrieval layer — is yours. The model that generates the response is a configurable dependency. When the Pentagon scrambles to move from Anthropic to OpenAI, the effort is enormous because the knowledge was embedded in the vendor's system. That's not a forced connection to Mojar. It's a description of how the architecture problem could have been avoided.

The Anthropic lawsuit may succeed or fail. Microsoft's court support may move the TRO. The courts will decide what they decide. None of that changes the question Bhatnagar raised. Every organization running AI over its institutional documents should be able to answer it today — and the answer should be: "we'd lose the model, but not the knowledge." If that answer isn't available, the architecture has a problem worth fixing before the next memo lands.

Frequently Asked Questions

What is knowledge continuity risk?

Knowledge continuity risk is what happens to an organization's institutional intelligence when the AI vendor processing its documents disappears. If your knowledge layer is embedded in a vendor-specific system, losing that vendor means losing structured retrieval, indexed workflows, and the intelligence built on top of raw documents — even if you still have the underlying files.

How does a RAG-native architecture protect against a vendor ban?

A RAG-native architecture separates the knowledge layer from the model layer. Documents and embeddings live in a vector database the organization owns. If you swap the underlying model — Claude to OpenAI, or an open model — the knowledge base stays intact. The vendor changes; the institutional intelligence doesn't move with it.

What question should CIOs be asking right now?

Bhatnagar, a former Blackstone IAM lead, told GovInfoSecurity: "If we turned this AI system off tomorrow, what would break?" His point: the real risk isn't just losing vendor access — it's losing delegated authority that AI-based infrastructure holds on behalf of humans inside your organization.

Related Resources

  • Amazon's AI Outage Crisis Isn't an AI Problem — It's a Knowledge Problem
  • Your AI Agents Have a Credentials Problem — And That's Only Half of It
  • After March 11, Your AI Chatbot's Wrong Answers Might Be a Federal Compliance Problem