©2026. Mojar. All rights reserved.


Industry News

The Shared-Context Race Is Becoming Enterprise AI Infrastructure

Multiple vendors converged this week on the same problem: enterprise AI agents operating from different versions of reality. Here's what's actually being built.

7 min read • March 23, 2026
Enterprise AI · Shared Context · RAG · Knowledge Management · AI Infrastructure

One week. Four announcements. Same problem.

Microsoft expanded Fabric IQ — its semantic intelligence layer — to be accessible via MCP (Model Context Protocol) to any agent from any vendor. ThoughtSpot launched Spotter for Industries, domain-specific agents pre-loaded with sector context. Workday's Sana went worldwide: AI-powered knowledge discovery and work automation built directly into its enterprise management platform. And Valiantys formalized a partnership with Glean to operationalize "Work AI" across enterprise systems of work.

These companies aren't in the same product category. They aren't competing for the same buyers in the same way. But they're all solving a variant of the same underlying problem: enterprise AI agents that operate from conflicting, incomplete, or outdated context about the business they're supposed to serve.

That's not coincidence. It's a signal about where the enterprise AI market is actually heading.

The problem they're all reacting to

For the past two years, enterprise AI adoption looked like a rollout problem. Companies deployed copilots and agents expecting productivity gains. Some got them. Many didn't — and the gap between the demos and the production results has been a running frustration for enterprise tech buyers.

The post-mortems keep pointing at the same culprit. Not the models. Not the integrations. The context.

Different agents built by different teams carry different definitions of what a customer is, what an order means, what a region contains. One agent queries a policy document that was updated last quarter. Another reads the version from 18 months ago. A third operates from structured data that disagrees with both. When those definitions diverge across a deployed set of agents, outputs break down in ways that look like hallucinations but aren't — they're context fragmentation. The model answered correctly based on what it was given. What it was given was wrong.
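The failure mode is easy to show in miniature. A hypothetical sketch — all names, data, and cutoffs here are illustrative, not from any real deployment — of two agents answering the same question from their own hard-coded definition of "active customer" instead of a shared one:

```python
# Hypothetical sketch of context fragmentation: two agents answer the
# same question from different definitions of "active customer".
# All names and data are illustrative.

orders = [
    {"customer": "Acme", "days_since_order": 45},
    {"customer": "Globex", "days_since_order": 200},
]

def active_customers(definition_days):
    """Each team hard-coded its own cutoff for 'active'."""
    return {o["customer"] for o in orders if o["days_since_order"] <= definition_days}

sales_agent_view = active_customers(90)     # sales: ordered in the last 90 days
finance_agent_view = active_customers(365)  # finance: ordered in the last year

# Both answers are "correct" under their own definition -- and they disagree.
print(sales_agent_view)    # {'Acme'}
print(finance_agent_view)  # {'Acme', 'Globex'}
```

Neither agent is hallucinating; each faithfully applies the definition it was given. The divergence only disappears when both resolve "active customer" against the same shared definition.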

Microsoft's Fabric CTO Amir Netz described it with a film reference: "It's a little bit like the girl from 50 First Dates. Every morning they wake up and they forget everything and you have to explain it again" (VentureBeat). Every agent, every session, re-explained the business from scratch — or not, and left to its own interpretation.

That's the shared-context problem. As of this week, it has vendor attention across multiple segments of the enterprise software stack.

Why this is bigger than Microsoft

Microsoft is getting the most coverage because of scale. But the more interesting observation isn't what Microsoft is doing — it's that companies with completely different customer bases reached the same conclusion in the same week.

ThoughtSpot argues that generic AI hits a ceiling when it lacks sector-specific context. Their Spotter for Industries release pre-loads agents with industry terminology, regulations, and data patterns so that analytics outputs are "deterministic" — consistent and repeatable across queries (Business Insider). The explicit framing: enterprises are discovering the limits of general-purpose AI and want something literate in their specific domain.

Workday's Sana launch tells the same story from the HR and finance angle. Workday acquired Sana Labs for $1.1 billion in 2025 to build AI-powered knowledge discovery into its platform. The pitch: "AI only works in the enterprise when it's connected to trusted, deterministic systems" (SiliconANGLE). Not a chat interface layered on top of scattered documents — actual business context that produces repeatable results.

Glean's partnership with Valiantys extends the pattern into systems-of-work. The language from that release: enterprises need AI that is "secure, governed, and integrated into the systems that already run their business" (Business Insider). Work AI needs a context foundation before it can do actual work.

The diagnosis is the same across all four announcements. Four different vendors, four different enterprise segments, one week. That's a market acknowledging a shared problem.

The three layers

What these announcements collectively describe is a three-layer view of enterprise context that AI systems need to function reliably in production.

Structured semantic context — shared definitions of business entities: customer, order, product, region, policy. This is what Fabric IQ addresses. A business ontology accessible by any agent, regardless of vendor. Without this, two agents can both be technically "right" about a customer and still disagree on the answer, because they're using different definitions of what that customer record means.

Permissioned institutional knowledge — the organization's accumulated documentation: policies, procedures, product specifications, compliance requirements. This is where RAG lives. The problem with RAG alone is that it retrieves; it doesn't maintain. Pull an outdated policy document and the retrieved answer is confidently wrong. The agent isn't hallucinating — it's reading something that was accurate 14 months ago.

Live operational state — real-time signals from connected systems: current inventory, active tickets, a manager approval that came in at 9 AM. This is what Workday's Sana targets with its workflow automation layer. The difference between knowing the HR policy and knowing that this specific request is currently blocked.
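In code terms, the three layers amount to three distinct context sources an agent must merge before answering. A minimal sketch under illustrative assumptions — the structure, field names, and example data are hypothetical, not any vendor's actual API:

```python
# Hypothetical sketch: assembling agent context from the three layers
# described above. Field names and data are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AgentContext:
    semantic: dict = field(default_factory=dict)     # shared entity definitions
    knowledge: list = field(default_factory=list)    # retrieved documents
    operational: dict = field(default_factory=dict)  # live system state

def build_context(question: str) -> AgentContext:
    return AgentContext(
        # Layer 1: structured semantic context (what "customer" means)
        semantic={"customer": "party with a signed contract and active account"},
        # Layer 2: permissioned institutional knowledge (policy documents)
        knowledge=[{"doc": "refund-policy-v3", "last_reviewed": "2026-01-10"}],
        # Layer 3: live operational state (this specific request, right now)
        operational={"request_status": "blocked", "blocked_by": "manager approval"},
    )

ctx = build_context("Can I refund order #123?")
# Layers 1 and 2 tell the agent what the policy is;
# only layer 3 tells it this specific request is currently blocked.
print(ctx.operational["request_status"])  # blocked
```

An agent with only the first two layers can recite policy correctly while giving the wrong answer about this request, which is exactly the gap the operational layer closes.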

Enterprises that are stalling on AI deployment usually have at least one of these three layers missing. In practice, the most common gap is the second one. Structured data has databases. Operational state has integrations. Documentation — the layer that codifies how the organization actually operates — is the one that gets updated ad hoc, falls out of sync, and fails silently.

The part of the stack that's still largely unmanaged

Semantic layers and real-time operational data have vendors now. Microsoft is investing heavily in the structured context problem. Workday and others are covering live operational state. The middle layer — permissioned institutional knowledge — remains the most labor-intensive and the least visible when it breaks.

Most enterprises have knowledge bases. Few have maintained ones. Policies get updated in one system and not another. Procedures get revised and the old version stays live. Two documents contradict each other and nobody knows which is current. Agents reading that knowledge base don't hallucinate — they retrieve accurately. They just retrieve something that stopped being true last quarter.

RAG is necessary but not sufficient. Retrieval solves the access problem. It doesn't solve the accuracy problem. In a multi-agent enterprise where five different systems query the same knowledge base, a stale document doesn't produce one wrong answer — it seeds the same error across the entire deployment. Scale makes accuracy failures worse, not better.

The companies building semantic layers are assuming the source documents beneath them are accurate. That assumption is almost universally wrong. What turns a RAG implementation into production-ready infrastructure is a governed document layer underneath: source attribution on every retrieved answer, contradiction detection across documents, scheduled audits, and a mechanism for updating content when the business changes. That's not a differentiation feature. It's what makes the semantic layer above it worth trusting.
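What that governance layer actually checks can be sketched simply. A hypothetical minimal version, assuming each document carries a topic, a stated value, and a last-review date — the schema and the one-year audit window are illustrative choices, not a standard:

```python
# Hypothetical sketch of a governed document layer: contradiction
# detection plus staleness auditing over a document inventory.
# Field names, data, and thresholds are illustrative assumptions.
from datetime import date

docs = [
    {"id": "hr-001", "topic": "remote-work-days", "value": "3 per week",
     "last_reviewed": date(2026, 2, 1)},
    {"id": "hr-014", "topic": "remote-work-days", "value": "2 per week",
     "last_reviewed": date(2024, 11, 5)},
]

def find_contradictions(docs):
    """Flag topics where live documents state different values."""
    by_topic = {}
    for d in docs:
        by_topic.setdefault(d["topic"], set()).add(d["value"])
    return [t for t, values in by_topic.items() if len(values) > 1]

def find_stale(docs, today, max_age_days=365):
    """Flag documents not reviewed within the audit window."""
    return [d["id"] for d in docs if (today - d["last_reviewed"]).days > max_age_days]

print(find_contradictions(docs))            # ['remote-work-days']
print(find_stale(docs, date(2026, 3, 23)))  # ['hr-014']
```

Without checks like these, both documents stay live, retrieval picks one of them, and the answer is confidently wrong with no signal that anything failed.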

The real enterprise AI moat isn't the model — it's a governed source of truth. And most enterprises don't have one yet.

At Mojar AI, the knowledge management agent does exactly this: it scans for contradictions across documents, handles natural-language updates to the knowledge base, investigates the source of incorrect answers when users flag them, and keeps the retrieval layer accurate as the business changes. That's the maintenance layer that turns a document repository into something agents can actually trust.

What to watch

Watch whether "system of context," "semantic layer," and "deterministic AI" language stabilizes into a category with a clear name and buyer evaluation criteria. Right now it's scattered across product announcements that don't reference each other. When analysts and procurement teams start grouping them together, the category will accelerate.

More specifically: whether enterprise agent platforms that are consolidating start treating knowledge governance as a prerequisite for production deployment — the way enterprise buyers now require SSO before signing a contract. Context quality as a procurement standard would move the entire knowledge management category from nice-to-have to infrastructure.

This week suggests the vendors are already there. The buyers may need another quarter or two.

Frequently Asked Questions

What is the shared-context problem in enterprise AI?

Enterprise AI agents built by different teams often carry conflicting definitions of the same business entities — what a customer is, what a region contains, what a current policy says. When those definitions diverge across a deployed set of agents, outputs break down in ways that look like hallucinations but trace back to fragmented context, not model failure.

What is Microsoft Fabric IQ?

Fabric IQ is Microsoft's semantic intelligence layer for enterprise AI. It provides a shared business ontology — definitions of business entities like customers, orders, and regions — accessible via MCP to any agent from any vendor. Making it MCP-accessible turns it from a Microsoft-only feature into shared infrastructure for multi-vendor agent deployments.

Is RAG alone enough to solve the enterprise context problem?

No. RAG handles document retrieval, but retrieval quality depends entirely on source accuracy. If the underlying documents are stale, contradictory, or ungoverned, agents retrieve wrong answers confidently. RAG needs a maintained, source-grounded knowledge base beneath it — with contradiction detection, attribution, and scheduled audits — to work reliably in production.

What are the three layers of enterprise context?

Structured semantic context (shared entity definitions across agents), permissioned institutional knowledge (governed documents: policies, procedures, specs), and live operational state (real-time signals from connected systems). Enterprises hitting a wall with AI usually have at least one of these missing. Most often, it's the middle one — the document layer — because it's the hardest to maintain and the least visible when it breaks.

Related Resources

  • Enterprise AI Doesn't Have a Model Problem. It Has a Shared Reality Problem.
  • Enterprise Agent Platforms Are Consolidating. The Knowledge Layer Is Becoming the Bottleneck.
  • The Real Enterprise AI Moat Is a Governed Source of Truth