MCP Is Moving Up the Stack — and Exposing the Knowledge Problem Underneath
MCP is shifting from tool plumbing to a domain knowledge interface for enterprise agents. The protocol standardizes access. It does not standardize truth.
The conversation around Model Context Protocol has shifted fast. Six months ago, MCP was mostly talked about as plumbing — a cleaner way for agents to call tools, retrieve search results, run code. Useful, standardized, unglamorous.
That framing is becoming too narrow. A cluster of product launches and infrastructure moves suggests enterprises aren't just using MCP to connect agents to tools. They're using it to expose domain knowledge. Clinical drug databases. Proprietary research corpora. Governed enterprise records. The protocol is the same; what's being served through it is something different.
The distinction matters, because a protocol that standardizes access to knowledge is not the same as one that makes knowledge trustworthy.
From tool plumbing to knowledge interface
The clearest signal is FDB's MedProof MCP, which reached general availability in March 2026. FDB is the clinical drug intelligence company behind the medication databases baked into Epic, Oracle Health, and most major EHR platforms. MedProof MCP is an MCP server built specifically to ground AI agents in continuously updated, clinical-grade drug data — interactions, dosing, contraindications — without requiring developers to build custom integrations to legacy APIs.
The framing in the launch is worth noting. FDB isn't describing this as a convenience tool. It's described as a grounding mechanism. The product exists to solve a specific problem: AI agents that handle medication logic need to read from a source that is current, authoritative, and structured for clinical use. Artera, a patient communications platform serving more than 100 million patients, is already using it to expand AI agent capabilities across major EHR systems.
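Under the hood, a domain MCP server like this speaks plain JSON-RPC 2.0, the wire format the MCP specification uses, with `tools/call` as the standard method for invoking a server-exposed tool. The sketch below shows roughly what a drug-interaction lookup could look like on the wire; the tool name, argument schema, and response fields are hypothetical illustrations, not FDB's actual interface:

```python
import json

# Hypothetical request an agent runtime would send to a clinical MCP
# server. "tools/call" is the standard MCP method for tool invocation;
# the tool name and arguments below are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "check_drug_interactions",  # hypothetical tool name
        "arguments": {"drugs": ["warfarin", "ibuprofen"]},
    },
}

# Hypothetical response. The point of a *grounded* source is in fields
# like "source" and "as_of": the agent's answer is traceable to
# versioned, dated clinical data rather than model memory.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{
            "type": "text",
            "text": json.dumps({
                "severity": "major",
                "summary": "Increased bleeding risk",
                "source": "monograph-1234",  # illustrative identifier
                "as_of": "2026-03-01",
            }),
        }],
    },
}

payload = json.loads(response["result"]["content"][0]["text"])
print(payload["severity"], payload["as_of"])
```

The message envelope is standard; everything that makes the answer clinically useful lives in the payload the server chooses to return.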
Healthcare vendors don't build regulated clinical products speculatively. The fact that a company like FDB piloted this in late 2025 and shipped GA in early 2026 means real customers tested it. That's a signal worth taking seriously.
Databricks makes a different but related move with AiChemy, a multi-agent system for drug discovery that uses MCP to combine external scientific knowledge sources — PubMed, PubChem, OpenTargets — with proprietary enterprise data on Databricks. The value isn't MCP as an abstraction layer. It's MCP as the interface through which two distinct knowledge worlds, public databases and internal research data, get queried together in a way where the agent's findings are traceable to verifiable sources.
The pattern runs through the builder community too. Developers are using MCP as the connective tissue for persistent memory systems, code graphs, and GraphRAG-style knowledge pipelines — architectures where the goal isn't just "can the agent reach a tool" but "can the agent reason over a structured knowledge context that I control."
Why this is bigger than tool calling
Standard tool-calling MCP usage looks like: agent needs to check the weather, call a search API, run a database query. The agent calls a tool, gets a result, continues reasoning. The knowledge is transient. Nobody is particularly worried about whether the weather API is internally consistent.
Domain knowledge MCP usage is structurally different. When FDB wraps its clinical database in an MCP server, the agent isn't calling a tool in the transient sense. It's reading from a knowledge source that is supposed to be authoritative — one that the agent and the humans overseeing it are expected to trust. The stakes of that knowledge being wrong, stale, or contradictory are measured in patient outcomes and regulatory exposure.
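The structural difference can be made concrete. Both interactions travel over the same protocol, but a knowledge read carries provenance the caller is expected to check before acting. A minimal sketch, with all field names and the enforcement rule assumed for illustration:

```python
# Transient tool result: consumed once, nobody audits it later.
# Stand-in for the payload of an MCP tools/call round trip.
weather = {"city": "Berlin", "temp_c": 14}

# Knowledge read: the payload itself carries provenance, because
# decisions made from it must be auditable. Fields are illustrative.
knowledge = {
    "text": "Contraindicated with MAO inhibitors.",
    "source": "clinical-monograph-v42",
    "last_reviewed": "2026-02-15",
}

def actionable(record: dict) -> bool:
    """A simple rule an agent runtime could enforce: never act on a
    knowledge read that lacks provenance metadata."""
    return "source" in record and "last_reviewed" in record

print(actionable(weather), actionable(knowledge))
```

The weather payload fails the check not because it is wrong but because nothing about it can be audited later; the knowledge payload passes because its claims are traceable.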
The Highflame and Tailscale partnership illustrates how fast the security layer is following. Their integration routes AI agent traffic — including MCP interactions — through Tailscale's Aperture network gateway for real-time security evaluation, capturing prompts, tool usage, and model outputs without requiring changes to agent code. Security vendors only build production infrastructure around protocols that enterprises are using in production. This kind of tooling doesn't exist for proof-of-concept deployments.
On the enterprise infrastructure side, the push toward formalized MCP registries is a further signal. Teams building registries aren't trying to make it easier to call more APIs. They're trying to govern which knowledge sources agents are allowed to access, under what conditions, with what audit trail.
What the stack shift actually means
The shift from tool plumbing to knowledge interface changes what's hard about running agents in production.
Tool plumbing problems are integration problems. Does the agent know how to call the right endpoint? Does it handle error codes correctly? These are solvable with good engineering.
Knowledge interface problems are governance problems. Is the knowledge the agent is reading current? Does it contradict anything else in the corpus? Who has permission to access which parts of it? When the agent makes a decision based on what it read, is that decision auditable?
Those aren't the same category of problem. And the protocol layer doesn't solve the governance layer. As noted in our earlier analysis of the real trust challenge facing MCP, MCP can make enterprise knowledge reachable without making it reliable. The protocol doesn't know whether the policy document it's surfacing was updated last week or eighteen months ago. It doesn't know whether one document's guidance contradicts another's. It doesn't know whether the employee querying through the agent is authorized to see that information.
A well-designed MCP server for a specific domain — like FDB's clinical drug database, with its continuous update cycle, its sourcing from primary clinical literature, its regulatory-grade data governance — handles a lot of these problems at the source level. That's exactly why it's valuable.
Most enterprise MCP deployments aren't working with purpose-built, continuously maintained, clinically governed data. They're working with SharePoint folders, Confluence wikis, and document collections that haven't been systematically audited in years. The protocol can reach those documents. That doesn't make those documents trustworthy.
The knowledge layer is where this gets decided
The practical consequence for enterprise AI teams is straightforward: the adoption of MCP as a standard agent interface raises the stakes for whatever is sitting underneath it.
If MCP becomes the default way agents access enterprise knowledge, then the quality of that knowledge — its freshness, its internal consistency, its source attribution, its permission controls — becomes a production concern, not a content management concern. An agent that reads confidently from a governed knowledge source and an agent that reads confidently from a stale, contradictory one look identical to the people deploying them. The difference shows up in the answers.
This is where context engineering intersects with knowledge governance. Teams that are thinking carefully about what their agents see are already realizing that the model is usually not the bottleneck. The bottleneck is the state of the knowledge being fed into the model.
For enterprises building on top of RAG platforms like Mojar AI, MCP adoption actually increases the value of a well-maintained knowledge base. When agents access knowledge through a standard interface, the differentiation shifts underneath the protocol: which organizations have knowledge that is current, consistent, auditable, and safe to act on. Source attribution that traces every answer to a specific document. Contradiction detection that flags conflicts before they surface as wrong agent actions. Permission-aware retrieval that respects who should see what.
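Contradiction detection in particular is easy to state and hard to do well. The toy sketch below flags documents that make different claims about the same key fact; real systems compare claims semantically, but exact-match keys are enough to show where the check sits in the pipeline (all document contents are invented):

```python
from collections import defaultdict

def find_contradictions(docs: list) -> list:
    """Flag pairs of documents whose claims about the same key
    disagree, before an agent acts on either. Exact-key matching is
    a deliberate simplification for illustration."""
    claims = defaultdict(set)
    for doc in docs:
        for key, value in doc["claims"].items():
            claims[key].add((doc["id"], value))
    flagged = []
    for key, sources in claims.items():
        if len({value for _, value in sources}) > 1:
            flagged.append((key, sorted(sources)))
    return flagged

docs = [
    {"id": "wiki-a", "claims": {"max_refund_days": "30"}},
    {"id": "wiki-b", "claims": {"max_refund_days": "90"}},  # stale copy
]
print(find_contradictions(docs))
```

The output names both documents and both values, which is exactly what a human reviewer needs: not "something is wrong" but "these two sources disagree, and here is where."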
The MCP layer makes all of this reachable. It doesn't supply any of it.
What to watch
The next wave of domain-specific MCP launches will likely concentrate in healthcare, legal, finance, and industrial operations — verticals where the cost of acting on bad knowledge is high enough that grounding becomes a product category, not a nice-to-have.
Enterprise MCP registries will evolve from access control lists into governance tooling. The interesting question isn't "which tools can agents call" but "which knowledge sources can agents read, under what conditions, with what auditability."
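A registry entry for a knowledge source has to record more than an endpoint. A sketch of what such an entry might hold, where every field is an assumption about what governance-grade registries could track, not a description of any existing product:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeSourceEntry:
    """Hypothetical registry record for one MCP knowledge source."""
    server_url: str
    owner: str                 # team accountable for the content
    allowed_agents: set        # which agents may read from it
    max_staleness_days: int    # freshness contract for its documents
    audit_log: list = field(default_factory=list)

    def authorize(self, agent_id: str) -> bool:
        ok = agent_id in self.allowed_agents
        # Every access decision is recorded, allow or deny.
        self.audit_log.append((agent_id, "allow" if ok else "deny"))
        return ok

entry = KnowledgeSourceEntry(
    server_url="https://mcp.example.internal/hr-policies",  # hypothetical
    owner="hr-knowledge-team",
    allowed_agents={"benefits-assistant"},
    max_staleness_days=90,
)
print(entry.authorize("benefits-assistant"), entry.authorize("shadow-agent"))
```

The audit log is the point: a denied request is as much a governance artifact as a granted one.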
Security coverage of MCP interactions will deepen. Highflame and Tailscale are early. More vendors will build observability and policy enforcement at the MCP layer, because that's where the access surface now lives.
And there will be a widening gap between organizations that expose raw document repositories through MCP and organizations that expose governed knowledge interfaces. Both approaches look the same at the protocol level. The gap shows up in production, in audit logs, and in the quality of decisions agents make on the organization's behalf.
MCP is becoming the standard interface for enterprise agent knowledge. That's a real shift. What it doesn't change is whether the knowledge on the other side of that interface is worth trusting.
The protocol race is about interoperability. The enterprise race will be won by whoever can expose knowledge that is current, consistent, and auditable enough for agents to act on safely.
Frequently Asked Questions
What changed in how enterprises use MCP?
Early MCP adoption treated the protocol as a way for AI agents to call external tools—APIs, databases, search engines. The new pattern packages MCP as the interface through which enterprises expose specific domain knowledge: clinical drug data, proprietary research corpora, governed enterprise records. The protocol stays the same; what's being exposed changes significantly.
Does MCP make the knowledge agents receive trustworthy?
No. MCP standardizes how agents request and receive context. It says nothing about whether that context is current, internally consistent, or source-attributed. An agent that receives stale or contradictory documentation through a polished MCP interface will still act on stale and contradictory documentation.
Why does FDB's MedProof MCP launch matter as an adoption signal?
Healthcare vendors don't ship clinical grounding products speculatively. FDB's MedProof MCP—purpose-built to ground AI agents in continuously updated medication intelligence—reached general availability in March 2026 after a late-2025 pilot. That timeline means real enterprise customers tested and validated it. When regulated industries move this fast, the pattern is real.