Ask. Learn. Improve
Features
Real EstateData CenterMarketing & SalesHealthcareLegal Teams
How it worksBlogPricing
LoginGet a demo
LoginGet a demo

Product

  • AI Agents
  • Workflows
  • Knowledge Base
  • Analytics
  • Integrations
  • Pricing

Solutions

  • Healthcare
  • Legal Teams
  • Real Estate
  • Marketing and Sales
  • Data Centers

Resources

  • Blog

Company

  • About
  • Contact
  • Privacy Policy
  • Terms of Service

©2026. Mojar. All rights reserved.

Built by Overseek.net

Free Trial with No Credit Card Needed. Some features limited or blocked.

©2026. Mojar. All rights reserved.

Built by Overseek.net

Free Trial with No Credit Card Needed. Some features limited or blocked.

← Back to Blog
Industry News

The MCP Layer Has Officially Become an Enterprise Attack Surface

Three major security vendors just shipped MCP-specific scanners. The findings are real, the vulnerabilities are documented, and the enterprise AI stack is changing.

6 min read• April 2, 2026View raw markdown
MCPAI agentsenterprise securityagentic AIknowledge governance

What just changed

Six months ago, MCP security was a blog-post worry. People speculated about prompt injection risks, debated whether tool descriptions could be weaponized, wrote speculative threat models. The general vibe was: this could become a problem.

It's a problem now.

In recent weeks, three separate organizations shipped production-ready scanning tools explicitly targeting MCP servers. Sigil published a static source code analyzer that audited 73 of the most-installed open-source MCP servers on Smithery and found 2 failures with real security-relevant patterns and 5 configuration warnings. Cisco released a dedicated MCP Scanner combining YARA rules, LLM-as-judge evaluation, and the Cisco AI Defense API. Snyk expanded its Agent Scan to cover MCP servers, tools, prompts, resources, and skills as a unified supply-chain surface. When three significant security vendors converge on the same protocol in the same quarter, a category is forming.

Categories don't get scanners unless the risk is structural

Security tooling is not cheap to build or maintain. Nobody ships a dedicated scanner for a theoretical concern. The investment Cisco, Snyk, and Sigil put into MCP-specific tooling reflects a shared read of the threat landscape: this protocol is a real attack surface, it is in production at real enterprises, and existing tooling was not built for it.

That last part is the issue. Existing app security tools scan web APIs and containerized services. They were not built for a protocol where a local subprocess inherits your full user permissions and can read SSH keys, execute shell commands, and write to disk. MCP servers are a different kind of target. Now they have their own scanner category.

The Sigil README makes the gap concrete: studies of MCP implementations found 34% using APIs prone to command injection, 82% using file operations prone to path traversal, and 5.5% with active tool poisoning in their descriptions (Sigil, GitHub). The Smithery audit surfaced specific cases: telegram-mcp was flagged for unrestricted file-path-based file sending; mcp-sqlite-server showed risky SQL execution patterns plus exposed HTTP debug conditions. Sixteen rule classes across seven categories — injection, permissions, data exfiltration, input validation, tool description integrity, authentication, configuration. These are not hypothetical. They're patterns that show up the first time someone actually reads the code with a scanner.

Why MCP changes the threat model

MCP is easy to misunderstand as a routing layer or metadata protocol. It's the bridge between a model and real-world systems. When an agent calls an MCP server, it can access files, query databases, execute commands, send messages, and interact with external APIs. Whatever permissions the host process has, the server inherits. There is no sandbox.

This is what makes the attack surface different from a typical SaaS integration. A weak REST API might leak data. A weak MCP server can expose your filesystem, your secrets manager, your internal network, and every process the user account can touch. Cisco's scanner uses three analysis engines precisely because the threat surface is that wide: YARA for known malicious patterns, LLM evaluation for semantically suspicious behavior, and the AI Defense API for supply-chain risk (Cisco AI Defense, GitHub).

Snyk's framing is similar. They now treat MCP servers, tools, prompts, and resources as one continuous supply chain — the same way they treat npm packages or container images (Snyk Agent Scan, GitHub). That's the right model. The MCP server ecosystem circa early 2026 looks a lot like npm circa 2018: fast-growing, loosely vetted, high trust by default. We know how that went.

The architecture question the security coverage is skipping

Most MCP security coverage will focus on what to patch, which servers to avoid, how to harden your stack. That is the correct short-term answer. But enterprises should also ask what the pressure toward MCP hardening implies for how agents should be accessing knowledge in the first place.

MCP's security problems come partly from its design ambition. It was built to give agents broad, flexible access to tools and systems — wide permissions by default. The security push underway is the market correcting for that. The correction points toward a different architecture: scoped access, auditable retrieval, governed context instead of ad hoc tool sprawl.

The enterprise question is not only "how do we harden our MCP servers?" It's "what should our agents actually be reading, and can we prove it?" For knowledge-heavy workflows, the answer is a retrieval layer with permission-aware access, source attribution, and documented provenance. You can harden the connectors, and you should. But agents still need clean, current, auditable context to act on. A hardened MCP server pointing at stale, contradictory documents doesn't give you much.

This is where platforms like Mojar AI sit in the conversation — not as MCP security vendors, but as the answer to the downstream question MCP security is forcing. When enterprises tighten agent permissions and ask what their agents should be accessing, governed knowledge retrieval with source attribution and active accuracy maintenance is one of the few clean answers. It reduces reliance on broad tool permissions. It makes what the agent read auditable. It keeps the knowledge current so the scoped access is actually useful.

We've written before about why tool trust is becoming an enterprise infrastructure problem and about how MCP registries are shaping the agentic security control plane. The scanner wave is the next chapter in that arc: the audit mechanisms now exist, the findings are real, and enterprises have to decide how to respond.

The shift is structural, not a patch cycle

The AI agent stack is growing up through an uncomfortable phase. "Connect everything" worked when agents were mostly demos. In production, at enterprise scale, it's a security liability. The tooling now exists to audit the MCP layer. The findings confirm the risk. The hardening guidance is being written.

The teams that get through this well are the ones who treat it as a design signal rather than a patching exercise. Build tighter access patterns. Prefer scoped, auditable retrieval over broad permissions. Know what your agents read and why.

"Connect everything" is giving way to "govern everything." The scanners just made that official.

Related Resources

  • →The Real MCP Problem Isn't More Tools — It's Whether You Can Trust Them
  • →Enterprise MCP Registries and the Agentic Security Control Plane
← Back to all posts