
©2026. Mojar. All rights reserved.

Built by Overseek.net

Free Trial with No Credit Card Needed. Some features limited or blocked.


Industry News

Enterprise AI Doesn't Just Need Agents. It Needs a Control Plane for Them.

Sycamore's $65M seed round is the clearest signal yet that enterprise agent governance is hardening into its own infrastructure category — and it's still missing a layer.

6 min read • April 3, 2026

AI Agents · Enterprise AI · Agent Governance · Control Plane · Knowledge Governance · Agentic AI

When a $65M seed round signals a category shift

On March 30, Sycamore Labs announced a $65 million seed round to build what it calls a "trusted agent operating system" for the enterprise, according to SiliconANGLE. The backers include Coatue and Lightspeed Venture Partners, with angels from Databricks, OpenAI, and Palo Alto Networks. That's not a niche bet. That's the enterprise AI establishment putting serious money behind one thesis: the agent stack needs governance infrastructure, and nobody's built it yet.

The timing is hard to ignore. This landed the same week Kilo launched around the shadow AI agents problem. Microsoft, Cisco, Laminar, Monte Carlo, and Singulr AI are all shipping products with similar vocabulary: governance, observability, policy enforcement, trust. The exact words vary. The underlying pattern is the same.

When multiple well-funded teams converge on the same problem in the same short window, it usually means the problem just became undeniable.

Infrastructure formation, not feature expansion

The agent narrative spent 2024 on capability — what can they do, how far can they reason. In 2025, the story shifted to deployment: agents landing in real workflows, touching real systems.

Now enterprises are dealing with the operational fallout. They have agents handling procurement lookups, IT ticket triage, contract review, customer responses. Some went in with proper oversight. A lot didn't. More are running in corners of the org that IT has never audited.

Shadow AI agents aren't a theoretical concern anymore. They're accumulating.

That's what makes this a real category formation rather than a feature race. Enterprise IT could afford to ignore AI demos. It cannot ignore agents that are autonomously touching Salesforce, filing documents, and emailing customers. Those require inventory, approval controls, policy enforcement, and an audit trail. That's not an AI feature. That's infrastructure.

What's actually changing in the stack

The enterprise AI stack has always had two obvious layers: agents and automations on top, doing the work; underlying systems at the bottom — documents, APIs, databases, identity directories — providing data and access.

A new middle layer is forming: discovery (what agents exist and what can they reach?), policy enforcement (what are they permitted to do?), trust escalation (how much autonomy have they earned?), observability (what did they actually do?), and audit trails that hold up in a compliance review.
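The middle layer's core loop can be sketched in a few lines. This is an illustrative sketch, not any vendor's actual design: the class and field names (`AgentRecord`, `ControlPlane`, `authorize`) are invented for this example. The key move is that every authorization decision, allowed or denied, lands in the audit trail, and unregistered agents — the shadow agents — are denied by default and surfaced for inventory.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str
    allowed_actions: set[str]      # policy enforcement: what may it do?
    reachable_systems: set[str]    # discovery: what can it reach?

@dataclass
class ControlPlane:
    registry: dict[str, AgentRecord] = field(default_factory=dict)
    audit_log: list[dict] = field(default_factory=list)

    def authorize(self, agent_id: str, action: str, system: str) -> bool:
        record = self.registry.get(agent_id)
        # Unknown agents are shadow agents: deny, but log for inventory review.
        allowed = (
            record is not None
            and action in record.allowed_actions
            and system in record.reachable_systems
        )
        # Observability + audit: every decision is recorded, either way.
        self.audit_log.append(
            {"agent": agent_id, "action": action, "system": system, "allowed": allowed}
        )
        return allowed
```

A real implementation would pull the registry from an identity directory and enforce at the network or tool-call boundary rather than in-process, but the shape — registry, policy check, unconditional logging — is the layer being described.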

Sycamore's approach — a tiered trust model where agents earn autonomy by demonstrating reliable behavior under monitoring — is a reasonable first take. Enterprises can't grant full autonomy to every newly deployed agent on day one. Some mechanism for progressive trust makes sense, practically speaking.
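One plausible shape for such a model — to be clear, a hypothetical sketch, not Sycamore's actual implementation — is a ledger that promotes an agent a tier after a long run of monitored actions with no policy violations, and demotes it on any violation:

```python
TIERS = ["supervised", "semi_autonomous", "autonomous"]
PROMOTION_THRESHOLD = 100  # clean actions required to move up a tier (illustrative)

class TrustLedger:
    def __init__(self):
        self.tier_index = 0    # every agent starts fully supervised
        self.clean_streak = 0

    @property
    def tier(self) -> str:
        return TIERS[self.tier_index]

    def record(self, violated_policy: bool) -> None:
        if violated_policy:
            # Any violation resets the streak and drops the agent a tier.
            self.clean_streak = 0
            self.tier_index = max(0, self.tier_index - 1)
        else:
            self.clean_streak += 1
            if (self.clean_streak >= PROMOTION_THRESHOLD
                    and self.tier_index < len(TIERS) - 1):
                self.tier_index += 1
                self.clean_streak = 0
```

The asymmetry is the point: trust accrues slowly under observation and is lost immediately on a violation, which mirrors how enterprises already handle human access escalation.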

The MCP ecosystem is sharpening this urgency. Once agents connect to external servers, APIs, and tools through standardized protocols, the callable surface area expands quickly. An agent that previously read internal documents can now, in principle, send emails, query customer records, and file support tickets. Governance that was optional for read-only agents becomes necessary for agents that can act.

Not AI security rebranded

Vendors are reaching for "Zero Trust for AI agents" as convenient shorthand. The framing isn't entirely wrong, but it undersells how different the problem is.

Classical security asks: is this entity who it claims to be, and does it have valid credentials? Enterprise agent governance asks something more operational: given that this agent has approved credentials and scope, are the actions it's taking consistent with company policy, and can we reconstruct every step after the fact?

Different question. Authentication is table stakes. The hard part isn't getting agents credentialed — it's constraining what they do once they're in, and maintaining evidence that they stayed within bounds.
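"Evidence that they stayed within bounds" has a concrete technical meaning: the log itself must be tamper-evident. A minimal way to get that — again, a sketch under assumptions, not any vendor's product — is a hash chain, where each entry commits to the one before it, so any after-the-fact edit breaks verification:

```python
import hashlib
import json

def append_entry(log: list[dict], agent: str, action: str) -> None:
    """Append an entry whose hash covers its content and the previous hash."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    body = {"agent": agent, "action": action, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any edited or reordered entry breaks the chain."""
    prev_hash = "genesis"
    for entry in log:
        body = {"agent": entry["agent"], "action": entry["action"], "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True
```

Production systems would add signing and durable storage, but the property that matters for a compliance review is exactly this one: the record of what agents did cannot be quietly rewritten.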

Lumping governance into "AI security" will produce bad purchasing decisions. Security vendors will add governance features; governance vendors will add security integrations. The operational logic differs, the buyers have different concerns, and the tooling will diverge. Treating them as the same category now is a mistake enterprises will clean up later.

The layer the governance stack is still missing

Here's where the story gets uncomfortable.

A well-governed agent is one whose permissions you understand, whose actions you can audit, and whose scope you can constrain. That's the control plane. It's genuinely necessary.

It doesn't tell you whether what the agent acted on was actually true.

An agent can carry a perfect audit log and still file the wrong insurance code because the procedure guideline it retrieved was superseded six months ago. It can send a customer the wrong return policy because two documents in the knowledge base contradict each other and the retrieval system picked the older one. You can govern an agent's permissions down to the action level and still get bad outcomes — just bad outcomes with an audit trail attached.
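The superseded-document failure is mechanical enough to show directly. Here is a hedged sketch of the kind of knowledge-layer guard the article argues is missing — the field names (`doc_id`, `updated`, `supersedes`) are invented for illustration: before acting, filter out anything a newer document supersedes, and break remaining ties by recency rather than by retrieval score.

```python
from datetime import date
from typing import Optional

def effective_document(candidates: list[dict]) -> Optional[dict]:
    """Given documents retrieved for the same topic, return the one an
    agent should act on: the newest version that nothing supersedes."""
    superseded = {doc["supersedes"] for doc in candidates if doc.get("supersedes")}
    live = [doc for doc in candidates if doc["doc_id"] not in superseded]
    if not live:
        return None
    # If several live documents remain, prefer the most recently updated.
    return max(live, key=lambda doc: doc["updated"])
```

A retrieval system without this metadata has no way to know that two contradictory policies aren't equally valid answers — which is precisely the gap between a governed agent and a governed knowledge base.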

This is the blind spot most governance vendors aren't touching: policy enforcement and observability sit above the quality of the knowledge layer. We covered the observability version of this gap last week — execution traces tell you what an agent did, not whether the information it used was right.

The enterprise agent stack needs both layers. A control plane for what agents can do. A governed knowledge layer for what information they act on. One without the other is a half-solution — acceptable for demos, not for anything consequential.

Infrastructure always fragments into layers

It happened with databases, application servers, and API gateways. It will happen here too.

Agent governance is crystallizing into a distinct layer. The Sycamore round is the clearest single signal, but Kilo, Laminar, Singulr AI, and the enterprise plays from Microsoft and Cisco are all pointing the same direction. Whether any specific vendor takes the category is unknowable right now. That the category is forming is not.

The layer below it — knowledge governance, the infrastructure that determines whether what agents retrieve is accurate, attributed, and current — is still largely unaddressed. Most enterprises haven't thought about it seriously, or have assumed existing document storage handles it.

It doesn't.

Control plane plus governed knowledge layer. Enterprises building both will have agents they can actually trust. Enterprises building only one will keep wondering why the audit log isn't fixing anything.

Related Resources

  • The AI Agent Governance Blind Spot: Knowledge Accuracy
  • Agent Observability Is Becoming Infrastructure. Traces Alone Still Won't Fix Bad Knowledge
  • AI Agents Passed Authentication. Now Enterprises Have a Post-Auth Control Problem.