Industry News

Enterprise AI Agents Are Scaling Faster Than Governance — And Security Incidents Are Only Half the Story

88% of enterprises have already had an AI agent security incident. But the failures nobody measures — agents acting on stale, contradictory documents — don't show up in any incident log.

6 min read • March 18, 2026
AI Agents · Enterprise AI · AI Governance · Knowledge Management · RAG · AI Security

88% is not a rounding error

88% of organizations confirmed or suspected an AI agent security incident in the past twelve months, according to a Gravitee report released this week. In healthcare, the number is 92.7%.

These are not projections about what might happen when AI agents scale. This is what happened last year, in organizations that already deployed them.

The security press is covering this as a monitoring and identity problem. That framing is not wrong. But it's incomplete. The visible incidents represent one failure mode. There's a second one that generates no alerts, leaves no trace in the incident log, and may be running in your environment right now.

The deployment picture

Start with the scale. Gravitee found 3 million AI agents deployed across large US and UK firms. Nearly 1.5 million run without active monitoring or security controls. Only 14.4% went live with full security and IT approval before deployment.

The math behind that last number deserves attention. If fewer than one in seven agents cleared a proper governance process before going live, the other six-plus were deployed by someone who either didn't know the process existed, didn't think it applied, or decided speed mattered more. Probably all three, depending on the team.

DataDome observed 7.9 billion AI agent requests in January and February 2026 combined. Two months. Whatever assumptions your organization was making about AI agent usage being experimental or limited, those assumptions are outdated.

And deployment is still accelerating. Alibaba launched an agentic AI tool for businesses with Slack and Teams integration this week. Baidu announced its own enterprise agent push. Nvidia launched an enterprise AI agent platform with Adobe, Salesforce, and SAP embedded. The 3 million figure is going in one direction.

The failure mode the reports aren't measuring

Every security-focused response to this data covers the same ground: identity verification, access controls, real-time monitoring, zero-trust architecture. That work matters. The threat of rogue agents coordinating to exfiltrate data, or of identity spoofing at the agent layer, is real.

But those threats have something in common: they're detectable. A compromised agent behaves abnormally. A spoofed identity triggers anomalies. Monitoring catches these because the signal is there to catch.

The knowledge failure is different. An agent that queries stale or contradictory documents behaves exactly as designed — it retrieves content and generates a response. No anomaly. No alert. Just a confident, well-formatted answer based on information that stopped being accurate months ago.

We've covered the governance blind spot this creates before: the access controls organizations are building around their agents say nothing about the quality of what those agents are actually reading. And when Microsoft announced it was managing 82 AI agents per employee, the coverage focused entirely on the governance structure — not once on what those agents knew.

What an invisible incident actually looks like

An HR agent answers employee questions about leave policy. The policy changed in January. Nobody updated the document. For two months, the agent tells employees they can carry over 10 days of leave when the actual limit dropped to 5. No security incident. No alert. Just wrong expectations building up across the workforce until HR starts getting complaints they can't explain.

A sales agent embedded in the CRM helps reps respond to pricing questions. Pricing was updated in Q4. The old sheet is still in the knowledge base alongside the new one. The agent sometimes pulls from the outdated version, whichever chunk scores highest for that query. Deals close at the wrong margin. Finance notices the discrepancy weeks later.

A clinical decision support tool references a drug interaction protocol that was superseded when new guidance came out. Providers use it in good faith. No security event. Just outdated information flowing through a clinical workflow.

None of these trip the monitoring systems. None show up in an incident report. They're not detectable as incidents at all. They look like normal agent operation, right up until someone notices the damage downstream.
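
To make the pricing scenario concrete, here's a minimal sketch of why the stale sheet can win. Term overlap stands in for embedding similarity, and the document names, prices, and query are invented for illustration; the point is that nothing in the ranking knows which sheet is current.

```python
# Minimal sketch, not a real deployment: term overlap stands in for embedding
# similarity, and the document names, prices, and query are made up.

from dataclasses import dataclass

@dataclass
class Chunk:
    doc: str
    text: str
    last_verified: str  # metadata the ranking below never looks at

KNOWLEDGE_BASE = [
    Chunk("pricing_2025_Q2.pdf",
          "Enterprise plan pricing is 40 dollars per seat per month with annual billing",
          "2025-05-01"),
    Chunk("pricing_2025_Q4.pdf",
          "Updated enterprise pricing effective Q4 is 55 dollars per seat per month",
          "2025-11-15"),
]

def score(query: str, chunk: Chunk) -> float:
    """Plain term overlap -- a stand-in for cosine similarity over embeddings."""
    q, t = set(query.lower().split()), set(chunk.text.lower().split())
    return len(q & t) / len(q)

def retrieve(query: str) -> Chunk:
    # Highest-scoring chunk wins. Freshness is never part of the ranking, so the
    # superseded sheet can beat the current one on a perfectly ordinary query.
    return max(KNOWLEDGE_BASE, key=lambda c: score(query, c))

best = retrieve("what is the enterprise plan pricing per seat per month")
print(best.doc, "->", best.text)  # prints the Q2 sheet here: wrong price, no error, no alert
```

Reword the query and the ranking can flip; either way, the retrieval call succeeds and nothing downstream has any reason to raise a flag.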

This isn't a theoretical risk category. The documented pattern of agentic AI failures tied to document and knowledge quality shows this is already happening in deployed systems. The security incident numbers are high because visibility is low. The knowledge incident numbers are unknown — because nobody's measuring them.

What governance actually requires

Security governance for AI agents is about controlling what agents can do and who can instruct them. That's necessary. Knowledge governance is about controlling what agents know, and keeping that knowledge accurate as the underlying reality changes.

These are different problems that require different infrastructure.

Security governance asks: does this agent have appropriate permissions? Is its identity verifiable? Is its behavior consistent with its role?

Knowledge governance asks: when did this document last get verified? Does it contradict anything else in the knowledge base? If a policy changed last week, was the old version removed or replaced? When an agent gives a wrong answer and a user flags it, does that feedback reach the source documents?
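
As an illustration (not any particular platform's implementation), those questions translate into checks you can actually run over document metadata. The field names, review window, and the simple claim-comparison heuristic below are assumptions made for the sketch.

```python
# Illustrative audit only -- not any particular product's implementation.
# Field names, the review window, and the claim-comparison heuristic are assumptions.

from datetime import date, timedelta

DOCUMENTS = [
    {"name": "leave_policy.md",       "last_verified": date(2025, 6, 10), "claims": {"carryover_days": 10}},
    {"name": "leave_policy_2026.md",  "last_verified": date(2026, 1, 15), "claims": {"carryover_days": 5}},
    {"name": "pricing_enterprise.md", "last_verified": date(2025, 5, 1),  "claims": {"price_per_seat": 40}},
]

REVIEW_WINDOW = timedelta(days=180)  # assumed policy

def needs_review(doc, today):
    """Q: when did this document last get verified?"""
    return today - doc["last_verified"] > REVIEW_WINDOW

def contradictions(docs):
    """Q: does any document disagree with another on the same claim?"""
    seen = {}
    for doc in docs:
        for claim, value in doc["claims"].items():
            if claim in seen and seen[claim][1] != value:
                yield claim, seen[claim][0], doc["name"]
            seen[claim] = (doc["name"], value)

today = date(2026, 3, 18)
for doc in DOCUMENTS:
    if needs_review(doc, today):
        print("stale, needs review:", doc["name"])
for claim, a, b in contradictions(DOCUMENTS):
    print(f"contradiction on '{claim}': {a} vs {b}")
```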

At Mojar AI, the knowledge management layer is built specifically for that second set of questions. The platform scans for contradictions across uploaded documents, flags content that may be outdated, and supports natural-language updates to the knowledge base itself. When agents generate feedback through user corrections or negative signals, that feedback can trigger source document review rather than getting lost. The knowledge base becomes something you can actually maintain, not just something you upload once and hope stays accurate.

The technical term for this is retrieval-augmented generation with active knowledge management. The practical translation is: your agents are only as good as what they're reading. If you don't maintain what they read, you don't control what they say.
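
One way to picture what "active" means at the retrieval layer, under the same illustrative assumptions: gate retrieved chunks on how recently they were verified, and route anything past the window to review instead of letting it feed a confident answer.

```python
# Same illustrative assumptions as the sketches above: chunk metadata carries a
# last_verified date, and the review window is an assumed policy, not a default.

from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=180)

def split_by_freshness(chunks, today):
    """Answer only from verified material; route the rest to review, don't blend."""
    fresh = [c for c in chunks if today - c["last_verified"] <= REVIEW_WINDOW]
    stale = [c for c in chunks if today - c["last_verified"] > REVIEW_WINDOW]
    return fresh, stale

fresh, stale = split_by_freshness(
    [
        {"doc": "leave_policy_2026.md", "last_verified": date(2026, 1, 15)},
        {"doc": "leave_policy.md",      "last_verified": date(2025, 6, 10)},
    ],
    today=date(2026, 3, 18),
)
print("cite:", [c["doc"] for c in fresh], "| flag for review:", [c["doc"] for c in stale])
```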

The takeaway

The 88% incident figure will drive a lot of security investment over the coming months. Good. Identity management, monitoring, and access control for AI agents are underbuilt and the numbers prove it.

But the organizations treating agent governance as a purely security problem are only closing half the risk surface. Their agents are querying documents that haven't been audited, contradictions that haven't been resolved, and policies that haven't been updated since the agents were deployed.

That risk is running right now. At 7.9 billion requests over two months. Across 3 million deployed agents. Producing answers that look correct, sound confident, and may be completely wrong.

Security alerts will tell you when your agents misbehave. Nothing will tell you when your documents do — unless you build something specifically to catch it.
