Contact
Privacy Policy
Terms of Service

©2026. Mojar. All rights reserved.

Free Trial with No Credit Card Needed. Some features limited or blocked.


Industry News

Your AI Agents Have a Credentials Problem — And That's Only Half of It

80% of Fortune 500 companies use AI agents. 1 in 3 are unsanctioned. Enterprise security is focused on agent identity. Nobody's talking about what those agents actually know.

5 min read • March 10, 2026
AI Agents • Enterprise AI • AI Governance • Knowledge Management • Security

Microsoft released a number this week that every enterprise IT team should sit with: 80% of Fortune 500 companies already use AI agents. Nearly 1 in 3 of those agents are unsanctioned — running without IT approval, without governance controls, without audit trails. The industry response has been swift: get control of agent identity. Authenticate them. Authorize them. Track what they're accessing. That is the right response. It is not the complete one.

Everyone Is Asking Who the Agent Is

Microsoft's answer to the governance gap is Agent 365, a $15/user/month control plane that gives IT teams visibility into agent activity and tools to govern agent identity across the enterprise. It launched this week alongside Microsoft 365 E7, and the timing tracks: agents have moved from experimental to operational, and the monitoring infrastructure hasn't kept up.

Vasu Jakkal, CVP at Microsoft Security, put it plainly in an interview with VentureBeat: "We're seeing them deeply embedded in organizations, in the operational structure. At the same time, some organizations have a visibility gap, and that visibility gap creates business risk."

That visibility gap is real. NIST's Zero Trust Architecture (SP 800-207) explicitly requires all entities — including non-human software — to be authenticated and authorized separately. Most enterprises haven't done this for agents. The urgency is legitimate.

Nancy Wang, CTO at 1Password, described the problem clearly in VentureBeat's identity coverage: "An AI agent is not a user you can train or periodically review. It is software that can be copied, forked, scaled horizontally, and left running in tight execution loops across multiple systems. If we continue to treat agents like humans or static service accounts, we lose the ability to clearly represent who they are acting for, what authority they hold, and how long that authority should last."

She's right. The identity problem is serious and underaddressed.
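Wang's framing (who the agent acts for, what authority it holds, and how long that authority lasts) maps naturally onto a scoped, short-lived grant rather than a static service account. A minimal Python sketch of that idea follows; every name in it is hypothetical, not a real product's API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: a scoped, short-lived grant for an agent, capturing
# who it acts for, what it may do, and when its authority lapses.
@dataclass(frozen=True)
class AgentGrant:
    agent_id: str         # the agent's own identity, not a shared account
    acting_for: str       # the human or team that delegated authority
    scopes: tuple         # explicit actions the agent may take
    expires_at: datetime  # authority expires automatically

    def allows(self, scope: str) -> bool:
        # Authority requires both an explicit scope and an unexpired grant.
        return scope in self.scopes and datetime.now(timezone.utc) < self.expires_at

grant = AgentGrant(
    agent_id="agent-invoice-bot-7",
    acting_for="alice@example.com",
    scopes=("invoices:read", "invoices:draft"),
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)

print(grant.allows("invoices:read"))    # True while the grant is fresh
print(grant.allows("invoices:delete"))  # False: that scope was never granted
```

The point is the shape, not the implementation: authority is copied along with the agent unless it is bound to a delegator, a scope list, and an expiry.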

But there's a second question that nobody in this conversation is asking.

Nobody Is Asking What the Agent Knows

Here's the failure mode that gets skipped in every governance discussion: an AI agent with perfect credentials, operating on a knowledge base full of outdated policies and contradictory documents, still produces wrong decisions. You've solved who the agent is. You haven't solved what the agent believes.

Enterprise document repositories weren't built with AI agents in mind. They were built by humans, for humans, over years. Inside them: compliance policies that were updated in one department but not another. Pricing guides from three product cycles ago that nobody deleted. SOPs that conflict across two documents that were both edited in the same quarter. Safety procedures from before a regulatory change.

A human employee navigates that mess with judgment. They check the date on a document. They ask a colleague. They know which version the team actually uses. An AI agent doesn't have that context. It retrieves what's there. If three documents give three different answers, the agent synthesizes them, picks the most confident-sounding one, or averages them. None of those paths leads somewhere good.
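The "three documents, three answers" failure is easy to see in a toy example. This Python sketch (hypothetical documents and fields) shows the safer behavior: detect the disagreement and escalate, rather than letting the agent synthesize or pick the most confident-sounding answer:

```python
# Hypothetical sketch: three retrieved documents answer the same question
# differently. Surface the conflict instead of letting the agent guess.
docs = [
    {"source": "pricing_2023.pdf", "answer": "$49/seat", "updated": "2023-02-01"},
    {"source": "pricing_2024.pdf", "answer": "$59/seat", "updated": "2024-06-15"},
    {"source": "sales_wiki.md",    "answer": "$49/seat", "updated": "2022-11-30"},
]

answers = {d["answer"] for d in docs}
if len(answers) > 1:
    # Conflicting sources: flag for human review instead of averaging.
    print("CONFLICT across sources:", sorted(answers))
else:
    print("Consistent answer:", answers.pop())
```

A human would glance at the dates and pick the 2024 document; an agent without that heuristic treats all three as equally valid retrievals.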

The result is autonomous decisions made at machine speed on top of document debt that accumulated at human speed: a failure mode that now carries direct federal compliance risk under the FTC.

When Both Problems Appear Together

This isn't a theoretical concern. The compound failure mode — unsanctioned identity + ungoverned knowledge — has already shown up in the wild.

Amazon's AI outage in late 2025 was a visible version of the knowledge problem: an AI agent making autonomous decisions without the institutional knowledge to understand that those decisions were wrong. The agent had access. It lacked grounding. The gap between those two things was the failure.

Microsoft's "double agents" framing, first introduced by security executive Charlie Bell in November 2025, focuses on agents being hijacked or manipulated by adversaries. That's a real threat. But the mundane version — an authorized agent confidently acting on stale information — is more common and arguably harder to catch. A security audit finds the hijacked agent. The well-behaved agent executing wrong decisions looks fine in the logs.

What Actually Solves This

The enterprises deploying AI agents responsibly aren't just managing identity. They're managing the knowledge substrate those agents operate on, which is exactly why the lack of a knowledge foundation is the number one reason enterprise AI pilots fail to reach production.

That means running continuous audits on document repositories to catch contradictions before an agent encounters them. It means flagging outdated files rather than leaving them silently in the corpus. It means treating negative agent feedback as a signal to investigate and correct source documents — not just a UI metric.
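The staleness-flagging half of that audit can be sketched in a few lines. This is an illustration with invented filenames and a made-up one-year threshold, not any vendor's implementation:

```python
from datetime import date

# Hypothetical sketch: flag documents older than a freshness threshold
# so they get reviewed before an agent can retrieve and act on them.
STALE_AFTER_DAYS = 365

corpus = {
    "refund_policy_v2.md": date(2025, 9, 1),
    "refund_policy_v1.md": date(2022, 3, 14),
    "onboarding_sop.md":   date(2023, 1, 5),
}

today = date(2026, 3, 10)  # pinned for a stable example
stale = [name for name, updated in corpus.items()
         if (today - updated).days > STALE_AFTER_DAYS]
print(stale)  # ['refund_policy_v1.md', 'onboarding_sop.md']
```

Real contradiction detection is harder (it requires comparing what documents say, not just when they were touched), but even this date check removes the silent-corpus problem: nothing sits in the knowledge base unexamined.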

When agents make a wrong call, the question "who authorized this agent?" is only one part of the investigation. The other question is "what did the agent retrieve, and is that information accurate?" An audit trail of agent actions is only useful if the actions were based on correct information in the first place. As we noted when looking at the enterprise AI security stack, accuracy is the missing Layer 4 that nobody is building.
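An audit record that answers both questions has to pair the action with the exact sources retrieved. A minimal sketch of that record shape, with hypothetical field names:

```python
import json
from datetime import datetime, timezone

# Hypothetical sketch: pair each agent action with the exact documents
# (and versions) it retrieved, so "what did the agent act on?" is answerable.
def audit_record(agent_id, action, sources):
    return {
        "agent_id": agent_id,
        "action": action,
        "retrieved_sources": sources,  # document IDs + versions, not just "the KB"
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

rec = audit_record(
    "agent-support-3",
    "issued refund per policy",
    [{"doc": "refund_policy_v1.md", "version": "2022-03-14"}],
)
print(json.dumps(rec, indent=2))
```

With this record, an investigator can see immediately that the agent acted on a 2022 policy version, which is a knowledge failure, not an identity failure.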

This is the layer companies like Mojar AI are building — not the agent identity layer, but the knowledge governance layer that agents reason on. Contradiction detection across documents. Feedback-driven correction of source material. Knowledge bases that stay current rather than accumulating debt. The infrastructure that makes "the agent said so" a reliable claim.

The Conversation That Needs to Happen Next

Microsoft's Agent 365 is a good product for a real problem. Fortune 500 IT teams should take agent identity seriously and use tools that give them visibility.

They should also ask the second question: what is their agent operating on?

A company with clean agent governance and a broken knowledge base has secured the front door while leaving the foundation cracked. Both problems need solving. The security community is having one of those conversations right now. The other one hasn't started yet.

Frequently Asked Questions

What is AI agent governance?

AI agent governance is the set of controls that determine what AI agents can access, what actions they can take, and whether their outputs are accurate. It has two layers: identity governance (who the agent is, what systems it can reach) and knowledge governance (what information it retrieves and acts on). Most enterprise security frameworks address only the first layer.

What are unsanctioned AI agents, and why are they risky?

Unsanctioned agents operate without IT approval, audit trails, or access controls. According to Microsoft's February 2026 Cyber Pulse report, 29% of enterprise AI agents run without security team sign-off. They can access sensitive systems, execute workflows, and make decisions with no visibility into what they're doing or why.

What is knowledge governance for AI agents?

Knowledge governance ensures that the information an AI agent retrieves is current, accurate, and internally consistent. Without it, agents pull from document repositories full of outdated policies, conflicting procedures, and superseded guidance, and act on that information autonomously.

Related Resources

  • 88% of Enterprises Say They're AI-Ready. 61% Can't Ship Because Their Data Isn't Trusted.
  • Enterprise AI Has Four Security Layers. Only Three Are Getting Built.
  • Amazon's AI Outage Crisis Isn't an AI Problem — It's a Knowledge Problem
  • After March 11, Your AI Chatbot's Wrong Answers Might Be a Federal Compliance Problem