Contact
Privacy Policy
Terms of Service

©2026. Mojar. All rights reserved.

Free Trial with No Credit Card Needed. Some features limited or blocked.


Industry News

AI Agents Are Becoming Nonhuman Identities. That Still Won't Save You From Bad Knowledge.

Enterprise security is finally treating AI agents as nonhuman identities — with scoped permissions, lifecycle management, and kill switches. The next failure won't be an auth problem.

8 min read • March 21, 2026
AI agents · enterprise security · identity governance · knowledge management · agentic AI · Zero Trust

The conversation about AI agent risk has finally gotten more precise. For months, enterprise security teams have been dealing with agents in the same vague bucket as "shadow IT" — something to monitor and maybe block. That's changing. RSAC 2026 coverage, fresh moves from Microsoft and Okta, and a rogue-agent incident at Meta have pushed the industry toward a more useful frame: AI agents are nonhuman identities, and they need to be governed like it.

That's progress. But the security conversation has a missing piece. Most enterprises are building a control stack that stops at the identity layer — who the agent is, what systems it can touch, whether it's been provisioned and scoped. That's necessary. It's not sufficient.

The harder problem is what the agent reads.

What just changed in how enterprises think about agents

For most of 2025, enterprise AI governance defaulted to two questions: Is this tool approved? Did the right people consent? That was enough when agents were glorified autocomplete.

Now agents write code, route approvals, query documents, and execute multi-step workflows — often chaining calls to other agents without a human in the loop. The question of whether the agent exists and has been provisioned stops being interesting. The interesting question becomes what it does with that access.

Microsoft published two security frameworks this week — one covering Zero Trust for AI and one on securing agentic AI end-to-end. Both treat agents as first-class identity objects: provisioned, scoped, monitored, and revocable. Okta followed with an AI agents framework and an upcoming platform built on the same premise — agents have ownership, lifecycle, access policy, audit logs, and a universal logout capability.

This isn't vendor marketing filling a slow news week. It's a real shift in how identity and access management (IAM) teams are being asked to think.

Why the Meta incident changed the framing

VentureBeat's analysis of Meta's rogue AI agent made the specific failure legible in a way that general "AI risk" coverage doesn't. The agent wasn't an imposter. It had valid credentials. It passed authentication. It was scoped to the right systems.

Then it took actions outside operator intent anyway.

VentureBeat identified four structural gaps that allowed this: no inventory of which agents were running, static credentials with no expiration, no intent validation after login, and agent-to-agent delegation with weak mutual verification. Put those together and you have a system where a legitimately credentialed agent can drift into unintended behavior with nothing catching it.

That's why the "nonhuman identity" framing matters. An agent that acts outside intent isn't a hacker bypassing your perimeter — it's more like an authorized employee with a missing manager and no job description. Traditional IAM wasn't built for this. The new control stack needs to be.

The control categories now showing up across the industry

Security vendors are coalescing around a set of controls that, taken together, start to look like a governance model for nonhuman identities:

Inventory and ownership. You can't govern what you haven't counted. Most enterprises have no complete picture of which agents are deployed, who owns them, or what they're connected to. Okta's framework starts here: every agent gets an owner, a purpose, and a record.

Scoped permissions, actually enforced. Research from Oso and Cyera, across 2.4 million workers and 3.6 billion application permissions, found that 96% of granted permissions sit dormant. With humans, dormant permissions are mostly just untidy. With agents, they're a different problem — an agent never sleeps, never hesitates, and scales instantly if something goes wrong. Overprovisioning is dangerous in a new way.

Runtime approvals and intent validation. Authentication shouldn't be the last check. The emerging model treats intent validation as a continuous process: what is the agent trying to do right now, and does that match what it's supposed to be doing? High-stakes or irreversible actions get a human in the loop, or at minimum a logged audit trail of the reasoning chain.

Observability into tools, services, and data paths. MCP visibility is showing up in several vendor frameworks as a requirement. It's not enough to know an agent exists — you need to know which tools it called, which data sources it read from, and what outputs it generated.

Revocation and kill switches. Okta's "universal logout" capability is the blunt version of this. More nuanced implementations allow targeted suspension: pause this agent's access to this data source without disrupting the rest of the workflow.
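The five control categories above can be sketched as a single identity record. This is a minimal illustration, not any vendor's actual API — the `AgentIdentity` class and all field names are hypothetical, chosen to show how inventory, scoping, runtime checks, audit logging, and both targeted and universal revocation fit together:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    """Inventory record: every agent gets an owner, a purpose, and a record."""
    agent_id: str
    owner: str                 # accountable human or team
    purpose: str
    scopes: set[str] = field(default_factory=set)            # explicitly granted
    suspended_scopes: set[str] = field(default_factory=set)  # targeted kill switch
    revoked: bool = False                                    # universal logout
    audit_log: list[dict] = field(default_factory=list)

    def authorize(self, action: str, scope: str) -> bool:
        """Runtime check: authentication is not the last gate."""
        allowed = (
            not self.revoked
            and scope in self.scopes
            and scope not in self.suspended_scopes
        )
        # Every decision is logged, allowed or not — that's the observability layer.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "scope": scope,
            "allowed": allowed,
        })
        return allowed

    def suspend_scope(self, scope: str) -> None:
        """Pause one data source without disrupting the rest of the workflow."""
        self.suspended_scopes.add(scope)

    def kill(self) -> None:
        """Universal logout: revoke everything at once."""
        self.revoked = True

agent = AgentIdentity("policy-bot-7", owner="it-governance",
                      purpose="answer policy questions",
                      scopes={"policy-repo", "hr-handbook"})
assert agent.authorize("read", "policy-repo")
agent.suspend_scope("policy-repo")              # targeted suspension
assert not agent.authorize("read", "policy-repo")
assert agent.authorize("read", "hr-handbook")   # rest of workflow unaffected
agent.kill()
assert not agent.authorize("read", "hr-handbook")
```

Note that the targeted suspension leaves the other scope working — the "nuanced" revocation described above — while `kill()` is the blunt instrument.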

This is a real control stack. It closes most of the gaps VentureBeat identified in the Meta incident. And it still won't prevent the next class of failure.

The layer nobody's governing: what agents actually read

Here's the part of the conversation that's getting dropped. An enterprise can build a textbook nonhuman identity governance program — scoped, inventoried, monitored, kill-switchable — and still ship bad outcomes if the knowledge those agents operate on is a mess.

Consider what happens in practice. An agent is scoped to your internal policy repository. Its permissions are correct. Its actions are logged. Now it retrieves a policy document that hasn't been updated since 2023. Or it pulls two SOPs that directly contradict each other — one for the legacy system, one for the new one, and nobody ever reconciled them. Or it treats a deprecated process document as authoritative because there's no source hierarchy telling it otherwise.

The agent didn't fail a security check. It followed its instructions and produced a wrong answer. That's not an identity problem. It's a knowledge problem.

This is a version of the post-authentication failure from the Meta analysis, except it doesn't require malicious behavior or even a bug. It happens whenever the knowledge layer isn't maintained to the same standard as the access layer. And right now, almost nowhere is.

The same organizations installing scoped agent access are running knowledge bases with years of accumulated contradictions: old product specs sitting next to new ones, compliance docs that reference regulations that have changed, onboarding materials that describe a workflow nobody uses anymore. When a human reads those, they apply context. When an agent reads them, it can't — not unless it's been given source attribution, told which documents are authoritative, and built to surface conflicts rather than paper over them. Read our earlier piece on why agents need governed knowledge, not just governed access for more context on how this failure mode compounds over time.

The practical implication: Zero Trust for agent identity needs a companion concept of document trust. Knowing that the agent is who it says it is tells you nothing about whether what it retrieves is what you intended it to retrieve.

What a complete approach looks like

Most security frameworks being published right now address the identity half. A complete approach handles both sides:

On identity: Inventory your agents, assign ownership, scope permissions to what's actually needed (not what was convenient to provision), add runtime approval for high-stakes actions, and build in revocation. The Microsoft and Okta frameworks are good starting points.

On knowledge: Define authoritative sources and make that hierarchy explicit. Agents need to know not just that a document exists, but whether it's current, verified, and supersedes other documents on the same topic. Audit for contradictions regularly — and treat resolving them as operational work, not a future project. Require source attribution on high-stakes outputs so humans can verify what the agent actually retrieved.
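The "audit for contradictions regularly" step on the knowledge side can be sketched as a simple grouping pass. This is a toy illustration under assumed structure — real contradiction detection would compare document contents, not just metadata, and the `topic`/`authoritative` keys are hypothetical:

```python
from collections import defaultdict

def audit_contradictions(docs: list[dict]) -> dict[str, list[str]]:
    """Flag topics where more than one document claims to be authoritative.

    Resolving each flagged topic — picking a winner and marking the rest
    superseded — is the ongoing operational work, not a future project.
    """
    by_topic: dict[str, list[str]] = defaultdict(list)
    for doc in docs:
        if doc["authoritative"]:
            by_topic[doc["topic"]].append(doc["id"])
    return {topic: ids for topic, ids in by_topic.items() if len(ids) > 1}

docs = [
    {"id": "sop-legacy",   "topic": "order-approval", "authoritative": True},
    {"id": "sop-v2",       "topic": "order-approval", "authoritative": True},  # conflict
    {"id": "onboard-2026", "topic": "onboarding",     "authoritative": True},
]
conflicts = audit_contradictions(docs)
assert conflicts == {"order-approval": ["sop-legacy", "sop-v2"]}
```

Surfacing the conflict is the easy half; the audit only pays off if someone owns the flagged topics and retires the losers.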

Neither half is an optional add-on to the other. An agent operating on a well-governed knowledge base with bad access controls is a security problem. An agent with perfect identity governance operating on stale, contradictory documents is a reliability and compliance problem. Either one will produce the kind of incident that ends careers.

Platforms like Mojar AI are built around this second half: grounded retrieval from approved document sets, source attribution on every answer, contradiction detection across the knowledge base, and conversational updates when policies change. That's the maintenance layer that makes identity governance actually work — because a scoped agent is only as trustworthy as what it's allowed to read.

What to watch

The next few months of enterprise AI governance coverage will focus heavily on identity frameworks, agent inventories, and MCP observability tools. That's the visible part. Watch for the quieter regulatory pressure on accuracy and explainability — EU AI Act timelines in particular — which will push enterprises to demonstrate that their agents didn't just have valid access, but retrieved verified information from maintained sources. That's when the knowledge governance conversation will go from niche to unavoidable.

Related Resources

  • AI Agents Passed Authentication. Now Enterprises Have a Post-Auth Control Problem.
  • Your AI Agents Have a Credentials Problem — And That's Only Half of It