Industry News

Perplexity Lost in Federal Court Because It Confused Two Things. Enterprise AI Makes the Same Mistake Daily.

A federal judge blocked Perplexity's Comet agent from Amazon on one legal finding: user consent and platform authorization are different requirements. Enterprise AI assumes they're the same.

5 min read · March 11, 2026
AI Agents · Enterprise AI · Legal · Agentic AI · Knowledge Management

Perplexity thought it had a simple argument: the user handed Comet their Amazon login. They consented. Therefore, Amazon has to accept the agent. A federal judge in San Francisco just rejected that reasoning outright.

The take

The legal concept Judge Maxine M. Chesney applied is specific and worth reading carefully: "Perplexity, through its Comet browser, accessed Amazon user accounts 'with the Amazon user's permission but without authorization by Amazon.'"

User permission. Without platform authorization. Two distinct requirements. Perplexity had one. It never had the other. That gap is now a federal injunction.

The court found Amazon "likely to succeed" on both CFAA (Computer Fraud and Abuse Act) and California computer fraud claims, and it did something unusual: it waived the bond typically required of the party obtaining an injunction. That bond exists to protect the enjoined party if the injunction is later found to have been wrongly issued. Waiving it signals the judge found Amazon's position unusually strong. Perplexity has until March 16 to appeal to the 9th Circuit.

Every enterprise AI team should be reading this. Not because they're building shopping agents. Because the same two-test logic applies to virtually every AI agent running inside a corporate environment — and most enterprise deployments are skipping test two.

What happened

Amazon filed suit in November 2025 alleging Comet disguised itself as a standard Chrome browser and refused to identify itself as an AI agent (CNBC, Bloomberg). Amazon sent a cease-and-desist in October; Perplexity published a blog post titled "Bullying is not innovation." That post reads differently now.

Perplexity's core argument was that an agent "inherits the user's permissions." The court found that legally insufficient. The full CFAA question — whether the act applies to AI agents acting at user direction on third-party platforms — is still unsettled and won't be resolved until a full trial. But the preliminary injunction establishes the operating framework courts are applying: the two-question test.

  1. Did the user authorize the agent to act on their behalf?
  2. Did the platform authorize the agent to operate there?

Comet cleared the first bar. It failed the second. The court also ordered Perplexity to destroy any Amazon data collected via Comet during the period in question (GeekWire).

Amazon CEO Andy Jassy has said the company expects to work with third-party agents eventually, but "on its own terms." That's not a technology position. It's a statement about who controls the second authorization — and right now, Amazon's answer is Amazon.

What this means for enterprise AI teams

The implicit assumption running through most enterprise AI deployments is that user consent and system authorization are functionally the same thing. If an employee has access to a system and hands the agent their credentials, the agent should be treated like the employee. The Amazon/Perplexity ruling is the first major court decision to say that assumption is wrong.

Two scenarios playing out in enterprise deployments right now:

An internal AI agent browsing across HR, legal, or financial document systems does so because an employee authorized it. But "the platform" in enterprise terms — the HRIS, the contract management system, the finance database — almost certainly has not explicitly authorized an AI agent to operate inside it. The credentials passed. The authorization layer was skipped.

An AI agent integrated into Salesforce, ServiceNow, or any third-party SaaS via user credentials is in the same position. The employee authorized the connection. The platform never signed off on the agent itself. Under the two-test framework courts are now applying, that is a legal exposure. Not a technicality.
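
To make the gap concrete, here is a minimal sketch of what enforcing both checks could look like in an agent integration layer. It illustrates the two-test structure under assumed names (the consent map, the platform allowlist, and the request fields are all hypothetical), not any vendor's actual API:

```python
from dataclasses import dataclass

class AgentAuthorizationError(Exception):
    """Raised when either half of the two-test check is missing."""

@dataclass
class AgentActionRequest:
    agent_id: str   # an internal assistant or a third-party agent
    user_id: str    # the employee or customer directing the agent
    platform: str   # the system the agent wants to touch, e.g. "hris"
    action: str     # e.g. "read_document", "update_record"

def authorize_agent_action(
    req: AgentActionRequest,
    user_consents: dict[str, set[str]],       # user_id -> agents the user approved
    platform_allowlist: dict[str, set[str]],  # platform -> agents the platform approved
) -> None:
    # Test 1: did the user authorize the agent to act on their behalf?
    if req.agent_id not in user_consents.get(req.user_id, set()):
        raise AgentAuthorizationError(
            f"user {req.user_id} has not consented to agent {req.agent_id}")

    # Test 2: did the platform authorize the agent to operate there?
    # This is the check most deployments skip: passing the user's credentials
    # through is not the same as the platform signing off on the agent.
    if req.agent_id not in platform_allowlist.get(req.platform, set()):
        raise AgentAuthorizationError(
            f"platform {req.platform} has not authorized agent {req.agent_id}")

# The Comet pattern: the user consented, the platform never did, the request fails.
request = AgentActionRequest("assistant-v1", "employee-42", "hris", "read_document")
consents = {"employee-42": {"assistant-v1"}}
allowlist = {"hris": set()}  # the platform owner has approved no agents
try:
    authorize_agent_action(request, consents, allowlist)
except AgentAuthorizationError as err:
    print(err)  # fails test two
```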

The inverse is also instructive. A customer-facing AI agent scoped to a specific document set — one that can only retrieve from what the organization has explicitly authorized it to access — passes the two-test framework by design. There is no ambiguity about what it knows or where it operates. The user can ask anything; the agent can only reach what it has been given permission to know. That is the architecture the ruling implicitly validates.
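
A rough sketch of that scoped design, again with hypothetical names: the document set is fixed by the organization at deployment time, so the second test is satisfied by construction rather than checked per request.

```python
class ScopedRetriever:
    """Retrieval restricted to an explicitly authorized document set.

    The scope is set by the document owner at deployment time, not
    inherited from whichever user happens to be asking."""

    def __init__(self, authorized_doc_ids: set[str], store: dict[str, str]):
        self.authorized_doc_ids = authorized_doc_ids
        self.store = store

    def retrieve(self, doc_id: str) -> str:
        if doc_id not in self.authorized_doc_ids:
            # Outside the authorized set: the agent cannot reach it,
            # no matter what the user asked for.
            raise PermissionError(f"document {doc_id} is outside the agent's scope")
        return self.store[doc_id]

# The user can ask anything; the agent can only answer from what it was given.
store = {"faq-001": "Returns are accepted within 30 days.", "hr-0042": "(restricted)"}
support_agent_docs = ScopedRetriever(authorized_doc_ids={"faq-001"}, store=store)
print(support_agent_docs.retrieve("faq-001"))   # allowed
# support_agent_docs.retrieve("hr-0042")        # raises PermissionError
```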

This maps directly to a gap that enterprise AI security has not closed: teams are putting identity controls, behavior guardrails, and red-team testing in place. Most have not established explicit platform-level authorization for the agents running inside their own systems, let alone inside third-party ones.

The scoped agent isn't just a cleaner product design. It is increasingly the legally defensible one.

What comes next

Agent authorization law is not settled. The 9th Circuit could complicate things. A full trial is still pending, and the broader CFAA question — whether the act reaches AI agents acting under user direction — has never been resolved at the appellate level (SearchEngineJournal).

But the direction is consistent with what this court found and what Amazon has said publicly: user consent is necessary for agent operation. It is not sufficient. The gap between those two requirements is where enterprise legal risk is accumulating right now — in deployments that assumed the question was already answered.

It wasn't. And someone has to be the test case for when it is.

Related Resources

  • Your AI Agents Have a Credentials Problem — And That's Only Half of It
  • Enterprise AI Has Four Security Layers. Only Three Are Getting Built.