Industry News

Courts Are Not Debating AI Anymore. They're Debating Provenance.

L.A. Superior Court is piloting AI for case review, summarization, and draft rulings. The real governance question isn't whether AI is accurate—it's whether its document trail is defensible.

6 min read • March 22, 2026
Legal AI · Judicial Technology · AI Governance · Document Management · Court Operations

The question has changed

For the last two years, legal commentary about AI circled the same abstract debate: should courts use AI at all? That debate is over.

In February 2026, the Superior Court of Los Angeles County launched a pilot giving civil judges access to AI software called Learned Hand. The tool summarizes hundreds of pages of legal motions, conducts legal research, analyzes cases, and drafts tentative rulings. It's already running in court systems across 10 states. The Michigan Supreme Court has been using it since last summer.

The policy question has been decided. Courts are using AI. The question now is whether the governance layer beneath that decision has been worked out.

Why this moment matters

L.A. Superior Court handles approximately 1.2 million new filings per year across 36 courthouses, with roughly 600 judicial officers (Los Angeles Times). When AI touches a caseload of that size — summarizing motions, drafting rulings, flagging precedents — even a modest error rate compounds fast.

Court leadership has been unambiguous about the boundaries: judges must review and edit every AI-generated output before it becomes a tentative ruling. "Generative AI will not make judicial decisions or replace judicial discretion," said David Slayton, Executive Officer and Clerk of Court, in the program announcement. Any use is "limited to administrative or research support for judicial officers, and only with appropriate safeguards" (Daily News).

That's a clear policy position. The harder operational question is what it requires in practice. When a judge reviews an AI-drafted ruling, can they actually verify what the system read, what it weighted, and what it might have missed?

The four questions every court AI deployment has to answer

What did the system actually read?

Every AI-generated legal analysis starts with a corpus of documents — case filings, exhibits, motions, precedents, procedural rules. The issue isn't just whether those documents were accurate; it's whether the corpus was scoped correctly. A retrieval system pulling from too broad a document set might surface precedents from the wrong jurisdiction, mix public materials with sealed ones, or retrieve superseded procedural guidance.

Defining which documents an AI system can access, and enforcing those limits at the retrieval level, is a document infrastructure problem — not something handled by model quality alone.
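To make that concrete, here is a minimal sketch of retrieval-level scoping, assuming a few illustrative metadata fields (`case_id`, `jurisdiction`, `superseded`) stamped on each document at ingestion. A production system would push the same predicate into its vector store's filter syntax rather than scan in Python, but the principle is the same: scope is enforced before ranking, not after.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    doc_id: str
    text: str
    case_id: str
    jurisdiction: str      # e.g. "CA" for California state materials
    superseded: bool = False

def scoped_candidates(corpus: list[Document], *, case_id: str,
                      jurisdiction: str) -> list[Document]:
    """Apply the approved scope BEFORE similarity ranking.

    Out-of-jurisdiction precedent and superseded guidance never reach
    the ranker, so no query phrasing can surface them by accident.
    """
    return [
        d for d in corpus
        if d.case_id == case_id
        and d.jurisdiction == jurisdiction
        and not d.superseded
    ]
```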

Can generated text be traced back to specific source evidence?

The National Center for State Courts has put source verification at the center of its practitioner AI guidance, specifically because hallucination risk is real in legal workflows. A factual claim in an AI-drafted ruling that can't be tied to a specific exhibit or cited case isn't an AI quirk — it's a procedural failure. Courts require that factual conclusions have evidentiary support. Outputs with no source attribution make that impossible to satisfy.

This is why source-linked retrieval — where every AI output maps back to the specific document and passage it came from — is becoming the floor expectation for judicial AI, not a feature. According to Intelligent CIO, Learned Hand's platform links all outputs to underlying case materials and runs verification checks before surfacing anything to a user. That design commitment is now the standard every judicial AI system will be measured against.
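None of this presumes Learned Hand's internals, but the general pattern is straightforward to implement: a retrieval layer can be made to refuse any answer it cannot cite. A sketch with hypothetical types, where every passage handed to the model carries its own citation:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class Citation:
    doc_id: str    # stable identifier of the source filing
    page: int      # page within the filing
    start: int     # character offsets of the supporting passage
    end: int

@dataclass(frozen=True)
class Passage:
    text: str
    citation: Citation

def build_answer_context(passages: list[Passage]) -> dict:
    """Every passage carries its citation, and an empty retrieval is a
    hard failure rather than an unsourced, free-floating answer."""
    if not passages:
        raise LookupError("no source-attributed passages; decline to draft")
    return {
        "context": [p.text for p in passages],
        "citations": [asdict(p.citation) for p in passages],
    }
```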

How are sealed and non-public materials handled?

Courts work with sealed filings, confidential settlements, juvenile records, and attorney-client communications. If a retrieval system's document boundaries aren't enforced at the ingestion level, a research query in one case could surface materials that belong to another. This has to be solved before deployment. It's a scoping and access-control requirement, not something you patch after an incident.
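One common enforcement pattern, sketched here with a hypothetical `index` interface: partition at write time so sealed material never shares an index with public filings, then derive the read-side partition list from the requester's clearance rather than from anything in the query.

```python
def ingest(doc, index) -> None:
    """Write-time partitioning: a query against 'public/*' partitions
    cannot leak a sealed exhibit, because it was never stored there."""
    prefix = "sealed" if doc.sealed else "public"
    index.write(partition=f"{prefix}/{doc.case_id}", doc=doc)

def retrieve(query, index, *, case_id: str, cleared_for_sealed: bool):
    """Read side mirrors the write side: the partition list comes from
    the requester's clearance, never from the query text itself."""
    partitions = [f"public/{case_id}"]
    if cleared_for_sealed:
        partitions.append(f"sealed/{case_id}")
    return index.search(query, partitions=partitions)
```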

What gets logged for later review?

Lawyers have already raised doubts about proposed federal rules governing AI-generated evidence (Reuters). That debate isn't settled. But the operational expectation is forming: if AI influenced a ruling, there should be a way to reconstruct what the system saw and what it surfaced. That means retention policies for retrieval logs and audit-friendly document operations — chain-of-custody thinking applied to knowledge workflows.

This is the same challenge enterprises are hitting in discovery and records management: AI logs are becoming evidentiary artifacts, and most organizations weren't built to treat them that way.
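As a sketch of what "audit-friendly" can mean mechanically: an append-only log where each retrieval records the query, the documents returned, and a content hash of each document as it existed at query time. This uses only the standard library; the record shape is illustrative, not a standard.

```python
import datetime
import hashlib
import json

def log_retrieval(log_path: str, query: str, retrieved: list[dict]) -> None:
    """Append one record per retrieval: what was asked, what came back,
    and a SHA-256 of each document's text at query time -- enough to
    later reconstruct what the system saw, and to detect after-the-fact
    edits to the underlying documents."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "query": query,
        "documents": [
            {
                "doc_id": d["doc_id"],
                "sha256": hashlib.sha256(d["text"].encode("utf-8")).hexdigest(),
            }
            for d in retrieved
        ],
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```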

What this means for court operations

The L.A. pilot and the judges' consortium — U.S. judges have formed a group to share AI lessons, risks, and usage patterns rather than develop policies in isolation — signal that judicial AI is moving from cautious experimentation toward institutional adoption. Standardization follows.

That means courts and legal technology vendors will be evaluated against a checklist that isn't primarily about model accuracy. Document scoping, source attribution, sealed-material enforcement, audit trail reconstruction — these are infrastructure requirements. A system that retrieves from well-governed, source-attributed document sets hallucinates less and leaves an evidence trail that can be verified. The cost of getting this wrong is already measurable in sanctions.

Courts also work with filings in every conceivable format — scanned PDFs, handwritten exhibits, legacy formats, digital-native submissions. Before an AI system can reliably summarize or analyze this material, it has to read it cleanly. That's not a given. Many legal repositories contain low-quality scans that standard parsing pipelines fail on.
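The usual mitigation is a quality-gated pipeline: keep the embedded text layer when a page has one, fall back to OCR when it doesn't. A sketch using pdfplumber and pytesseract (both real libraries; the 50-character threshold is an illustrative stand-in for a proper extraction-quality check):

```python
import pdfplumber      # pip install pdfplumber
import pytesseract     # pip install pytesseract (requires the tesseract binary)

def extract_pages(pdf_path: str) -> list[str]:
    """Per-page routing: digital-native pages keep their embedded text;
    low-yield pages (scans, stamps, handwriting) fall through to OCR."""
    pages = []
    with pdfplumber.open(pdf_path) as pdf:
        for page in pdf.pages:
            text = page.extract_text() or ""
            if len(text.strip()) < 50:  # likely a scan with no text layer
                image = page.to_image(resolution=300).original
                text = pytesseract.image_to_string(image)
            pages.append(text)
    return pages
```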

The knowledge layer is the defensibility layer

The requirements for trustworthy judicial AI — approved document scopes, source-attributed retrieval, reliable parsing of varied and messy document formats, contradiction management across guidance materials, audit-friendly operations — are the same requirements that serious RAG deployments face in any regulated domain. Platforms like Mojar AI are built around these constraints: hybrid document ingestion that handles scanned and degraded PDFs, source citation on every retrieval, and knowledge base governance that keeps document sets consistent and current.

The point isn't that courts should adopt any particular platform. It's that this class of capability — governed document infrastructure beneath the AI layer — is what separates deployments that hold up under scrutiny from those that don't. Access controls and model safety aren't sufficient. The evidence layer — what the AI saw, where it came from, and why it surfaced — is the defensibility requirement.

What to watch

Disclosure requirements for AI-assisted rulings remain incomplete. Courts and bar associations are still working through how and when judges must disclose AI involvement. Retention and audit rules for retrieval logs haven't been standardized. The Philippines Supreme Court approved an AI regulatory framework for the judiciary in March 2026 — other jurisdictions are moving on their own timelines.

The L.A. pilot's findings will directly inform how other large court systems design their governance frameworks. Courts aren't evaluating whether AI is accurate enough. They're deciding whether its document trail is defensible enough. Those are different questions, and the second one is harder.

Frequently Asked Questions

What is the L.A. Superior Court AI pilot?

In February 2026, L.A. Superior Court gave a small group of civil judges access to Learned Hand, an AI tool that summarizes filings, conducts legal research, and drafts tentative rulings. Judges must review and edit all AI outputs before adoption. The court serves over 10 million residents and processes approximately 1.2 million new cases per year.

Why does provenance matter for AI-assisted rulings?

When AI drafts or informs a legal ruling, every institution involved needs to verify what source materials the system relied on. A ruling that can't be traced to specific record evidence is not defensible. Provenance — the auditable chain from AI output back to the documents it read — is now the practical governance requirement in court AI deployments.

What safeguards govern judicial AI use today?

The National Center for State Courts has published practitioner guidance focused on hallucination identification, source verification, and responsible use in legal workflows. Court pilots like L.A.'s require judges to review and edit every AI-generated output. A growing consortium of U.S. judges is sharing lessons across jurisdictions to build shared risk frameworks.

Related Resources

  • Three Courts. One Week. Legal AI Hallucinations Just Got Expensive.
  • Guardrails Aren't Enough. Enterprises Need to Prove What Their AI Saw.
  • AI Prompts, Outputs, and Retrieval Logs Are Becoming Records Problems