Public-Records AI Is Becoming a Document-Governance Test
FOIA offices are turning to AI under backlog pressure. The harder problem isn't staffing — it's whether the records layer beneath the AI is governable.
The situation in one sentence
Federal FOIA offices are bleeding staff, drowning in requests, and reaching for AI to fill the gap — while a live controversy in Michigan is showing exactly what happens when the governance layer isn't ready.
What happened
Three things converged this week to make public-records processing a genuine AI governance story.
First, the annual FOIA reports are in. The Department of Defense's backlog rose 42% to more than 30,000 cases by the end of FY2025 (DoD annual FOIA report). The driver: 37% loss or turnover among FOIA officers (DoD chief FOIA officer report), with some components down to a single person and DTIC's FOIA staff cut to zero. The Commerce Department's backlog climbed over 500 requests. Education's nearly doubled.
Second, DOJ's Office of Information Policy held a Sunshine Week webinar on March 18. Director Sean Glendening told attendees, "FOIA is kind of the ultimate big data problem to solve, which is what AI is great at" (DOJ OIP blog). DoD is already working with its CIO on "agentic, generative and/or predictive artificial intelligence" for FOIA processing.
Third, the Detroit Free Press reported that Michigan's Department of Technology, Management and Budget responded to a FOIA request with a cost estimate that included $739.50 for a "FOIA review tool." That tool turned out to be Relativity, a litigation eDiscovery platform, provided through a state contract with Epiq eDiscovery Solutions (Detroit Free Press). The problem: the contract wasn't written for FOIA processing. Charges for AI tooling don't appear in Michigan FOIA law's list of permissible fees. The contractor wasn't identified on the invoice, as state law requires.
This is what AI-assisted public records looks like when the governance hasn't caught up to the technology.
Why it matters
FOIA and public-records law exist to hold government accountable. Every step in the process — which records were searched, what gets withheld, why, how much the requester is charged — is supposed to be explainable and auditable.
AI changes the mechanics of that workflow. When an algorithm decides which documents are responsive, a different class of question follows: how did it decide? What sources did it search? What did it not find? When it redacted a passage, what rule applied?
The Michigan case isn't hypothetical. It shows agencies deploying AI-adjacent tools in FOIA responses without properly disclosing the tooling, without confirming it's legally permissible, and without being able to explain the cost basis.
The National Archives has been watching this space closely. Its NexGen FOIA Tech Showcase 3.0 is specifically soliciting AI-supported solutions for case processing, redaction, eDiscovery, and records-management connectivity (NARA/OGIS RFI). That signals where federal FOIA is headed: faster, AI-mediated, and carrying governance requirements that most agencies aren't ready for.
The same pattern appears in enterprise document workflows. AI prompts, outputs, and retrieval logs are already becoming records problems in regulated industries. FOIA makes the failure visible faster because requesters can challenge the response directly and in public.
The breakdown
Backlog pressure is pushing FOIA toward AI before agencies are ready
The numbers aren't going the other way. Federal News Network reported that most federal FOIA offices are now turning to AI and automation to cover staffing gaps — but those efforts are early and have not moved the needle. A 42% backlog increase at DoD, with staff down 37%, means the gap between incoming requests and processing capacity is widening. AI is being adopted out of necessity, not from a position of operational readiness.
That's not a great way to introduce AI into a legally sensitive workflow.
Search quality becomes a transparency problem
A FOIA response depends on finding the right records. When AI assists with search — filtering by relevance, ranking results, surfacing matches — a new question enters the picture: can the agency explain what was searched and how?
If a requester believes responsive records were missed, they can challenge the adequacy of the search. That challenge is now technical: what query did the model use? What did its retrieval layer access? Was there a document repository it couldn't parse?
Missed records in a FOIA response are a legal problem. When the search was AI-assisted, they are also a records architecture problem.
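One way to make an AI-assisted search defensible is to log every retrieval step as it happens. A minimal sketch, assuming a hypothetical schema (the field names and example sources here are illustrative, not any agency's actual system):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SearchAuditEntry:
    """One logged step of an AI-assisted records search (hypothetical schema)."""
    query: str                  # the literal query the retrieval layer ran
    repositories: list[str]     # systems actually searched (email archive, file shares, ...)
    hits: int                   # documents returned
    skipped: list[str] = field(default_factory=list)  # sources that could not be parsed or reached
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_log: list[SearchAuditEntry] = []

def log_search(query: str, repositories: list[str], hits: int, skipped: list[str]) -> None:
    """Append one search step so the adequacy of the search can be reconstructed later."""
    audit_log.append(SearchAuditEntry(query, repositories, hits, skipped))

# If a requester challenges adequacy, the log answers the technical questions
# directly: what was queried, where, and what was not reachable.
log_search("budget AND 'review tool'", ["email-archive", "shared-drive"], hits=12,
           skipped=["legacy-imaging-system (unparsed scanned PDFs)"])
```

The point of the `skipped` field is that "what the search could not see" is exactly the evidence an agency needs when responsive records turn out to live in a repository the retrieval layer couldn't parse.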
AI redaction is only defensible if the review trail is reconstructable
AI-assisted redaction is probably where the stakes are highest. Over-redact and you're withholding information the public has a right to see. Under-redact and you may be releasing protected personal data, law-enforcement-sensitive material, or national security information.
Manual redaction has a paper trail: the reviewer saw the passage, applied a specific exemption, made a judgment call. With AI-assisted redaction, the exemption logic runs inside the tool. When challenged — and FOIA redactions are challenged — the agency needs to explain that logic. That requires reconstructable review trails: what the model saw, what decision it made, and what rule it applied.
Most FOIA offices deploying these tools right now are thinking about throughput. The audit trail comes later, usually when there's already a dispute.
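What a reconstructable review trail might record is straightforward: a minimal sketch, with hypothetical field names and an illustrative exemption label (not a real agency's schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class RedactionDecision:
    """Reconstructable record of one AI-assisted redaction (hypothetical schema)."""
    document_id: str
    span: tuple            # character offsets of the redacted passage
    exemption: str         # which FOIA exemption the tool applied, e.g. "Exemption 6"
    model_rationale: str   # what the model reported as its basis for the decision
    reviewer: Optional[str]  # the human who confirmed the call, if anyone did

def redaction_trail(decisions: list, document_id: str) -> list:
    """Everything the agency would need to defend redactions in one document."""
    return [d for d in decisions if d.document_id == document_id]

decisions = [
    RedactionDecision("doc-0042", (120, 188), "Exemption 6",
                      "passage contains a home address", reviewer="analyst-7"),
]
trail = redaction_trail(decisions, "doc-0042")
```

The design choice worth noting: each record captures what the model saw (the span), what rule it applied (the exemption), and who signed off. That is the trail a challenged redaction has to survive on.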
Tooling costs and contractor use open a new legal front
The Michigan situation is one case — but it's not going to be the last. When agencies bring third-party AI tools into FOIA processing, several questions open up at once:
- Is the tooling cost a permissible FOIA fee, or does the agency absorb it?
- Does contractor use need to be disclosed on the invoice?
- If a contractor processes records, does that create privilege or chain-of-custody complications?
- Were agency employees capable of doing the work without the tool? Michigan law requires agencies to answer yes before contracting out.
PEER's information request about EPA's use of AI in chemical assessments (National Law Review) signals that oversight organizations are already watching for exactly these issues at the federal level.
The real bottleneck is the records layer underneath the AI
AI can only find records that are findable. If an agency's documents are scattered across file shares, email systems, shared drives, and legacy repositories with inconsistent metadata — which describes most government IT environments — an AI-assisted search is only as good as the records architecture underneath it.
Scanned PDFs that can't be parsed. Emails with no subject lines. Documents versioned informally as "final_v3_FINAL_revised.docx." Policies contradicted by later policies that nobody reconciled.
The AI can process faster. Speed applied to a broken records layer produces faster wrong answers. This is the FOIA equivalent of a knowledge management failure, and it won't be solved by adding a better model on top.
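The records-layer problems above are checkable before any model is deployed. A minimal sketch of such an inventory pass, assuming hypothetical document fields and illustrative heuristics:

```python
# Inventory a document set and flag records an AI-assisted search cannot
# reliably handle. The dict fields and checks here are illustrative only.

def records_layer_report(docs: list) -> dict:
    """Count documents that would degrade an AI-assisted search."""
    issues = {"no_text": 0, "no_metadata": 0, "informal_version": 0}
    for doc in docs:
        if not doc.get("extracted_text"):                 # e.g. an unparsed scanned PDF
            issues["no_text"] += 1
        if not doc.get("subject") and not doc.get("title"):  # no usable metadata
            issues["no_metadata"] += 1
        name = doc.get("filename", "").lower()
        if name.count("final") >= 1 and "revised" in name:   # "final_v3_FINAL_revised.docx"
            issues["informal_version"] += 1
    return issues

report = records_layer_report([
    {"filename": "scan_001.pdf", "extracted_text": ""},
    {"filename": "final_v3_FINAL_revised.docx",
     "extracted_text": "body text", "title": "Policy"},
])
```

A report like this won't fix the records layer, but it quantifies the gap between what the AI will be asked to search and what is actually searchable.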
What this means for regulated document workflows
The governance problems FOIA is hitting aren't exclusive to government. The same failure mode shows up anywhere AI touches regulated document review.
- eDiscovery: which documents did the AI assess as non-responsive, and how was that determination made?
- Data subject access requests: is the AI finding all personal data across every system it should be searching?
- Internal investigations: can you reconstruct what the AI-assisted review actually examined?
- Compliance reviews: if an auditor challenges a finding, can you show what the model read?
What holds across all of these is the same requirement: clean repositories with consistent structure, parsing that handles messy legacy formats, source attribution on every retrieval, and review trails that survive a legal challenge. As we've written before, guardrails aren't enough when regulators need you to prove what your AI actually saw.
For organizations building AI into document review workflows, the knowledge layer has to come before the AI layer. A platform like Mojar AI handles scanned PDFs and legacy document formats, attributes sources on every retrieval, and detects contradictions across document sets — the architecture that makes AI-assisted review auditable, not just faster.
What to watch
DOGE-era staffing cuts are widening federal FOIA backlogs faster than AI adoption can offset them. The NexGen Showcase will surface what vendors are pitching to NARA. The Michigan controversy will likely produce a formal legal opinion on whether AI tooling charges are permissible under state public-records law. And at least one federal agency will face a lawsuit over AI-assisted redaction within the next 18 months.
The governance question is no longer theoretical. It's in the cost estimates already going out to requesters.