Epic Reads It. Oracle Writes It. Nobody's Checking If It's True.
Oracle's Clinical AI Agent now writes emergency department notes from prior records. What happens when AI reads AI-generated documentation — and nobody validates the loop?
Oracle Health announced at HIMSS26 this week that its Clinical AI Agent is now generally available for inpatient and emergency department settings across the US. AtlantiCare, a New Jersey health system, reported a 41% reduction in documentation time after deployment. Across all health systems using the product, the agent has saved more than 200,000 physician hours since launch. This is a real product doing real work. The efficiency case is closed.
That's the good news. Here's the part nobody connected.
The loop
Oracle's Clinical AI Agent generates notes by pulling from the existing EHR record — triage notes, medical history, labs, imaging results, overnight events, prior days' notes, medication updates. It synthesizes this with semantic reasoning and drafts today's clinical documentation.
The same afternoon Oracle made its HIMSS26 announcement, David Lareau, CEO of Medicomp Systems, walked to a different stage and described what happens when that chain breaks. "When AI-generated outputs are accepted without validation," he told attendees, "unsupported diagnoses, incomplete clinical context, and subtle inaccuracies can become part of the permanent record."
The permanent record. Not a draft. Not a suggestion flagged for physician review. The document that every future clinician, every future AI agent, every care transition summary will read as authoritative clinical truth.
Think about what this means structurally. Oracle reads from the EHR to write today's note. That note enters the EHR. Tomorrow, Oracle reads from the EHR again — now including yesterday's AI-generated note — to write tomorrow's note. If the first note contained a subtle inaccuracy, the second note inherits it. The third compounds it further. No external correction signal exists. The system is confident at every step.
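To make the structure concrete, here is a minimal sketch of that loop in Python. It is not Oracle's pipeline, and every name in it is hypothetical; it only shows the shape of the problem: each day's note is generated from whatever is already in the chart, including yesterday's generated note, and nothing checks the draft before it becomes the record.

```python
# Purely illustrative sketch of the read-write loop described above.
# All names are hypothetical; this is not any vendor's implementation.

from dataclasses import dataclass, field


@dataclass
class Chart:
    """The permanent record: every note written so far."""
    notes: list = field(default_factory=list)


def generate_note(chart: Chart, day: int) -> str:
    """Stand-in for the documentation agent: today's note is synthesized
    from what is already in the chart, including prior AI-generated notes."""
    context = " | ".join(chart.notes[-3:])  # reads the recent record
    return f"Day {day} note (based on: {context or 'triage intake'})"


chart = Chart()
# A subtle, unsupported inference enters the record once.
chart.notes.append("Triage: possible UTI (unconfirmed)")

for day in range(1, 4):
    note = generate_note(chart, day)
    chart.notes.append(note)  # no validation step: the draft becomes the record

# By day 3 the unconfirmed inference is several citations deep
# and reads as established history.
for n in chart.notes:
    print(n)
```

The point of the sketch is the missing line, not the ones that are there: nowhere between `generate_note` and the append is anything asked to confirm that the draft is supported by evidence outside the loop.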
This is the healthcare knowledge pollution loop. At HIMSS26 this week, nobody named it.
Why this wasn't visible before
Until recently, AI only read clinical documents. Writing was still a human task — tedious, time-consuming, but human. Automation came first to the reading side because it was easier: surfacing relevant history, flagging drug interactions, summarizing prior visits. Writing automation came later, and it arrived fast.
Lareau identified the specific failure mode that writing automation introduces. "An AI model may infer a condition based on incomplete contextual cues, summarize findings in ways that omit clinically relevant negatives, or generate documentation that appears coherent but lacks evidentiary support in the chart." Then the finding that should concern every system administrator rolling out clinical AI: "Because LLMs can produce different outputs from identical inputs, clinicians and organizations may have limited visibility into when information has drifted from the clinical truth."
Coherent but unsupported. In emergency medicine, that's a patient safety risk wearing a confidence mask.
The problem compounds at care transitions. Dr. Hamad Husainy, CMO of PointClickCare, called closing "information gaps across care transitions" the most important AI development in healthcare right now. He's describing the exact point where compounded errors cause the most damage. A patient leaves the emergency department for an inpatient bed. The hospitalist reads the ED note — AI-generated, subtly inaccurate — and builds on it. The patient transfers to a post-acute facility. The rehab team reads the inpatient note, which was built on the ED note. At each handoff, the error isn't caught; it gets cited.
Earlier this week, we covered Day 1 at HIMSS26 — how AI systems reading clinical documents surface their own accuracy problems. That piece was about AI reading and acting on clinical records. Oracle's announcement is a different failure vector: AI writing those records. Same conference, different problem, same missing layer underneath.
Seema Verma, EVP of Oracle Health, described the agent as designed to "automate draft note generation, reduce administrative load, and enable teams to stay present with patients while maintaining thorough, accurate records." The last part is where the gap lives. The agent's stated goal is accurate records. The question is whether the system has any mechanism to detect when the records it draws from have already drifted from accuracy — before they become the foundation for the next generated document.
The layer nobody is selling
Lareau was unusually direct for a conference speaker. "Leaders should also evaluate their data foundation and terminology infrastructure," he said. "AI systems operate on the data they receive."
He's describing a validation layer between what AI writes and what enters the permanent record. Something that checks whether the documents feeding the generation model are accurate, current, and internally consistent — before the output goes in the chart.
Oracle has the writing automation. Epic has agent orchestration on top of clinical workflows. PointClickCare is focused on care transition intelligence. None of them are selling the layer that sits between AI-generated content and the authoritative knowledge base it will later be read from. Mojar AI is built for exactly this: active knowledge management that detects when documents have drifted from the source of truth, surfaces contradictions across records, and maintains the accuracy of the document layer that clinical AI depends on.
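As a sketch of what that gate might look like, here is a deliberately crude version in Python: before a draft is filed, each of its claims is checked for supporting evidence in the source chart, and anything unsupported is routed to review instead of the permanent record. The function names and the substring heuristic are hypothetical stand-ins for real clinical NLP and terminology mapping, not Mojar's or anyone else's implementation.

```python
# A deliberately small sketch of the missing layer: a gate between the
# generated draft and the permanent record. Names and heuristics are
# hypothetical, not any vendor's implementation.

def supported(claim: str, source_documents: list) -> bool:
    """Crude evidence check: does any source document mention the claim's key terms?
    A real system would use clinical NLP and terminology mapping, not substrings."""
    terms = [t for t in claim.lower().split() if len(t) > 4]
    return any(all(t in doc.lower() for t in terms) for doc in source_documents)


def validate_draft(draft_claims: list, source_documents: list) -> dict:
    """Partition a draft note's claims into supported and needs-review before filing."""
    report = {"supported": [], "needs_review": []}
    for claim in draft_claims:
        bucket = "supported" if supported(claim, source_documents) else "needs_review"
        report[bucket].append(claim)
    return report


chart = ["Labs: WBC 12.3, urinalysis pending", "Triage: dysuria reported"]
draft = ["Urinalysis pending", "History of recurrent urinary infection"]  # no source for the second claim

print(validate_draft(draft, chart))
# {'supported': ['Urinalysis pending'], 'needs_review': ['History of recurrent urinary infection']}
```

The interesting design choice is where this runs: after generation but before the note is committed to the chart, so the loop described above has at least one external correction signal.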
The question HIMSS26 didn't answer
Oracle's figure of 200,000 saved physician hours is real. The efficiency case for clinical AI documentation is strong and getting stronger. Nobody at HIMSS26 this week was seriously arguing against deploying these systems — the ROI is too clear, and the administrative burden on physicians is too well-documented.
But efficiency and accuracy are separate questions. The first one has been answered. At this conference, in the same afternoon, two vendors laid out the pieces: a loop that could make errors permanent, and a warning that organizations have limited visibility into when it's happening.
The deployment question has been answered. The accuracy question is still open.
Sources: Oracle Health Clinical AI Agent press release | Becker's Hospital Review | Healthcare IT News — Medicomp CEO | Healthcare IT News — PointClickCare CMO