Industry News

ECRI Just Named AI the No. 1 Patient Safety Risk. The Missing Problem Is What Clinical AI Reads

ECRI ranked AI diagnostic risk #1 for 2026. With 81% of physicians using AI, the safety debate focuses on model bias. It's missing something bigger.

5 min read • March 19, 2026
healthcare AI • patient safety • ECRI • clinical AI • knowledge management

What happened

On March 18, ECRI published its annual Top 10 Patient Safety Concerns for 2026 and put "Navigating the AI Diagnostic Dilemma" at the top of the list. Not #3. Not #7. First.

ECRI carries real weight here. This is the organization health systems use to evaluate medical devices, set safety benchmarks, and inform accreditation prep. When it ranks something #1, quality directors and compliance teams pay attention.

The concern is specific: AI is being used in diagnostic workflows before the field has figured out how to validate it at scale. ECRI's framing captures part of the problem — AI models are only as reliable as the algorithms behind them and the data on which they're trained. What the framing doesn't capture is where the data comes from once an AI is actually deployed and running in a hospital.

Why this matters right now

Physician AI adoption has moved from experiment to norm faster than most people outside the field appreciate. According to a March 2026 AMA survey, 81% of U.S. physicians now use AI professionally — up from 38% in 2023 (AMA / GlobeNewswire). The average physician uses 2.3 AI applications today, compared to 1.1 three years ago (Fierce Healthcare).

These aren't early adopters. This is a field-wide shift. The use cases driving adoption aren't exotic, either — physicians are using AI most heavily for medical research summarization and clinical care documentation, two workflows that depend entirely on what the AI is reading at the moment it generates a response.

88% of physicians still cite safety and efficacy validation as critical before they'd expand AI use further (AMA). That instinct is right. But the industry's answer to "validation" has mostly been about model performance: how the model was trained, what benchmarks it hit, whether it cleared FDA review.

Those are necessary questions. They're not sufficient ones.

The missing layer: what clinical AI actually reads

Two separate things can go wrong when AI produces a dangerous clinical output.

The first is model failure. The algorithm is wrong — biased training data, flawed architecture, inadequate testing. This is what most safety frameworks address, and it's getting serious regulatory attention.

The second is document failure. The model is working exactly as designed, but the knowledge it's retrieving or summarizing is outdated, contradictory, or incomplete. This problem barely shows up in the current safety conversation.

Many practical AI workflows in healthcare — summarization assistants, clinical documentation tools, policy Q&A systems, care coordination support — don't run purely on pretrained model weights. They pull from live documents at query time. Internal treatment protocols. Formularies. Admission criteria. Discharge guidance. Care pathway documentation. When a physician asks an AI tool to summarize the relevant standard of care for a given situation, what the tool reads matters as much as what the model knows.
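To make that dependency concrete, here is a minimal sketch of the retrieval step these workflows share. Everything in it is hypothetical (the PolicyDocument type, the naive keyword match standing in for vector search), but the structural point holds: the model summarizes whatever comes back at query time, and nothing in this path asks whether it's current.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PolicyDocument:
    """An internal document a clinical AI tool reads at query time."""
    title: str
    topic: str            # e.g. "anticoagulant dosing", "post-procedure care"
    body: str
    last_reviewed: date   # when a human last confirmed this version is current

def retrieve(question: str, store: list[PolicyDocument]) -> list[PolicyDocument]:
    """Naive keyword overlap, standing in for vector search. Note what's
    absent: nothing here asks how old a document is."""
    terms = set(question.lower().split())
    return [d for d in store if terms & set(d.body.lower().split())]

def build_context(question: str, store: list[PolicyDocument]) -> str:
    """The text the model is asked to summarize. Its accuracy is capped by
    the accuracy of whatever retrieve() returned, current or not."""
    return "\n\n".join(f"## {d.title}\n{d.body}"
                       for d in retrieve(question, store))
```

Whether `body` reflects this quarter's formulary or last year's is invisible at this layer, which is the whole problem.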

Here's the problem: those documents change. Guidelines update. Formularies shift quarterly. Treatment protocols get revised after adverse events. In most health systems, no process exists to audit whether the documents feeding clinical AI reflect the current standard — or a version from 18 months ago.

We saw exactly this dynamic at HIMSS26, where clinical AI deployments were accumulating faster than anyone had mapped their documentation dependencies. Epic's agent infrastructure was expanding across 85% of U.S. healthcare while the knowledge accuracy question stayed unanswered. Oracle Health's clinical documentation workflow raised another version of the same problem: what happens when AI reads AI-generated notes, and nobody validates the chain. In both cases the model worked fine. The risk was in the inputs.

What document failure looks like in practice

This isn't hypothetical. Consider what goes wrong when an AI assistant retrieves stale guidance:

A physician asks the system to pull relevant dosing information for a medication class. The source document is an internal formulary that hasn't been updated since the dosing guidance was revised nine months ago. The AI produces an accurate-sounding answer. The dosage it cites is no longer current.

That's not hallucination. The model performed correctly. The document failed.
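That particular failure is cheap to guard against mechanically, if anyone owns the check. Continuing the hypothetical sketch above, the retrieval layer can gate on review dates and refuse to answer silently from anything past its window (the 90-day window here is an assumption, not a standard):

```python
from datetime import date, timedelta

MAX_AGE = timedelta(days=90)  # assumed review cadence; set per document type

class StaleSourceError(RuntimeError):
    """Raised rather than summarizing from documents past their review window."""

def retrieve_fresh(question: str, store: list[PolicyDocument],
                   today: date | None = None) -> list[PolicyDocument]:
    """Same retrieval as before, but stale sources halt the answer and get
    surfaced to a human instead of being paraphrased with confidence."""
    today = today or date.today()
    docs = retrieve(question, store)
    stale = [d for d in docs if today - d.last_reviewed > MAX_AGE]
    if stale:
        raise StaleSourceError(
            "Past review window: " + ", ".join(d.title for d in stale))
    return docs
```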

Or consider contradictions: two departments maintain separate SOPs that diverge on post-procedure protocol after a care pathway was updated. The AI summarizes from both. The answer it returns is coherent but internally inconsistent, drawn from two documents that have never been reconciled.

Active knowledge management for clinical AI means treating source documentation as a live system that needs governance, not a static repository (a sketch of the first two items follows the list):

  • Auditing documents for staleness when external guidelines change
  • Detecting contradictions across internal documentation
  • Maintaining current source material so AI systems aren't reading yesterday's protocols
  • Treating document updates as a clinical safety event, not a routine IT task
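As a rough sketch of what the first two items could look like as a recurring job, again using the hypothetical PolicyDocument from earlier. Note that software can only queue candidates for reconciliation; deciding whether two protocols actually contradict is clinical work.

```python
from collections import defaultdict
from datetime import date, timedelta
from itertools import combinations

def staleness_report(store: list[PolicyDocument], today: date,
                     max_age: timedelta) -> list[PolicyDocument]:
    """First bullet: documents nobody has reviewed within the window."""
    return [d for d in store if today - d.last_reviewed > max_age]

def reconciliation_queue(store: list[PolicyDocument]) -> list[tuple[str, str]]:
    """Second bullet, crudely: same-topic documents whose review dates
    diverge. Divergence isn't proof of contradiction; it's a prompt for
    clinical review, which no script can do."""
    by_topic: dict[str, list[PolicyDocument]] = defaultdict(list)
    for d in store:
        by_topic[d.topic].append(d)
    queue: list[tuple[str, str]] = []
    for docs in by_topic.values():
        for a, b in combinations(docs, 2):
            if abs((a.last_reviewed - b.last_reviewed).days) > 30:
                queue.append((a.title, b.title))
    return queue
```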

This sits in the gap between model validation and conventional document management — which is precisely why it's easy to miss and nobody owns it yet.

What healthcare leaders should watch next

ECRI has a webinar scheduled for March 20 specifically on AI diagnostic risk. Expect another round of coverage focused on model validation, bias testing, and FDA pathways. That conversation is useful and necessary.

The harder question — governance of what clinical AI retrieves — doesn't have a framework yet. At HIMSS26, federal officials admitted healthcare AI operates with "few guardrails." That admission covered AI deployment broadly. It applies with particular force to the documentation layer, where the absence of standards is nearly complete.

Health systems serious about the ECRI finding will need to go beyond model validation. Auditing the accuracy of what their AI systems are actually reading is part of the answer. Mojar's view on this isn't novel: if the knowledge layer is broken, a well-validated model still produces wrong answers. Right now, very few organizations are auditing that layer at all.

ECRI named the risk. The full picture includes what the model is reading when it answers.

Related Resources

  • Epic's Agent Factory Will Deploy AI Across 85% of US Healthcare. Who's Keeping the Knowledge Accurate?
  • HIMSS26 Closed With a Confession: Healthcare AI Has 'Few Guardrails'
  • Epic Reads It. Oracle Writes It. Nobody's Checking If It's True.