Industry News

The Real Radiology AI Debate Is Not Replacement. It's Accountability.

NYC Health + Hospitals CEO says he's ready to replace radiologists with AI. The real question isn't replacement — it's whether hospitals have the governance infrastructure to back that claim.

6 min read • April 1, 2026
Healthcare AI · Radiology AI · AI Governance · Clinical AI · Knowledge Management

What happened

Mitchell Katz, CEO of NYC Health + Hospitals — the largest public hospital system in the United States — said publicly that he is ready to replace "a great deal of radiologists with AI" once regulation allows it, Radiology Business reported.

The comment spread fast. Moneycontrol framed it as a direct conflict between cost and efficiency on one side, patient safety on the other. Radiologists pushed back online, warning that AI-only reads could risk patient outcomes.

This isn't the first time a hospital executive has gestured toward AI-assisted radiology. But there's a difference between "AI helps our radiologists" and "AI can replace a great deal of them." That difference matters enormously when things go wrong.

Why the language shift is worth taking seriously

Healthcare AI has operated under an augmentation frame for years: AI flags the anomaly, the physician makes the call, accountability stays with the human. Substitution language changes that frame entirely. If AI replaces the radiologist, AI is making the call. And once AI is making calls that shape diagnosis, the question of accountability — who answers when something goes wrong — becomes unavoidable.

That question isn't primarily about the model. It's about what the model reads, what protocols it applies in specific institutional contexts, and whether the supporting infrastructure makes any outcome auditable.

According to NVIDIA's 2026 State of AI in Healthcare survey, 70% of healthcare organizations are actively using AI, up from 63% in 2024. Medical imaging is among the most mature use cases — AI is already reading mammograms and chest X-rays in hospitals. The technology is real. The governance question is not settled.

What hospitals would actually need before this is credible

Strong benchmark performance on imaging tasks is table stakes. Making substitution credible — not just rhetorically bold — requires a different set of answers.

When an AI system reaches a finding, what protocol did it apply? Which version? From which source document? Clinical protocols change. Vendor guidance updates. A radiology AI system reading last year's procedure manual in a substitution scenario is a liability, not a solution.
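To make that concrete: a minimal sketch (hypothetical field names, not Mojar's schema or any vendor's API) of a finding record that carries the protocol, version, and source document it was grounded on could look like this.

```python
# Illustrative only: a provenance record attached to every AI finding, so
# "which protocol, which version, from which source document" stays answerable.
# All names and IDs here are invented for the example.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ProtocolRef:
    document_id: str      # internal ID of the source document
    version: str          # the exact revision the read was grounded on
    effective_date: str   # when that revision took effect

@dataclass(frozen=True)
class Finding:
    study_id: str
    impression: str
    model_version: str
    applied_protocols: tuple[ProtocolRef, ...]
    produced_at: str

finding = Finding(
    study_id="CXR-2026-000123",
    impression="No acute cardiopulmonary abnormality.",
    model_version="cxr-reader-3.2.1",
    applied_protocols=(
        ProtocolRef("chest-xray-read-protocol", "v7", "2026-01-15"),
    ),
    produced_at=datetime.now(timezone.utc).isoformat(),
)
print(finding.applied_protocols[0].version)  # -> "v7"
```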

Remove the radiologist and you also remove the default escalation layer. In augmentation workflows, the physician is who you escalate to when the AI isn't confident. In a substitution model, that escalation path needs to be defined, documented, and auditable before it's needed. Who gets flagged? On what criteria? Based on what documented threshold?
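As an illustration only, with placeholder thresholds and routing targets rather than clinical guidance, documented escalation logic can be an explicit, reviewable rule rather than an ad hoc judgment:

```python
# Hypothetical escalation rules: who gets flagged, on what criteria, at what
# documented threshold. The targets and the 0.90 threshold are placeholders.
def escalation_targets(confidence: float, critical_finding: bool,
                       confidence_threshold: float = 0.90) -> list[str]:
    """Return every documented target this AI read must be escalated to."""
    targets = []
    if critical_finding:
        targets.append("attending physician")   # critical findings always get a human
    if confidence < confidence_threshold:
        targets.append("on-call radiologist")   # low-confidence reads go to a human
    return targets

print(escalation_targets(confidence=0.82, critical_finding=False))  # ['on-call radiologist']
print(escalation_targets(confidence=0.97, critical_finding=True))   # ['attending physician']
```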

Then there's the contradiction problem. Hospital policy documents, vendor specifications, local clinical guidelines, and regulatory requirements often conflict with each other in ways nobody has fully mapped. In low-stakes AI applications, that's an annoyance. In diagnostic AI, an undetected contradiction in the knowledge base is a patient safety event waiting to happen.
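A deliberately oversimplified sketch of what contradiction detection means in practice (real systems need semantic matching; the documents and values below are invented) is to key directives by topic and surface disagreements instead of silently picking one source:

```python
# Toy contradiction check: directives from different documents are grouped by
# topic, and any topic with conflicting instructions is flagged for review.
from collections import defaultdict

directives = [
    {"doc": "hospital-policy-2025", "topic": "contrast-allergy-premedication", "value": "13-hour oral protocol"},
    {"doc": "vendor-guidance-v4",   "topic": "contrast-allergy-premedication", "value": "5-hour IV protocol"},
    {"doc": "local-guideline-2026", "topic": "incidental-nodule-followup",     "value": "Fleischner criteria"},
]

def find_contradictions(items):
    by_topic = defaultdict(list)
    for d in items:
        by_topic[d["topic"]].append(d)
    conflicts = []
    for topic, docs in by_topic.items():
        if len({d["value"] for d in docs}) > 1:   # same topic, different instructions
            conflicts.append((topic, [(d["doc"], d["value"]) for d in docs]))
    return conflicts

for topic, sources in find_contradictions(directives):
    print(f"CONFLICT on {topic}:")
    for doc, value in sources:
        print(f"  {doc}: {value}")
```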

And if an outcome is poor — if a finding was missed or a decision was wrong — the hospital needs to reconstruct what the AI saw, what it read, what it applied, and what it produced. Without a retrievable audit trail, accountability isn't just difficult. It's impossible.
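At its simplest, a retrievable audit trail is an append-only record of exactly those four things. The sketch below assumes hypothetical field names and a JSONL log file; it is not a standard or a specific product's schema.

```python
# Minimal audit record: what the system saw (inputs), read (documents),
# applied (protocol versions), and produced (output), written append-only
# so the chain can be reconstructed later. Field names are invented.
import hashlib
import json
from datetime import datetime, timezone

def write_audit_record(log_path, study_id, inputs, documents_read,
                       protocols_applied, output):
    record = {
        "study_id": study_id,
        "inputs": inputs,                        # what the AI saw
        "documents_read": documents_read,        # what it read
        "protocols_applied": protocols_applied,  # what it applied
        "output": output,                        # what it produced
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(log_path, "a") as f:               # append-only: prior records are never rewritten
        f.write(json.dumps(record) + "\n")
    return record["record_hash"]

write_audit_record(
    "radiology_audit.jsonl",
    study_id="CXR-2026-000123",
    inputs=["chest PA view, accession 98765"],
    documents_read=["chest-xray-read-protocol v7"],
    protocols_applied=["chest-xray-read-protocol v7"],
    output="No acute cardiopulmonary abnormality.",
)
```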

None of these are model problems. They are knowledge infrastructure problems. Most hospital systems haven't solved them at the level current augmentation deployments require, let alone at the level substitution would demand.

This is a document problem as much as a model problem

The public argument about healthcare AI tends to focus on model performance: accuracy rates, false positive comparisons, benchmark results. Those metrics matter. But they don't tell you whether the model is operating on current protocols, whether the institutional knowledge it draws on is consistent, or whether anyone could reconstruct the reasoning chain after the fact.

Federal officials at HIMSS26 in March acknowledged that healthcare AI is running with "few guardrails" — a candid admission when over 1,300 AI-enabled medical devices are already FDA-authorized. ECRI named AI the No. 1 patient safety risk for 2026, and part of what makes that risk real is that the documentation layer beneath clinical AI deployments is often inconsistent, outdated, or ungoverned.

Radiology is document-heavy by nature: imaging protocols, procedure guidelines, escalation criteria for specific findings, local policy documents, vendor-specific instructions for each modality. When AI shapes or replaces radiologist judgment, all of that documentation becomes part of the accountability chain. Most of it isn't managed at a standard that would hold up under scrutiny.

The same pattern appeared with AI scribes: the model conversation dominated while the documentation quality beneath it stayed broken. Radiology AI is the same dynamic at higher clinical stakes.

The knowledge governance question that's not being asked

If hospital leadership wants to talk publicly about AI substitution, it also needs to talk publicly about the governed evidence and protocol infrastructure that makes accountability possible.

That means source-attributed outputs — every finding traceable to the policy or guideline applied. It means contradiction-aware retrieval — surfacing conflicts between documents before they silently shape AI decisions. It means current protocol access: not last year's procedure manual, but the version in effect today. And it means audit records that survive long enough to be useful when they're needed.
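The current-protocol piece is, at bottom, a version-resolution problem: which revision was in effect on the date of the read. A rough sketch, with a placeholder version table standing in for a governed document store:

```python
# Resolve the protocol version in effect on a given date, rather than
# whichever copy happened to be indexed. The version table is a placeholder;
# a real deployment would source it from a governed document store.
from datetime import date

PROTOCOL_VERSIONS = {
    "chest-xray-read-protocol": [
        ("v6", date(2025, 3, 1)),
        ("v7", date(2026, 1, 15)),   # listed oldest to newest
    ],
}

def version_in_effect(protocol_id: str, on: date) -> str:
    versions = [v for v, effective in PROTOCOL_VERSIONS[protocol_id] if effective <= on]
    if not versions:
        raise LookupError(f"No version of {protocol_id} in effect on {on}")
    return versions[-1]   # most recent revision at or before the date

print(version_in_effect("chest-xray-read-protocol", date(2026, 4, 1)))  # -> "v7"
print(version_in_effect("chest-xray-read-protocol", date(2025, 6, 1)))  # -> "v6"
```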

This is where the knowledge layer becomes the real question — not whether the model can read a scan accurately, but whether the institutional knowledge the model operates on is current, consistent, and traceable. Platforms like Mojar AI are built for exactly this infrastructure layer: source-attributed retrieval, contradiction detection across documents, version control for clinical protocols, and audit records that make accountability possible after the fact.

Diagnostic AI that's accurate but operating on ungoverned knowledge isn't safer because the model is good. It's just wrong in harder-to-catch ways.

What to watch

Radiologist professional organizations, hospital operators, and regulators will respond to Katz's comments in the coming days. Watch whether the response focuses on model capability — the easier argument — or pushes back on governance readiness, which is the harder and more important conversation.

What Katz said is probably closer to an opening negotiating position than an operational plan. But the direction is clear: hospital leadership is beginning to think about substitution, not just assistance. The governance infrastructure question hasn't caught up. That gap is the real story here, and nobody's protesting it yet.

Frequently Asked Questions

Can AI replace radiologists today?

Current AI shows strong results in narrow imaging tasks like mammography screening. But replacing radiologists requires more than model accuracy — it requires protocol governance, evidence provenance, escalation workflows, and auditable accountability chains that most hospital systems haven't built.

What governance infrastructure does clinical AI need?

Clinical AI workflows need current protocol access, source-attributed outputs, contradiction detection across policy documents, versioned audit trails, and clear escalation logic. Without these, AI outputs touching diagnosis lack accountability even when the model itself performs well.

What is governed knowledge in healthcare AI?

Governed knowledge is the documentation layer beneath clinical AI — protocols, vendor guidance, escalation criteria, local policies — that is version-controlled, source-attributed, and kept current. AI operating on ungoverned knowledge can be accurate on a benchmark and wrong in practice.

Related Resources

  • ECRI Just Named AI the No. 1 Patient Safety Risk. The Missing Problem Is What Clinical AI Reads
  • Healthcare AI Has a Trust Infrastructure Problem
  • HIMSS26 Closed With a Confession: Healthcare AI Has 'Few Guardrails'